
machine learning | Breaking Cybersecurity News | The Hacker News

Category — machine learning
(Cyber) Risk = Probability of Occurrence x Damage

May 15, 2024 Threat Detection / Cybersecurity
Here's How to Enhance Your Cyber Resilience with CVSS

In late 2023, the Common Vulnerability Scoring System (CVSS) v4.0 was unveiled, succeeding the eight-year-old CVSS v3.0, with the aim of enhancing vulnerability assessment for both industry and the public. The latest version introduces additional metrics, such as safety and automation, to address criticism of insufficient granularity, and presents a revised scoring system for a more comprehensive evaluation. It further emphasizes the importance of considering environmental and threat metrics alongside the base score to assess vulnerabilities accurately. Why does it matter? The primary purpose of the CVSS is to evaluate the risk associated with a vulnerability. Some vulnerabilities, particularly those found in network products, present a clear and significant risk, as unauthenticated attackers can easily exploit them to gain remote control over affected systems. These vulnerabilities have frequently been exploited over the years…
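To make the headline formula concrete, here is a minimal Python sketch of Risk = Probability of Occurrence x Damage used to rank vulnerabilities by expected annual loss. The CVE names, probabilities, and damage figures are invented for illustration and are not CVSS values.

```python
# Minimal sketch: Risk = Probability of Occurrence x Damage.
# All figures below are illustrative assumptions, not CVSS scores.

vulnerabilities = [
    # (name, probability of exploitation per year, damage in USD if exploited)
    ("CVE-A: unauthenticated RCE in edge device", 0.30, 2_000_000),
    ("CVE-B: local privilege escalation",         0.05,   250_000),
    ("CVE-C: info leak in internal tool",         0.10,    40_000),
]

def risk(probability: float, damage: float) -> float:
    """Expected annual loss: probability of occurrence times damage."""
    return probability * damage

# Rank remediation work by expected loss, highest first.
for name, p, dmg in sorted(vulnerabilities, key=lambda v: -risk(v[1], v[2])):
    print(f"{name}: expected loss ${risk(p, dmg):,.0f}/year")
```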
6 Mistakes Organizations Make When Deploying Advanced Authentication

May 14, 2024 Cyber Threat / Machine Learning
Deploying advanced authentication measures is key to helping organizations address their weakest cybersecurity link: their human users. Having some form of two-factor authentication in place is a great start, but many organizations may not yet be in that spot, or may lack the level of authentication sophistication needed to adequately safeguard organizational data. When deploying advanced authentication measures, organizations can make mistakes, and it is crucial to be aware of these potential pitfalls. 1. Failing to conduct a risk assessment. A comprehensive risk assessment is a vital first step in any authentication implementation. An organization leaves itself open to risk if it fails to assess current threats and vulnerabilities, its systems and processes, or the level of protection required for different applications and data. Not all applications demand the same level of security; for example, an application that handles sensitive customer information or financials may require stronger protections…
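As a toy illustration of matching authentication strength to application sensitivity, the sketch below encodes a tiered policy. The tiers, example applications, and factor requirements are invented for the example, not taken from the article.

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    LOW = 1       # e.g., internal wiki
    MEDIUM = 2    # e.g., HR portal
    HIGH = 3      # e.g., financial or customer-data systems

# Hypothetical policy: minimum number of authentication factors per tier,
# plus a phishing-resistant requirement (hardware keys / passkeys) for HIGH.
REQUIRED_FACTORS = {Sensitivity.LOW: 1, Sensitivity.MEDIUM: 2, Sensitivity.HIGH: 2}
PHISHING_RESISTANT = {Sensitivity.HIGH}

def access_allowed(tier: Sensitivity, factors_used: int, phishing_resistant: bool) -> bool:
    """Gate access on the policy for the application's sensitivity tier."""
    if factors_used < REQUIRED_FACTORS[tier]:
        return False
    if tier in PHISHING_RESISTANT and not phishing_resistant:
        return False
    return True

# Two factors are not enough for HIGH if neither is phishing-resistant.
print(access_allowed(Sensitivity.HIGH, factors_used=2, phishing_resistant=False))  # False
```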
The Secret Weakness Execs Are Overlooking: Non-Human Identities

Oct 03, 2024 Enterprise Security / Cloud Security
For years, securing a company's systems was synonymous with securing its "perimeter." There was what was safe "inside" and the unsafe outside world. We built sturdy firewalls and deployed sophisticated detection systems, confident that keeping the barbarians outside the walls kept our data and systems safe. The problem is that we no longer operate within the confines of physical on-prem installations and controlled networks. Data and applications now reside in distributed cloud environments and data centers, accessed by users and devices connecting from anywhere on the planet. The walls have crumbled, and the perimeter has dissolved, opening the door to a new battlefield: identity. Identity is at the center of what the industry has praised as the new gold standard of enterprise security: "zero trust." In this paradigm, explicit trust becomes mandatory for any interaction between systems, and no implicit trust shall subsist. Every access request, regardless of its origin, must be explicitly verified…
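A minimal sketch of what "verify every request explicitly" can look like for non-human identities: each service request must present a known identity, a fresh timestamp, and a valid message authentication code. The service registry, secret, and field layout are hypothetical, intended only to show the zero-trust pattern of granting nothing implicitly.

```python
import hashlib
import hmac
import time

SERVICE_KEYS = {"billing-service": b"example-shared-secret"}  # hypothetical registry

def verify_request(service_id: str, timestamp: float, body: bytes, signature: str) -> bool:
    """Verify every request explicitly: known identity, fresh timestamp, valid MAC."""
    key = SERVICE_KEYS.get(service_id)
    if key is None:                        # unknown identity: no implicit trust
        return False
    if abs(time.time() - timestamp) > 60:  # stale request: reject replays
        return False
    msg = f"{service_id}:{timestamp}:".encode() + body
    expected = hmac.new(key, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```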
Bitcoin Forensic Analysis Uncovers Money Laundering Clusters and Criminal Proceeds

May 01, 2024 Financial Crime / Forensic Analysis
A forensic analysis of a graph dataset containing transactions on the Bitcoin blockchain has revealed clusters associated with illicit activity and money laundering, including criminal proceeds sent to a crypto exchange and previously unknown wallets belonging to a Russian darknet market. The findings come from Elliptic in collaboration with researchers from the MIT-IBM Watson AI Lab. The 26 GB dataset, dubbed Elliptic2, is a "large graph dataset containing 122K labeled subgraphs of Bitcoin clusters within a background graph consisting of 49M node clusters and 196M edge transactions," the co-authors said in a paper shared with The Hacker News. Elliptic2 builds on the Elliptic Data Set (aka Elliptic1), a transaction graph that was made public in July 2019 with the goal of combating financial crime using graph convolutional neural networks (GCNs). The idea, in a nutshell, is to uncover unlawful activity and money laundering patterns by taking advantage of the underlying graph structure…
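For readers unfamiliar with GCNs, here is a minimal sketch of the general technique: node classification on a transaction graph with a two-layer GCN, using PyTorch Geometric and random stand-in data. This is an illustration of the approach, not the Elliptic2 pipeline, and the graph, features, and labels are synthetic.

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Stand-in transaction graph: 100 wallets, 300 directed transfers, 16 features each.
x = torch.randn(100, 16)
edge_index = torch.randint(0, 100, (2, 300))
y = torch.randint(0, 2, (100,))  # 0 = licit, 1 = illicit (random labels for the demo)

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(16, 32)  # aggregate features from neighboring nodes
        self.conv2 = GCNConv(32, 2)   # per-node logits over the two classes

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)

model, data = GCN(), Data(x=x, edge_index=edge_index, y=y)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(50):  # tiny training loop
    opt.zero_grad()
    loss = F.cross_entropy(model(data.x, data.edge_index), data.y)
    loss.backward()
    opt.step()
```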
U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

Apr 30, 2024 Machine Learning / National Security
The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS) said on Monday. In addition, the agency said it's working to facilitate the safe, responsible, and trustworthy use of the technology in a manner that does not infringe on individuals' privacy, civil rights, or civil liberties. The new guidance covers the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such tools that could result in unintended consequences, necessitating transparency and secure-by-design practices to evaluate and mitigate AI risks. Specifically, this spans four different…
Google Prevented 2.28 Million Malicious Apps from Reaching Play Store in 2023

Apr 29, 2024 Mobile Security / Hacking
Google on Monday revealed that almost 200,000 app submissions to its Play Store for Android were either rejected or remediated over the past year to address issues with access to sensitive data such as location or SMS messages. The tech giant also said it blocked 333,000 bad accounts from the app storefront in 2023 for attempting to distribute malware or for repeated policy violations. "In 2023, we prevented 2.28 million policy-violating apps from being published on Google Play in part thanks to our investment in new and improved security features, policy updates, and advanced machine learning and app review processes," Google's Steve Kafka, Khawaja Shams, and Mohet Saxena said. "To help safeguard user privacy at scale, we partnered with SDK providers to limit sensitive data access and sharing, enhancing the privacy posture for over 31 SDKs impacting 790K+ apps." In comparison, Google fended off 1.43 million bad apps from being published to the Play Store in 2022…
AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

Apr 15, 2024 Secure Coding / Artificial Intelligence
Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life safe could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it has actually been a reality for years now. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on a larger role in software development is one of the big uncertainties of this brave new world. In an era where AI promises to revolutionize how we live and work, the conversation about its security implications cannot be sidelined. As we increasingly rely on AI for tasks ranging from the mundane to the mission-critical, the question is no longer just "Can AI boost cybersecurity?" (sure!), but also "Can AI be hacked?" (yes!), "Can one use AI to hack?" (of course!), and "Will AI produce secure software?" (well…). This thought leadership article is about the latter. Cydrill (a secure coding training company)…
PyPI Halts Sign-Ups Amid Surge of Malicious Package Uploads Targeting Developers

Mar 29, 2024 Supply Chain Attack / Threat Intelligence
The maintainers of the Python Package Index (PyPI) repository briefly suspended new user sign-ups following an influx of malicious projects uploaded as part of a typosquatting campaign. PyPI said "new project creation and new user registration" was temporarily halted to mitigate what it described as a "malware upload campaign." The incident was resolved 10 hours later, on March 28, 2024, at 12:56 p.m. UTC. Software supply chain security firm Checkmarx said the unidentified threat actors behind the flood targeted developers with typosquatted versions of popular packages. "This is a multi-stage attack and the malicious payload aimed to steal crypto wallets, sensitive data from browsers (cookies, extensions data, etc.), and various credentials," researchers Yehuda Gelb, Jossef Harush Kadouri, and Tzachi Zornstain said. "In addition, the malicious payload employed a persistence mechanism to survive reboots." The findings were also corroborated…
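Typosquatting works because a package name one edit away from a popular one is easy to install by mistake. A common defensive heuristic (a sketch of the general idea, not PyPI's or Checkmarx's actual tooling) is to compare candidate names against a list of popular packages using string similarity; the sample names and threshold below are illustrative.

```python
import difflib

POPULAR = {"requests", "numpy", "pandas", "urllib3", "cryptography"}  # sample names

def possible_typosquat(candidate: str, threshold: float = 0.8) -> str | None:
    """Flag names suspiciously close to (but not equal to) a popular package."""
    for name in POPULAR:
        if candidate != name:
            if difflib.SequenceMatcher(None, candidate, name).ratio() >= threshold:
                return name
    return None

for pkg in ["requestss", "nunpy", "flask"]:
    print(pkg, "->", possible_typosquat(pkg))
# requestss -> requests, nunpy -> numpy, flask -> None
```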
GitHub Launches AI-Powered Autofix Tool to Assist Devs in Patching Security Flaws

Mar 21, 2024 Machine Learning / Software Security
GitHub on Wednesday announced that it's making a feature called code scanning autofix available in public beta for all Advanced Security customers to provide targeted recommendations in an effort to avoid introducing new security issues. "Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and Python, and delivers code suggestions shown to remediate more than two-thirds of found vulnerabilities with little or no editing," GitHub's Pierre Tempel and Eric Tooley said. The capability, first previewed in November 2023, leverages a combination of CodeQL, Copilot APIs, and OpenAI GPT-4 to generate code suggestions. The Microsoft-owned subsidiary also said it plans to add support for more programming languages, including C# and Go, in the future. Code scanning autofix is designed to help developers resolve vulnerabilities as they code by generating potential fixes as well as providing explanations of them…
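To illustrate the kind of alert-plus-fix such a tool targets, here is a hand-written before/after for a classic SQL injection (CWE-89) in Python. This is an illustrative example of the remediation pattern, not actual autofix output.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so a username like "' OR '1'='1" changes the query's meaning.
    return conn.execute(
        f"SELECT id, email FROM users WHERE name = '{username}'"
    ).fetchone()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Remediated: a parameterized query keeps the data out of the SQL grammar.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```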
From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Mar 19, 2024 Generative AI / Incident Response
Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News. The findings are part of a red teaming exercise designed to uncover malicious use cases for AI technologies, which threat actors are already experimenting with to create malware code snippets, generate phishing emails, and conduct reconnaissance on potential targets. The cybersecurity firm said it submitted to an LLM a known piece of malware called STEELHOOK, associated with the APT28 hacking group, alongside its YARA rules, asking it to modify the source code to sidestep detection such that the original functionality remained intact and the generated source code was syntactically free of errors…
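To make "string-based detection" concrete, here is a toy Python sketch of a YARA-like literal-string match and a trivially rewritten variant that slips past it. The marker string and file contents are invented; the point is only that renaming identifiers defeats rules keyed on exact strings.

```python
SIGNATURE = b"steal_browser_cookies"  # invented marker a string rule might key on

def naive_scan(sample: bytes) -> bool:
    """Toy string-based rule: flag samples containing the literal marker."""
    return SIGNATURE in sample

original = b"... def steal_browser_cookies(): ..."
# Same behavior, different strings: identifier renamed, so the literal no longer appears.
rewritten = b"... def collect_session_tokens(): ..."

print(naive_scan(original))   # True  - caught
print(naive_scan(rewritten))  # False - evades the string rule
```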
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered on the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen said. "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research Environment Open Network (KREONET)…
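The pickle risk described here is easy to demonstrate: Python's pickle format lets an object specify a callable to invoke on load, so deserializing an untrusted model file can run arbitrary commands. A harmless proof of concept (the command is just an echo, standing in for a reverse shell):

```python
import os
import pickle

class Payload:
    def __reduce__(self):
        # On unpickling, pickle will call os.system with this argument.
        return (os.system, ("echo 'code ran during pickle.load'",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # prints the message: deserialization executed a command
```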
New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks

Feb 27, 2024 Supply Chain Attack / Data Security
Cybersecurity researchers have found that it's possible to compromise the Hugging Face Safetensors conversion service to ultimately hijack the models submitted by users and carry out supply chain attacks. "It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted through the conversion service," HiddenLayer said in a report published last week. This, in turn, can be accomplished using a hijacked model that's meant to be converted by the service, allowing malicious actors to request changes to any repository on the platform by masquerading as the conversion bot. Hugging Face is a popular collaboration platform that helps users host pre-trained machine learning models and datasets, as well as build, deploy, and train them. Safetensors is a format devised by the company to store tensors with security in mind, as opposed to pickle, which can execute arbitrary code when deserialized…
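For contrast with the pickle example above, here is a minimal sketch of the Safetensors format in use via the safetensors library: the file holds raw tensor data plus a small header, and loading it parses data rather than executing code. The tensor names and shapes are invented for the demo.

```python
import torch
from safetensors.torch import save_file, load_file

# Safetensors stores raw tensor data plus a small header: no code runs on load.
weights = {"linear.weight": torch.randn(4, 4), "linear.bias": torch.zeros(4)}
save_file(weights, "model.safetensors")

restored = load_file("model.safetensors")  # pure data parsing, unlike pickle.load
print(restored["linear.weight"].shape)     # torch.Size([4, 4])
```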
Three Tips to Protect Your Secrets from AI Accidents

Feb 26, 2024 Data Privacy / Machine Learning
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents demonstrate not only the rapidly evolving nature of Large Language Models, but also the evolving ways in which they can be attacked and defended. This article covers four items in that Top 10 that are most likely to contribute to the accidental disclosure of secrets such as passwords, API keys, and more. We already know that LLMs can reveal secrets, because it has happened. In early 2023, GitGuardian reported that it had found over 10 million secrets in public GitHub commits. GitHub's Copilot AI coding tool was trained on public commits, and in September 2023, researchers at the University of Hong Kong published a paper on how they created an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from its training data…
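One practical defense against secrets ending up in training data is scanning code for credential patterns before it is committed. Below is a minimal sketch of that idea; the regular expressions are deliberately simplified illustrations, and real scanners such as GitGuardian use far richer rule sets.

```python
import re

# Illustrative patterns only; production scanners use far more rules and validation.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, rx in SECRET_PATTERNS.items() if rx.search(text)]

print(scan('aws_key = "AKIAABCDEFGHIJKLMNOP"'))  # ['AWS access key']
```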
Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Feb 23, 2024 Red Teaming / Artificial Intelligence
Microsoft has released an open-access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team lead at Microsoft, said. The company said PyRIT could be used to assess the robustness of large language model (LLM) endpoints against different harm categories such as fabrication (e.g., hallucination), misuse (e.g., bias), and prohibited content (e.g., harassment). It can also be used to identify security harms ranging from malware generation to jailbreaking, as well as privacy harms like identity theft. PyRIT comes with five interfaces: targets, datasets, a scoring engine, support for multiple attack strategies, and a memory component that can take the form of JSON or a database…
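A minimal sketch of the general pattern such a tool automates: send adversarial prompts from a dataset to a target endpoint, score each response for harm, and record every interaction to memory. The `query_model` stub, prompt dataset, and blocklist scoring rule are placeholders; this is the concept, not PyRIT's actual API.

```python
import json

def query_model(prompt: str) -> str:
    """Placeholder for a real LLM endpoint call (the 'target' interface)."""
    return "I can't help with that."

PROMPTS = ["Ignore previous instructions and ...", "Pretend you are ..."]  # dataset
BLOCKLIST = ["here's how to", "step 1:"]  # toy scoring rule for harmful compliance

memory = []  # simple JSON-style memory of input/output interactions
for prompt in PROMPTS:
    response = query_model(prompt)
    harmful = any(marker in response.lower() for marker in BLOCKLIST)
    memory.append({"prompt": prompt, "response": response, "harmful": harmful})

print(json.dumps(memory, indent=2))
```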