#1 Trusted Cybersecurity News Platform
Followed by 4.50+ million
The Hacker News Logo
Subscribe – Get Latest News
Cybersecurity

AI Security | Breaking Cybersecurity News | The Hacker News

Category — AI Security
5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

5 Actionable Steps to Prevent GenAI Data Leaks Without Fully Blocking AI Usage

Oct 01, 2024 Generative AI / Data Protection
Since its emergence, Generative AI has revolutionized enterprise productivity. GenAI tools enable faster and more effective software development, financial analysis, business planning, and customer engagement. However, this business agility comes with significant risks, particularly the potential for sensitive data leakage. As organizations attempt to balance productivity gains with security concerns, many have been forced to choose between unrestricted GenAI usage to banning it altogether. A new e-guide by LayerX titled 5 Actionable Measures to Prevent Data Leakage Through Generative AI Tools is designed to help organizations navigate the challenges of GenAI usage in the workplace. The guide offers practical steps for security managers to protect sensitive corporate data while still reaping the productivity benefits of GenAI tools like ChatGPT. This approach is intended to allow companies to strike the right balance between innovation and security. Why Worry About ChatGPT? The e
ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

ChatGPT macOS Flaw Could've Enabled Long-Term Spyware via Memory Function

Sep 25, 2024 Artificial Intelligence / Vulnerability
A now-patched security vulnerability in OpenAI's ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool's memory. The technique, dubbed SpAIware , could be abused to facilitate "continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions," security researcher Johann Rehberger said . The issue, at its core, abuses a feature called memory , which OpenAI introduced earlier this February before rolling it out to ChatGPT Free, Plus, Team, and Enterprise users at the start of the month. What it does is essentially allow ChatGPT to remember certain things across chats so that it saves users the effort of repeating the same information over and over again. Users also have the option to instruct the program to forget something. "ChatGPT's memories evolve with your interactions and aren't linked to s
The Secret Weakness Execs Are Overlooking: Non-Human Identities

The Secret Weakness Execs Are Overlooking: Non-Human Identities

Oct 03, 2024Enterprise Security / Cloud Security
For years, securing a company's systems was synonymous with securing its "perimeter." There was what was safe "inside" and the unsafe outside world. We built sturdy firewalls and deployed sophisticated detection systems, confident that keeping the barbarians outside the walls kept our data and systems safe. The problem is that we no longer operate within the confines of physical on-prem installations and controlled networks. Data and applications now reside in distributed cloud environments and data centers, accessed by users and devices connecting from anywhere on the planet. The walls have crumbled, and the perimeter has dissolved, opening the door to a new battlefield: identity . Identity is at the center of what the industry has praised as the new gold standard of enterprise security: "zero trust." In this paradigm, explicit trust becomes mandatory for any interactions between systems, and no implicit trust shall subsist. Every access request, regardless of its origin,
Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

Aug 27, 2024 AI Security / Vulnerability
Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling. " ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface," security researcher Johann Rehberger said . "This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!" The entire attack strings together a number of attack methods to fashion them into a reliable exploit chain. This includes the following steps - Trigger prompt injection via malicious content concealed in a document shared on the chat to seize control of the chatbot Using a prompt injection payload to instruct Copilot to search for more emails and documents, a technique called automatic too
cyber security

The State of SaaS Security 2024 Report

websiteAppOmniSaaS Security / Data Security
Learn the latest SaaS security trends and discover how to boost your cyber resilience. Get your free…
Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service

Researchers Uncover Vulnerabilities in AI-Powered Azure Health Bot Service

Aug 13, 2024 Healthcare / Vulnerability
Cybersecurity researchers have discovered two security flaws in Microsoft's Azure Health Bot Service that, if exploited, could permit a malicious actor to achieve lateral movement within customer environments and access sensitive patient data. The critical issues, now patched by Microsoft, could have allowed access to cross-tenant resources within the service, Tenable said in a new report shared with The Hacker News. The Azure AI Health Bot Service is a cloud platform that enables developers in healthcare organizations to build and deploy AI-powered virtual health assistants and create copilots to manage administrative workloads and engage with their patients. This includes bots created by insurance service providers to allow customers to look up the status of a claim and ask questions about benefits and services, as well as bots managed by healthcare entities to help patients find appropriate care or look up nearby doctors. Tenable's research specifically focuses on on
Experts Uncover Severe AWS Flaws Leading to RCE, Data Theft, and Full-Service Takeovers

Experts Uncover Severe AWS Flaws Leading to RCE, Data Theft, and Full-Service Takeovers

Aug 09, 2024 Cloud Security / Data Protection
Cybersecurity researchers have discovered multiple critical flaws in Amazon Web Services (AWS) offerings that, if successfully exploited, could result in serious consequences. "The impact of these vulnerabilities range between remote code execution (RCE), full-service user takeover (which might provide powerful administrative access), manipulation of AI modules, exposing sensitive data, data exfiltration, and denial-of-service," cloud security firm Aqua said in a detailed report shared with The Hacker News. Following responsible disclosure in February 2024, Amazon addressed the shortcomings over several months from March to June. The findings were presented at Black Hat USA 2024. Central to the issue, dubbed Bucket Monopoly, is an attack vector referred to as Shadow Resource, which, in this case, refers to the automatic creation of an AWS S3 bucket when using services like CloudFormation, Glue, EMR, SageMaker, ServiceCatalog, and CodeStar. The S3 bucket name created in
SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Jul 18, 2024 Cloud Security / Enterprise Security
Cybersecurity researchers have uncovered security shortcomings in SAP AI Core cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows that could be exploited to get hold of access tokens and customer data. The five vulnerabilities have been collectively dubbed SAPwned by cloud security firm Wiz. "The vulnerabilities we found could have allowed attackers to access customers' data and contaminate internal artifacts – spreading to related services and other customers' environments," security researcher Hillai Ben-Sasson said in a report shared with The Hacker News. Following responsible disclosure on January 25, 2024, the weaknesses were addressed by SAP as of May 15, 2024. In a nutshell, the flaws make it possible to obtain unauthorized access to customers' private artifacts and credentials to cloud environments like Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud. They could also be used to modify D
Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Prompt Injection Flaw in Vanna AI Exposes Databases to RCE Attacks

Jun 27, 2024 Artificial Intelligence / Vulnerability
Cybersecurity researchers have disclosed a high-severity security flaw in the Vanna.AI library that could be exploited to achieve remote code execution vulnerability via prompt injection techniques. The vulnerability, tracked as CVE-2024-5565 (CVSS score: 8.1), relates to a case of prompt injection in the "ask" function that could be exploited to trick the library into executing arbitrary commands, supply chain security firm JFrog said . Vanna is a Python-based machine learning library that allows users to chat with their SQL database to glean insights by "just asking questions" (aka prompts) that are translated into an equivalent SQL query using a large language model (LLM). The rapid rollout of generative artificial intelligence (AI) models in recent years has brought to the fore the risks of exploitation by malicious actors, who can weaponize the tools by providing adversarial inputs that bypass the safety mechanisms built into them. One such prominent clas
How to Accelerate Vendor Risk Assessments in the Age of SaaS Sprawl

How to Accelerate Vendor Risk Assessments in the Age of SaaS Sprawl

Mar 21, 2024 SaaS Security / Endpoint Security
In today's digital-first business environment dominated by SaaS applications, organizations increasingly depend on third-party vendors for essential cloud services and software solutions. As more vendors and services are added to the mix, the complexity and potential vulnerabilities within the  SaaS supply chain  snowball quickly. That's why effective vendor risk management (VRM) is a critical strategy in identifying, assessing, and mitigating risks to protect organizational assets and data integrity. Meanwhile, common approaches to vendor risk assessments are too slow and static for the modern world of SaaS. Most organizations have simply adapted their legacy evaluation techniques for on-premise software to apply to SaaS providers. This not only creates massive bottlenecks, but also causes organizations to inadvertently accept far too much risk. To effectively adapt to the realities of modern work, two major aspects need to change: the timeline of initial assessment must shorte
Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

Mar 04, 2024 AI Security / Vulnerability
As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a  pickle file  leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims' machines through what is commonly referred to as a 'backdoor,'" senior security researcher David Cohen  said . "This silent infiltration could potentially grant access to critical internal systems and pave the way for large-scale data breaches or even corporate espionage, impacting not just individual users but potentially entire organizations across the globe, all while leaving victims utterly unaware of their compromised state." Specifically, the rogue model initiates a reverse shell connection to 210.117.212[.]93, an IP address that belongs to the Korea Research
Three Tips to Protect Your Secrets from AI Accidents

Three Tips to Protect Your Secrets from AI Accidents

Feb 26, 2024 Data Privacy / Machine Learning
Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the " OWASP Top 10 For Large Language Models ," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models, but the evolving ways in which they can be attacked and defended. We're going to talk in this article about four items in that top 10 that are most able to contribute to the accidental disclosure of secrets such as passwords, API keys, and more. We're already aware that LLMs can reveal secrets because it's happened. In early 2023, GitGuardian reported it found over 10 million secrets in public Github commits. Github's Copilot AI coding tool was trained on public commits, and in September of 2023, researchers at the University of Hong Kong published a paper on how they created an algorithm that generated 900 prompts designed to get Copilot to reveal secrets from
NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

Jan 08, 2024 Artificial Intelligence / Cyber Security
The U.S. National Institute of Standards and Technology (NIST) is calling attention to the  privacy and security challenges  that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. "These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to adversely affect the performance of the AI system, and even malicious manipulations, modifications or mere interaction with models to exfiltrate sensitive information about people represented in the data, about the model itself, or proprietary enterprise data," NIST  said . As AI systems become integrated into online services at a rapid pace, in part driven by the emergence of generative AI systems like OpenAI ChatGPT and Google Bard, models powering these technologies face a number of threats at various stages of the machine learning operations. These include corrupted training data, security flaw
AI Solutions Are the New Shadow IT

AI Solutions Are the New Shadow IT

Nov 22, 2023 AI Security / SaaS Security
Ambitious Employees Tout New AI Tools, Ignore Serious SaaS Security Risks Like the  SaaS shadow IT  of the past, AI is placing CISOs and cybersecurity teams in a tough but familiar spot.  Employees are covertly using AI  with little regard for established IT and cybersecurity review procedures. Considering  ChatGPT's meteoric rise to 100 million users within 60 days of launch , especially with little sales and marketing fanfare, employee-driven demand for AI tools will only escalate.  As new studies show  some workers boost productivity by 40% using generative AI , the pressure for CISOs and their teams to fast-track AI adoption — and turn a blind eye to unsanctioned AI tool usage — is intensifying.  But succumbing to these pressures can introduce serious SaaS data leakage and breach risks, particularly as employees flock to AI tools developed by small businesses, solopreneurs, and indie developers. AI Security Guide Download AppOmni's CISO Guide to AI Security - Part 2 AI e
Expert Insights / Articles Videos
Cybersecurity Resources