2024 has been a great year for artificial intelligence (AI). But as enterprises ramp up their adoption, malicious attackers are finding new ways to compromise systems with intelligent attacks.
The AI landscape is rapidly evolving, so before we move on, it’s worth looking back. Here are the top 5 AI security stories of 2024.
Can you hear me now? Hackers use AI to hijack audio
Attackers can use large-scale language models (LLMs), voice cloning, and speech-to-text software to forge entire conversations. However, this method is relatively easy to detect, so researchers at IBM X-Force conducted an experiment to see if parts of the conversation could be captured and replaced in real time.
They discovered that this is not only possible, but relatively easy to accomplish. In their experiment, they used the keyword “bank account.” Every time the speaker mentioned a bank account, the LLM was instructed to replace the listed bank account number with a fake number.
Due to limited use of AI, this technique is difficult to detect and provides a way for attackers to compromise critical data without being caught.
Moment of madness: New security tool detects AI attacks in less than 60 seconds
Mitigating ransomware risk remains a top priority for enterprise IT teams. However, generative AI (gen AI) and LLM make this difficult, as attackers use generative solutions to craft phishing emails and LLM to perform basic scripting tasks.
New security tools, such as cloud-based AI security and IBM’s FlashCore module, provide AI-enhanced detection, allowing security teams to detect potential attacks in less than 60 seconds.
Explore AI cybersecurity solutions
Pathway to protection — mapping the impact of AI attacks
of According to the IBM Institute for Business Value, 84% of CEOs We are concerned about widespread or catastrophic attacks related to generational AI.
To protect networks, software, and other digital assets, it is important for businesses to understand the potential impact of AI attacks, including:
Prompt injection: An attacker creates malicious input that overrides system rules and performs unintended actions. Data poisoning: Attackers tamper with training data to introduce vulnerabilities or change model behavior. Model extraction: Malicious attackers study the inputs and operations of AI models and attempt to replicate them, putting a company’s intellectual property at risk.
IBM Framework for Securing AI helps customers, partners, and organizations around the world better map the evolving threat landscape and identify paths to protection.
ChatGPT 4 quickly resolves one-day vulnerabilities
The bad news? In a study using 15 one-day vulnerabilities, security researchers found that ChatGPT 4 was able to successfully exploit them 87% of the time. Issues of the day included vulnerable websites, container management software tools, and Python packages.
The better news? The ChatGPT 4 attack was much more effective when the LLM had access to the CVE description. Without this data, the attack’s effectiveness dropped to just 7%. It is also worth noting that other LLM and open source vulnerability scanners were unable to exploit the single-day issue, even when using CVE data.
NIST report: AI is trending toward prompt injection hacking
A recent NIST report, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigation, found that instant injections pose significant risks to large-scale language models.
There are two types of prompt injection: direct and indirect. In a direct attack, a cybercriminal enters a text prompt that triggers an unintended or unauthorized action. One common prompt injection method is DAN (Do Anything Now). DAN asks the AI to “role-play” by telling the ChatGPT model that it is DAN, and DAN can do anything, including committing criminal acts. DAN is currently at least version 12.0.
Indirect attacks, on the other hand, focus on providing compromised source data. The attacker creates a PDF, web page, or audio file that is ingested by the LLM and modifies the AI output. Because AI models rely on continuous ingestion and evaluation of data for improvement, indirect prompt injection provides the greatest security for Gen AI since there is no easy way to find and fix these attacks. It is often considered a flaw in the above.
Focus on AI
As AI moves into the mainstream, security concerns have increased significantly in 2024. AI and LLM continue to evolve at a breakneck pace, and we expect to see more of the same in 2025, especially as enterprise adoption continues to increase.
result? Now more than ever, it’s important for businesses to stay on top of AI solutions and keep up with the latest intelligent security news.
read more