AI has had a remarkable year in 2024. While businesses are increasingly embracing AI, malevolent individuals are finding new methods to infiltrate systems through sophisticated attacks.

Given the rapid advancements in the AI field, it’s essential to review the past before looking towards the future. Here are the top five AI security narratives of 2024.

Are you able to listen? Cyber criminals take over audio through AI

Hackers can fabricate entire conversations utilizing large language models (LLMs), voice replication, and speech-to-text tools. While this approach is fairly detectable, researchers from IBM X-Force conducted an experiment to assess the feasibility of capturing and altering segments of conversations in real-time.

They confirmed not only the possibility but also the ease of accomplishing this task. In their experiment, they utilized the phrase “bank account” where the LLM replaced any mention of a bank account number with a false one.

This application of AI made it challenging to detect, providing attackers with a means to compromise crucial data discreetly.

Rapid detection: Novel security utilities identify AI assaults in under a minute

Minimizing ransomware risks remains a crucial focus for IT teams in enterprises. However, generative AI (gen AI) and LLMs are complicating this effort, as cybercriminals use generative technology to create phishing emails and LLMs to carry out routine script tasks.

New security tools, such as AI-integrated cloud security platforms and IBM’s FlashCore Module, offer augmented detection capabilities that aid security teams in recognizing potential threats within a minute.

Examine AI cybersecurity solutions

Path to safeguarding — outlining the implications of AI breaches

The IBM Institute for Business Value discovered that 84% of CEOs are apprehensive about broad-scale or catastrophic attacks attributed to gen AI.

To fortify networks, software, and other digital assets, it’s imperative for organizations to comprehend the possible consequences of AI breaches, which include:

  • Swift infiltration: Attackers fabricate harmful inputs to surpass system regulations for executing unintended actions.
  • Data contamination: Adversaries manipulate training data to introduce vulnerabilities or alter the model’s behavior.
  • Model replication: Malicious entities scrutinize the inputs and operations of an AI model with the objective of duplicating it, putting enterprise intellectual property at risk.

The IBM Framework for Securing AI can assist clients, partners, and global entities in better navigating the evolving threat environment and identifying protective strategies.

ChatGPT 4 quickly exploits vulnerabilities within a day

In a study involving 15 day-zero vulnerabilities, it was revealed that ChatGPT 4 adeptly exploited them 87% of the time. These vulnerabilities consisted of susceptible websites, container management tools, and Python packages.

Additionally, the effectiveness of ChatGPT 4 attacks significantly increased when the LLM had access to the CVE description. Without this data, the success rate plummeted to a mere 7%. Notably, other LLMs and open-source vulnerability scanners couldn’t exploit any day-zero vulnerabilities, even with access to CVE data.

NIST study: AI vulnerable to prompt injection threats

In a recent NIST report named Adversarial Machine Learning: A Categorization and Vocabulary of Attacks and Countermeasures, it was noted that prompt injection poses substantial risks for large language models.

There are two types of prompt injection: Direct and indirect. Direct attacks involve cybercriminals inserting text prompts that result in unintended or unauthorized actions. A prevalent technique is DAN (Do Anything Now), which compels AI models to engage in various activities, including illicit actions. DAN is currently in version 12.0.

On the other hand, indirect attacks revolve around manipulating compromised source data. Attackers create PDFs, webpages, or audio files that are ingested by LLMs, thereby influencing their output. Given that AI models rely on continuous data ingestion and evaluation for enhancement, indirect prompt injection is considered gen AI’s principal security vulnerability due to the challenges in identifying and rectifying such attacks.

Remaining vigilant about AI

As AI surged into the mainstream, 2024 witnessed a significant spike in security apprehensions. With gen AI and LLMs advancing rapidly, 2025 is poised to bring more of the same, especially as enterprises increasingly embrace these technologies.

Consequently, companies need to keep a close watch on AI solutions and stay informed about the latest developments in intelligent security measures.