AI-Hallucinated Code Dependencies: The Emerging Software Supply Chain Risk
As AI-powered coding assistants become an integral part of software development, they also introduce new risks.
By Tim Uhlott | Last updated: October 26, 2025 | 3 minute read

As generative AI tools increasingly integrate into software development workflows, their ability to generate code snippets, entire modules, and even infrastructure configurations has accelerated productivity. However, a troubling trend is emerging: AI hallucination of code dependencies, where an AI model recommends non-existent or unreliable packages, APIs, or libraries, introducing a new and insidious software supply chain risk.
This not only leads to development slowdowns and debugging nightmares but also opens a gateway for malicious actors to exploit hallucinated dependencies, potentially leading to malware infections, data leaks, or system compromises.
What Are AI-Hallucinated Dependencies?
AI hallucination occurs when a language model, such as GitHub Copilot or ChatGPT, produces confident-sounding output that is factually incorrect or refers to things that do not exist. In coding contexts, this can manifest as:
- Suggesting package names that do not exist
- Calling functions or methods that aren't part of any official API
- Recommending outdated, deprecated, or vulnerable dependencies
- Linking to GitHub repositories or modules that have never existed
For example, an assistant might confidently recommend installing text-analyzer-helper from npm, even though no such package exists.
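A hallucinated dependency usually looks perfectly plausible in generated code. The sketch below is hypothetical (the package and function names are invented, following the example above) and shows how such a suggestion might appear to a developer:

```typescript
// Hypothetical snippet in the style an AI assistant might generate.
// "text-analyzer-helper" and "analyzeSentiment" are invented names:
// the import looks plausible, but no such npm package exists.
import { analyzeSentiment } from "text-analyzer-helper";

const review = "The new release is fast and stable.";

// The call matches the API the assistant imagined, yet
// `npm install text-analyzer-helper` either fails outright -- or, worse,
// installs whatever a third party later publishes under that name.
const score = analyzeSentiment(review);
console.log(`Sentiment score: ${score}`);
```

Nothing in the snippet signals a problem; the risk only becomes visible when the install fails or, in the worst case, quietly succeeds.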
How Hackers Can Exploit This
If an attacker discovers that ChatGPT has recommended a non-existent package, they can quickly publish a malicious package under that name. The next time a user asks a similar question, ChatGPT might suggest the attacker's now-real, but harmful, package.

This tactic exploits AI-generated hallucinations, allowing attackers to distribute malicious code without relying on traditional methods like typosquatting or impersonating legitimate libraries. As a result, harmful code can end up in real applications or trusted repositories, posing a serious threat to the software supply chain.

A developer seeking help from a tool like ChatGPT could unknowingly install a malicious library simply because the AI assumed it existed, and the attacker made that assumption true. Some attackers might even create functional, Trojan-like libraries that appear legitimate, increasing the chances they'll be widely adopted before their true nature is uncovered.

Once installed, these malicious packages can carry out a range of harmful actions (a simple defensive registry check is sketched after the list below). They may:
- Steal environment variables or sensitive credentials
- Open backdoors to allow unauthorized access
- Silently log and exfiltrate data
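The attack hinges on the gap between an assistant hallucinating a name and an attacker registering it, so one practical habit is to inspect the registry record for any unfamiliar suggestion before installing it. Below is a minimal sketch, assuming Node 18+ with its built-in fetch and the public npm registry endpoint at registry.npmjs.org; it reports whether a package exists and how recently it was first published. A very recent publish date for a name an assistant "invented" is a red flag.

```typescript
// Minimal sketch: verify that an AI-suggested npm package exists and
// check how old it is before trusting it. Assumes Node 18+ (global fetch).
async function checkNpmPackage(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);

  if (res.status === 404) {
    console.log(`"${name}" does not exist on npm -- likely a hallucination.`);
    return;
  }
  if (!res.ok) {
    console.log(`Registry lookup for "${name}" failed with status ${res.status}.`);
    return;
  }

  const meta = await res.json();
  const created = new Date(meta.time?.created);
  const ageDays = (Date.now() - created.getTime()) / (1000 * 60 * 60 * 24);

  console.log(`"${name}" was first published ${created.toISOString()} (~${ageDays.toFixed(0)} days ago).`);
  if (ageDays < 30) {
    console.log("Warning: very new package -- verify the author and source before installing.");
  }
}

// Example lookup using the hypothetical name from earlier in the article.
checkNpmPackage("text-analyzer-helper");
```

A check like this is not a complete defense, but it catches the two telltale signs of this attack: a package that does not exist at all, or one that appeared on the registry only days ago.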