In professional discussions, the question of whether artificial intelligence (AI) could become a tool for cybercrime is attracting growing attention. While media coverage sometimes leans on exaggerated claims, the reality is more nuanced and calls for a balanced understanding.
One recent study, for instance, showed that an open-source language model was able to partially bypass Microsoft Defender's advanced protection system. In the experiment, researchers trained an AI model on a relatively modest budget to learn and circumvent the detection logic of the security software. Although the success rate was not particularly high (about 8% of attempts went undetected), it nevertheless serves as a cautionary signal for cybersecurity professionals.
It’s important to emphasize, however, that this does not mean current security systems are broadly vulnerable or obsolete. The research was a proof of concept, intended to demonstrate that AI can acquire capabilities previously reserved for highly skilled human attackers. At the same time, defensive technologies are evolving as well, with security vendors constantly updating their tools to respond to emerging threats.
One of the more concerning developments involves the rise of polymorphic AI-generated malware. These malicious programs use AI to dynamically rewrite or disguise their code each time they are executed or compiled. Unlike traditional polymorphic malware, which often relies on packing or encryption to change its appearance, AI-driven polymorphism produces behaviorally consistent yet structurally unique code every time it runs. This makes signature-based detection significantly more difficult, as each instance of the malware may look different even though it acts the same.
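To see why signature matching struggles with this, consider a minimal, deliberately benign sketch. The two snippets and their contents below are invented for illustration and have nothing to do with any real malware; they simply behave identically while differing in structure, so each one produces a completely different file hash.

```python
import hashlib

# Two functionally identical snippets: both print the same greeting,
# but their structure (names, layout, call order) differs.
variant_a = "def greet():\n    msg = 'hello'\n    print(msg)\n\ngreet()\n"
variant_b = "def say_hi():\n    print('hello')\n\nsay_hi()\n"

# A signature-based scanner typically keys on a hash or byte pattern of
# the file contents, so every structurally unique variant looks brand new.
for name, code in (("variant_a", variant_a), ("variant_b", variant_b)):
    digest = hashlib.sha256(code.encode()).hexdigest()
    print(f"{name}: sha256={digest[:16]}...")
```

Because every regenerated variant changes this kind of fingerprint, a signature written for one sample tells the scanner nothing about the next.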
A notable example is the prototype known as BlackMamba, which functions as a keylogger. Developed by HYAS Labs, the malware uses OpenAI’s GPT model to generate its code at runtime. The keylogging code is never written to disk; instead, it operates entirely in memory, with its base64-encoded payload executed via Python’s exec() function. This makes it especially difficult for traditional antivirus tools, which often rely on file-based scanning, to detect. While the demonstration wasn’t intended as a real-world threat, it clearly illustrates how AI can be leveraged to evade conventional security measures.
Beyond the technical aspects, major AI developers like Microsoft and OpenAI are actively monitoring how malicious actors attempt to exploit AI. According to their joint research, most cybercriminals currently use AI not to create novel attacks, but to boost productivity—helping them write code, conduct reconnaissance, or craft more convincing social engineering messages. To date, there is little evidence that AI has enabled radically new or fully autonomous attack strategies.
Nonetheless, several state-sponsored groups—including actors from Russia, China, North Korea, and Iran—have begun integrating AI into their cyber operations. These groups mainly use AI for information gathering, script development, and improving attack efficiency. However, there is no indication that they are deploying AI systems capable of operating independently.
In this sense, artificial intelligence has not transformed cybercrime, but it has enhanced its efficiency. This does not diminish the need for caution. As attackers adopt increasingly sophisticated methods, defenders must also evolve. AI-powered security tools are emerging that analyze not only code structure but also program behavior, offering a new line of defense.
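As a rough sketch of what behavior-oriented analysis means in practice, a detector can correlate what a process does rather than what its code looks like. The event format, process names, and the single rule below are invented for illustration and are not drawn from any particular product; in a real deployment this telemetry would come from endpoint sensors.

```python
from collections import defaultdict

# Hypothetical telemetry: (process_name, observed_action) pairs.
# In a real product these would come from endpoint sensors; here they are mocked.
events = [
    ("notepad.exe", "file_write"),
    ("updater.exe", "keyboard_capture"),
    ("updater.exe", "network_send"),
    ("browser.exe", "network_send"),
]

# A simple behavioral rule: flag any process that combines keystroke
# capture with outbound network traffic, whatever its code looks like.
SUSPICIOUS_COMBO = {"keyboard_capture", "network_send"}

actions_by_process = defaultdict(set)
for process, action in events:
    actions_by_process[process].add(action)

for process, actions in actions_by_process.items():
    if SUSPICIOUS_COMBO <= actions:
        print(f"ALERT: {process} exhibits keylogger-like behavior")
```

The design point of the sketch is that the rule keys on observed behavior, so a polymorphic or memory-only variant that never produces a stable file signature can still trip the same alert.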
Still, the most effective safeguards remain grounded in basic cybersecurity principles: multi-factor authentication, cautious user behavior, and a zero trust approach that avoids assuming any system, user, or device is inherently secure.
As artificial intelligence continues to advance, cybersecurity enters a new phase. The key question is not whether AI poses a threat, but how we can use it responsibly and wisely—ensuring that defenders, not just attackers, benefit from the technology.