Is AI Fueling Malware? The Real Risk to Organizations

Hwei Oh
02/12/2025

AI is no stranger to hype, ranging from unlikely claims of superhuman intelligence to fears that AI will take over all human jobs. One of the more alarming claims is that AI will start generating brand-new strains of malware that existing security solutions are incapable of dealing with.

The claims are mostly exaggerated. However, AI does pose new risk types and factors that companies and security providers alike should consider. For example, adversarial AI—the practice of injecting malicious prompts to exfiltrate data or tamper with the underlying model—is a real threat to organizations. The use of AI to create believable phishing campaigns is another valid concern.

We’re here to separate the hype from the facts regarding AI and malware so you can take the appropriate steps to protect your business.

AI is not advanced enough to develop threatening malware

Speaking at the UN Security Council on December 19, 2024, Meta’s Chief AI Scientist Yann LeCun said, “Current AI Systems do not understand the real world, do not have persistent memory, and cannot really reason and plan. They cannot learn new skills with the same speed and efficiency as humans or even animals.”

Understandably, people might believe otherwise. In an effort to get more funding, prominent AI speakers have been heavily pushing the idea that we’re on the verge of achieving AGI—artificial general intelligence, the point where machines can think and reason like humans. The truth is that AGI is at least 10 years away, according to Gartner. 

Also, despite claims that AI’s work is original, evidence continues to mount that it’s essentially producing sophisticated copies of its underlying training data.

The same is true of malware. Art might be subjective, but science isn’t, and the evidence indicates that generative AI is incapable of creating genuinely new malware. A year and a half after ChatGPT’s release, Google still hasn’t found a single instance of a “new type of malware” created by AI.

That said, the malware it does create—based on training data of existing malware—is still dangerous, because AI lowers the barrier to entry for wannabe hackers. Instead of needing a computer science education to write sophisticated malware, inexperienced hackers can now simply generate it with a few prompts.

This is precisely what happened when HP discovered what looked like AI-generated malware in the wild. The malware’s code was open for anyone to see—a noteworthy sign of an amateur at work—and it utilized open-source tools to deliver a payload. The researchers assumed the threat actor was new to open-source attack tools as well as AI.

The Black Mamba polymorphic malware hype

Black Mamba, an AI-generated proof-of-concept polymorphic malware, raised some eyebrows when it was first released. Polymorphism—the ability of software to modify itself—was one of the factors that pushed the security industry away from the antivirus/firewall paradigm and toward the more modern EDR (endpoint detection and response) and MDR (managed detection and response) paradigms.
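To see why polymorphism undermines hash-based signatures, consider a deliberately benign sketch. In the Python below, the “payload” is just a harmless string, and simple XOR encoding stands in for whatever mutation engine a real sample would use; the point is that re-encoding the same payload with a fresh key changes the file hash every time, so a signature written for one copy never matches the next.

```python
import hashlib
import os

def xor_encode(payload: bytes, key: bytes) -> bytes:
    """XOR every byte of the payload with a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(payload))

# A harmless stand-in for whatever a real sample would carry.
payload = b"print('this could be any payload')"

# Each "generation" re-encodes the same payload with a fresh random key,
# so the bytes on disk (and therefore the hash a signature database would
# look for) change every time.
for generation in range(3):
    key = os.urandom(16)
    encoded = xor_encode(payload, key)
    print(f"generation {generation}: sha256={hashlib.sha256(encoded).hexdigest()[:16]}...")
```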

The fact that AI generated this malware is part of why there’s such a doom-and-gloom narrative around AI-generated malware. However, polymorphic malware has been around since 1990, and polymorphism is a common attribute of malware. It’s not some original development made by a super-intelligent AI.

To get around polymorphic malware, effective EDR solutions use behavioral analysis to detect anomalies, whereas antiviruses and firewalls rely on signature databases of known malware. Even if AI did suddenly develop some new type of malware, modern defense tools would still pick up on the anomalous behavior and flag it.
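As a rough illustration of the difference (not how any particular EDR product is implemented; the event names and the two-event threshold are invented for this example), here is a minimal Python sketch: a signature check can only match bytes it has already seen, while a behavioral check keys off what the process actually does at runtime.

```python
import hashlib

# Toy "signature database": hashes of samples already known to be malicious.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def signature_check(sample: bytes) -> bool:
    """Antivirus-style check: flag only if this exact hash is already known."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def behavior_check(observed_events) -> bool:
    """EDR-style heuristic: flag a process that chains suspicious actions,
    regardless of what its file hash looks like."""
    suspicious = {"spawn_powershell", "disable_defender", "mass_encrypt_files"}
    return len(suspicious.intersection(observed_events)) >= 2

# A freshly mutated sample has a hash nobody has seen, so the signature misses it...
print(signature_check(b"freshly mutated sample"))                   # False
# ...but its runtime behavior still gives it away.
print(behavior_check(["spawn_powershell", "mass_encrypt_files"]))   # True
```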

Cybersecurity companies are aware of these types of threats and have the tools in place to handle them. Whether malware was created by AI or by a human is irrelevant. The only important question is whether tools exist to catch that malware.

AI can facilitate malware-driven attacks

Whereas the AI threat appears overhyped when it comes to generating original malware, another very real risk does exist: AI makes it easier for anyone to execute well-known attack types, such as phishing and social engineering.

For example, since ChatGPT’s launch in November 2022, YouTube has seen an increase of up to 300% in videos containing links to info-stealing malware. Many of these videos are produced with generative AI: tools such as ChatGPT for the script, AI models for the voiceovers and “actors,” or full-scale video generation software.

Videos with human-looking AI models are more likely to be effective because people tend to trust them more.

The videos pretend to be tutorials for well-known software, such as Adobe Photoshop. They offer the software “for free,” but the links are malicious and deliver the rest of the attack. In the past, these kinds of phishing attacks weren’t possible without investing a lot of effort and time, but AI puts this type of attack in the hands of almost any beginner. AI empowers more threat actors to run phishing attacks that drop malware on a device, leading to data exfiltration, data breaches, or APTs (advanced persistent threats).

Another real use case is AI-generated websites used for “malvertising”: search ads that fool Google’s safety processes into believing a site is legitimate when it’s just a doorway to malware. Hackers create ads to drive traffic to these malicious websites and then guide users to a download or a phishing form designed to steal credentials.

The sites typically look somewhat odd to a human viewer, with the usual distortions and “not quite right” aspects of AI-generated images and text. Automated review systems, however, can’t tell the difference, and the “malvertisers” get away with their crimes.

Hackers can also use AI to write convincing phishing emails or carry out entire email-based conversations with AI-generated copy. AI’s ability to gauge sentiment helps it generate text that reads convincingly to anyone not trained to spot the telltale signs of AI-generated copy, significantly raising the odds that an email phishing campaign succeeds.

A more sophisticated use case is deepfakes, where hackers use AI to generate videos or voices that look and sound like real humans. The technology for video deepfakes still isn’t widely available, but it does exist, and its effectiveness depends on how much footage the attacker has of the person being impersonated. Even so, enough tools already exist to make this a very real risk.

Similar to images and text, deepfakes have telltale signs that experienced users are likely to detect. However, the technology for this is developing fast, and companies must be aware of it to fully protect themselves.

Organizations must address risks holistically

AI isn’t developing malware that no one’s ever seen before. It generates malware much as humans do, so existing tools are enough to combat AI-generated threats. EDR/MDR is essential because new malware won’t match the signatures in an antivirus or firewall, but these tools existed long before AI came around. The trusted methods that have kept organizations secure so far will continue to keep them secure in an AI era.

The essential area to strengthen is awareness. Human detection of anomalous events will become more important than ever.

First, employees must know how AI can be used to deliver malware—such as phishing emails with compelling copy, fake websites, deepfake videos and voices, and so on. Simply knowing that AI can be used this way is often enough to prevent employees from becoming victims.

Second, employees should know what steps to take to avoid falling prey to such attacks. Here, tried and tested awareness training methods already exist that can help deliver effective results.

It’s possible to deploy in-house AI tools to improve detection, but they aren’t enough by themselves and should always be used in tandem with human-led teams. On their own, AI tools can generate too many alerts, and they can leave holes open if a human isn’t monitoring them for anomalies.

AI tools are just that—another tool, and any tool works best in human hands.

In an AI world, SolCyber provides all the services necessary for comprehensive cyber protection: from awareness training, to 24/7 proactive security monitoring by Level 2+ security analysts with a wide breadth of expertise, to full incident response and remediation services.

To learn more about how SolCyber can help you build stronger cyber resilience against emerging and existing threats, reach out to us today.
