
AI dominates headlines and newsfeeds these days, and AI-related cybersecurity headlines are no exception. Claims of AI-driven malware and automated attacks fuel fear, uncertainty, and doubt (FUD). Most of these articles are created as pure clickbait, either to draw eyes to the article itself (and the ads interspersed through it) or to sell some alleged silver-bullet solution.
Despite the hype, the narratives exaggerate AI’s capabilities. They portray it as a new, self-aware superweapon when it’s nothing of the sort. The core question is, “Are AI-powered cyberattacks an existential risk, or are they just the next step in the ongoing cybersecurity war?”
Businesses do face real risks from AI-enhanced attacks, but the solution lies in a foundational defense rather than protection against novel Hollywood-style threats.
AI is everywhere in the cybersecurity conversation, from deepfakes and phishing to fully automated malware.
Cyberattacks have always evolved with technology. For example, the first “cyberattack” is often credited to two French bankers in 1834, who found a way to manipulate a visual telegraph system to obtain financial information days earlier than fellow traders. Without the visual telegraph system, their “hack” would have been impossible.
Similarly, mass email systems helped VBScript and PDF-JavaScript malware spread. The internet itself has been a boon for hackers, and so has the shift toward cloud services.
AI represents the latest phase in this cycle, a new technology that creates opportunities for both white and black hats. The primary “opportunities” AI has created for threat actors are a reduced barrier to entry and the use of LLM-powered instructions that can execute commands directly on a target machine.
However, we haven’t yet seen AI malware that can reason, and, likely, we never will.
The fear is being driven mostly by hype in the media and vested interests that want more eyeballs on their content or their product. Much of it is also driven by misunderstanding how AI works, and accepting at face value every hyperbolic claim made by tech billionaires on X or elsewhere.
Media narratives aren’t helping. They portray AI as a new “superweapon”—again, mostly for the purposes of generating clicks, or because the journalists themselves don’t understand the underlying fundamentals of how generative AI tools work.
The media often portrays AI as a transformative force in cybercrime. Headlines warn of “unstoppable” AI-powered malware. Many articles are sensationalized and given a modicum of credibility by including quotes from industry experts.
AI has been in the wild for several years, with more and more claims from AI companies about how good their latest AI models are at “reasoning.”
Despite this, the security industry hasn’t seen any novel malware strains created by an AI. Not one.
AI facilitates cyberattacks in two ways:
AI gives any non-coder the ability to replicate or modify known strains of malware. Anthropic recently produced a report that contains several examples of threat actors using Claude Code to advance malware capabilities. For example, AI has created Ransomware-as-a-Service (RaaS) models, the report says, citing one threat actor who “leveraged Claude to develop, market, and distribute ransomware with advanced evasion capabilities.”
The second realistic use case for malware involves deploying LLM instructions that dynamically generate scripts directly on the victim endpoint, rather than a malicious payload. This dynamic script generation has the benefit of being able to adapt to the target environment while also evading detection by traditional mechanisms.
However, self-reasoning AI payloads that can evade existing detection tools are pure science fiction at the moment, and will likely remain so. Not only do all existing LLMs lack true reasoning power, but the processing power required to run them is gargantuan and far beyond what can be delivered in a self-contained payload.
Social-engineering, phishing, identity-based attacks, ransomware, and other “age-old” threats remain an organization’s priority. AI predominantly enhances these attack vectors, so protecting your organization against them will take you 90% of the way against any AI threats.
AI can enhance phishing campaigns by generating highly convincing emails or social media messages in multiple languages. More targeted attack vectors, such as spearphishing, can leverage AI’s learning capabilities to craft compelling messages that convince recipients that the messages are legitimate.
Deepfakes powered by AI are becoming more realistic, such as threat actors using AI to find work in target companies, thus establishing themselves as an insider threat. The phenomenon gained increased press coverage in mid-2025, specifically regarding North Korean operatives targeting US tech companies through deepfakes.
We already talked about the use of local LLMs to generate malicious scripts, and this is probably the one truly “new” use case for AI-driven attacks. Protecting against them might require some new methods, such as building guardrails around local LLMs.
However, any malicious script would ultimately be picked up by traditional EDM (electronic direct mail) tools using behavioral analytics to detect anomalies. So, even here, the reality is that AI-powered attacks tend to succumb to a robust security posture.
Organizations are as ready as their existing security suite is up to date.
Many organizations fail to consistently apply software patches or invest in a robust security posture, which leaves them vulnerable to any type of attack, not only AI-driven ones.
MFA adoption remains inconsistent, making it easier for phishing campaigns to succeed. Endpoint monitoring is also often inadequate, allowing any type of malware—AI-generated or human-generated—to operate undetected.
Any AI-specific protections are secondary to foundational security. Advanced “Hollywood-level” malware remains in the realm of science fiction.
Current AI-driven threats, while sophisticated, don’t match the apocalyptic scenarios of self-evolving malware depicted in the media. AI enhances existing attack methods but hasn’t created an entirely new generation of malware that the security industry is totally unprepared for.
So, yes, businesses with the right foundational protection are indeed as ready as possible against any form of attack.
Companies should focus on less fear and more readiness, stressing strong cyber hygiene, employee awareness, and possibly even working with an experienced MSSP (managed security service provider).
Timely updates, enforcing MFA, regular backups, network segmentation, and an effective EDR solution will go most of the way in ensuring security in an AI age.
Businesses should focus on training employees to detect AI in emails and images, or at least to detect anomalies in interactions with external parties, especially those making demands for funds.
Companies can also implement AI-driven defense tools, provided those tools include human oversight. Our experience has shown that excessive reliance on AI for defense inevitably leads to either too many false positives and alert fatigue, or a too lax approach to threats. Humans should always be the final decision makers when using AI tools for cybersecurity.
Traditional security solutions leveraged machine learning to detect behavioral anomalies long before generative AI came around. Traditional, logic-based, explainable AI remains a mainstay of any solid security posture.
The best margin of safety is getting the basics right, prioritizing foundational security practices, and taking proactive measures to reduce your potential attack surface. Investing in fundamentals now, prepares your business for any eventual AI advancements.
The fastest way to get up and running with a robust security posture is to call in an MSSP.
MSSPs offer expertise in monitoring and responding to all security threats, whether they’re AI-driven or not. An MSSP can either compensate for a lack of in-house skills or augment them, providing 24/7 threat intelligence and response.
Partnering with an MSS ensures immediate access to advanced tools and strategies.
If you’d like to know more about SolCyber’s MSSP offerings, reach out to us for a chat.
Photo by Veit Hammer on Unsplash

By subscribing you agree to our Privacy Policy and provide consent to receive updates from our company.






