How to stay secure in an AI attackers’ world.
Practical and actionable advice as co-hosts Duck and David reassure you that it’s not as hard as some people think.
Find TALES FROM THE SOC on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app. Or download this episode as an MP3 file and listen offline in any audio or video player.
[FX: PHONE DIALS]
[FX: PHONE RINGS, PICKS UP]
ETHEREAL VOICE. Hello, caller.
Get ready for TALES FROM THE SOC.
[FX: DRAMATIC CHORD]
DUCK. Welcome back, everybody, to TALES FROM THE SOC.
I am Paul Ducklin, joined by David Emerson, CTO and head of operations at SolCyber.
Hello, David.
DAVID. How’s it going?
DUCK. It’s going very well, David!
We are planning to talk today about what you might call, “Fear of AI in malware generation and cyberattacks.”
So our topic is: What to do about AI? How to stay secure in an AI attackers’ world.
It’s not as difficult as people seem to think, is it?
DAVID. No.
I don’t believe that it changes the game at all.
I think it speeds up the way in which the game is played.
DUCK. Admittedly for both sides, right?
So, in defense you would be foolish not to use AI style automation to try and classify threats from non-threats, given the insane rate at which new ones appear.
And likewise it would be naive not to expect crooks to make ever-increasing use of AI, say to generate code.
Not necessarily because they couldn’t do it themselves, but just it’s such a cheap and easy way of getting your code to be a bit different every time, right?
DAVID. Absolutely.
When I think of it offensively or defensively, I think that AI is broadly a force multiplier, or a democratizer of things that used to be difficult.
There used to be fewer people that could perpetrate a certain exploit, especially on the offensive side.
And so AI turns those script kiddies that couldn’t have done it before into Olympians, relatively speaking.
But it’s the same kill chain; it’s just a compressed timeline.
In the realm of phishing, in the realm of the most boring thing that we see on a regular basis…
…while phishing itself isn’t new, and actually even the methods haven’t changed, if they’re using AI, it’s just finally spell-checked and grammatically correct, and maybe able to masquerade with a regional formatting effect or whatever.
It doesn’t change the game; it just makes it a little bit more refined, a little bit new.
DUCK. Exactly, because what we used to see in the old days, before generative AI, was that well-informed phishing criminals simply copied and pasted legitimate text from legitimate companies, lightly modified.
DAVID. Right.
DUCK. They’ve always been able to do this.
AI maybe just makes it a little bit easier to make every message a little bit different, thereby making detection a little bit harder, unless you bring AI tactics to bear and discriminate automatically in most cases.
DAVID. [LAUGHS] Yes, using the word “delve” too much in its phishing messages.
DUCK. [LAUGHTER] Or using em-dashes.
DAVID. Yes.
DUCK. Or is it en-dashes?
I can never remember.
DAVID. Exactly.
On the defense side, this applies as well.
Not that AI is bad – certainly it can be a force multiplier on the defense side as well.
But AI detection is not useful if you were not patching in the first place.
There are fundamental practices that people continue to not do, that actually are the vulnerability.
No SOC ever failed because it had too little AI.
They failed because they ignored basics, because they didn’t train people to use what AI they had, let’s say.
It’s not going to fix technical debt; it’s just going to find it faster.
These are not fundamental changes in how the game is played.
And I think that that’s really important for everyone to understand.
You still have a discipline that you have to execute on, or you’ll have the pain of regret.
DUCK. And, to be clear, technical debt is really just a slightly more upbeat way of describing the shortcuts you took in the past that you know you should have fixed two years, one year, six months ago, but still keep putting off.
DAVID. Yes.
Don’t panic about AI.
Panic if you’ve been cutting corners.
I guess that’s a better way to put it, yes.
DUCK. If you look at the number of attacks against large and apparently important organizations that you think would have all the budget in the world to run a good patching program, and an on-the-ball SOC, the number of reports of intrusions that happened thanks to exploits from 2024, 2023, sometimes even earlier…
AI can advise you to fix it.
It can shout and scream at you to fix it.
It can even, if you have so-called agentic AI, perhaps try and get out and fix it for you.
But if you don’t have the infrastructure in place to make that work, you’re not going to get those patches.
You’re going to continue to be caught out, aren’t you?
DAVID. You will, yes.
And Patch Tuesday, for the competent and diligent… Patch Tuesday might have become Patch Right Now Or Else.
It isn’t a schedule anymore.
You just need to apply patches as soon as it’s practical, and on critical systems immediately.
But people weren’t doing Patch Tuesday all that well to begin with.
10 years ago, well before any of this kind of thing was mainstream (AI, that is), people were still patching poorly.
Fast forward 10 years; fast forward to 2025.
And if you’ve got a vulnerability that was patched in 2023 and you haven’t applied that patch, well, you know what?
AI is not the problem in your infrastructure.
That really is fundamentally the issue that we see.
Not any kind of sophistication – what we notice is that people are bad at fundamentals.
DUCK. Yes, and you don’t need AI to teach you that.
If you didn’t learn it back in – what was it? – 2017, when the WannaCry ransomware worm appeared about two months after Microsoft had pushed out a patch…
If you didn’t learn it then, you really ought to have learned it by now.
Two months is too long.
And if it was too long all those years ago, how much more too long is it today?
DAVID. Yes, it’s far too long.
At the end of the day, it’s “a little bit pregnant,” right?
You’ve still got a problem.
There’s just an absolute problem in your vulnerability to something that is actively being exploited.
DUCK. You still need some kind of human overview to deal with the human side of the problem.
Because if you look at some of the big news breaches that have happened recently… I’m thinking of things like Clorox in the US, and Marks and Spencer in the United Kingdom…
Those happened because of social engineering.
A human called a human, and talked them into doing something that they quickly regretted.
And AI can help you with that, but it can’t solve that problem for you.
DAVID. Yes, I’m not anti-AI.
So, to be really clear, I do think that you should use AI to help you through very practical things especially.
It could be very useful in analytics; it could be very useful in detection; it can be very useful even in configuration.
Is this firewall rule sane?
A perfectly reasonable thing to ask an AI that’s able to ingest text and analyze it at a rate far greater than any human.
So I’m not anti-AI, but yes, the basics are where people are failing.
And strategy is where people are failing.
And I’m not even beyond claiming that AI will get there one day, but I am going to claim that at the point at which AI is setting your risk strategy, and indeed the strategy for your business and what is acceptable for your business, you don’t really have much of a business at all.
What is your unique value if you’ve essentially delegated to AI the formation of strategy?
That doesn’t make a lot of sense in today’s world.
Delegating to AI the basics, totally.
But that’s not where a lot of companies are.
They’re still trying to implement MFA; they’re still trying to implement zero trust; they’re still trying to implement identity controls.
Identity might be the new perimeter, but AI loves weak passwords.
Humans loved weak passwords 10 years ago, too.
They’re just more exploitable now, because attacks work at a scale that we couldn’t possibly have mustered 10 years ago.
DUCK. And, ironically, we’ve put in all these controls for humans logging in, where you need a username and a password and a one-time code.
But once you put that in, the system issues you some kind of authentication token.
If that gets stolen because it’s trusted by automated systems, whether AI or not, throughout your network… if that leaks out, then you’re in just as much trouble because the imposter is just masquerading as you, based on the fact that you already went through the strict login process.
DAVID. Yes, identity is really a very good example, because identity bleeds into the real world.
It blends with characteristics of the individual that’s attempting to authenticate – characteristics of their posture.
And then also, of course, the tokens necessary to authenticate, whether that’s MFA or a password or a passkey or whatever… all of these things are more critical to get right, but they haven’t changed.
It’s more critical that we have MFA than ever before.
It’s more critical that you have password hygiene than ever before.
Conditional access is helpful.
But these are all layers that you should have already been building in.
They haven’t changed their nature, and people haven’t changed the identifying characteristics that make them identifiable.
DUCK. So, David, if we can just look at another angle slightly in the whole AI story…
Do you want to say something about the various types of AI, or the nomenclature that you get?
Maybe you want to tell us something about what we mean, and what the differences are, between classifying AIs, generative AIs, and the new kid on the block, agentic AIs, which to my mind would be much easier to understand if they were called “agent AIs.”
Do you want to have a go at that?
DAVID. Yes, OK!
So let’s start with…
I’m going to steal a phrase from one of my extremely senior technical people…
DUCK. [LAUGHING] I thought you were going to say “James Joyce” for a moment.
DAVID. Yes, well, we came across this question from a customer the other day, and she said, “AI is not existing in some fourth dimension.”
It’s still based in our world, and that is absolutely the right way to think about it.
DUCK. Yes.
DAVID. Whether we are talking about machine learning, or some kind of generative AI, or some kind of agentic AI, they’re all of the world in which we live, and they’re all mimicking the world in which we live, and the things that we have fed into them.
So there’s not much new under the sun – it’s still an accelerant.
Of course, something like an LLM [large language model] is probably what’s making the news now, for the most part, or generative AI.
That’s the new kid on the block, along with agentic AI, which is essentially an AI that isn’t as bounded by the things that it knew a year ago when it was formed, because it’s capable of searching and capable of incorporating new information.
Those two are what people are really asking about when they say, “What do we do about AI?”
Those two are not only accessible and inexpensive at the moment, but trendy – and, I think, pragmatic when you’re developing code, or when you want to incorporate some kind of polymorphic feature in your malware.
DUCK. Right, so that’s where, instead of saying, “Here’s the code, I’m going to use this every time,” you say, “Make me code that does something along these lines.”
DAVID. Yes.
DUCK. Particularly for an attacker, the AI doesn’t have to be perfect.
It just has to get it mostly right most of the time.
And then the idea is that the actual executable code, even if it’s script code, that you get will not just be a little bit different every time, like maybe having different variable names in it, or weird comments interspersed in it.
It will actually look like somebody else wrote it.
DAVID. Right.
DUCK. And therefore it will be harder to write a rule-based detection that would find it.
Do you think that’s the main value at the moment to attackers?
DAVID. I don’t know if that’s the main value, but that’s an accessible, practical value.
I don’t think that you saw this earlier.
So, categorizing AIs that are embedded in many of the tools that we’ve used now for decades – I mean, the technology is very old by computer standards…
That kind of AI is not sexy, because the things that it does aren’t novel.
They aren’t something that can break out of its own math and do an arbitrary new thing that is very flexible, a “general intelligence” type of characteristic.
DUCK. Hey, I made some art!
DAVID. Yes. [LAUGHS]
DUCK. Which sounds exciting, but some of those classification AIs for deciding whether something is good or bad are actually quite amazing, given that we now have much more compute power and much more memory available, to balance much bigger sets of good versus bad.
DAVID. Yes.
DUCK. But as you say, it’s not sexy because you don’t get, “Hey, a computer wrote this poem” at the end.
You just get, “I don’t think you should click that link.”
DAVID. Right.
DUCK. Which is probably more important in your economic life.
DAVID. Well, it is, but it requires also more intentional design.
If it’s a category engine, you had to know something about what you wanted to categorize from the beginning.
DUCK. Yes, I get what you’re saying.
If you’re just generating a poem, it doesn’t matter whether people think it’s good, or bad, or rubbish, or literary, or not.
It still kind of seems quite cool.
But when you’re doing a classification, people expect you to get it right.
And if it’s a “yes” or “no” answer, then as you say, it’s not very exciting.
DAVID. No.
And your classification engine isn’t going to pivot from writing a poem into writing some code.
And ChatGPT can absolutely do that.
And when you get into the agentic work, it can not only do that, but it can also do that unbounded by the last time it was trained, to some extent.
It can actually go out and perform actions on your behalf.
DUCK. Do you not agree that the general way in which the term “agentic AI” is used, say by analysts and in media reports, doesn’t reflect that technological side, which is really where AI started with Shakey the Robot back in 1969, 1970?
They talked about “software agents” capable of making independent decisions.
But really, what it means is we’re using AI and we’re more inclined to trust the answers, so we’re not just letting it produce a list that humans can examine.
We’re actually letting it make decisions: run or block; allow or don’t allow.
That sounds exciting, even though the mechanics are largely the same.
DAVID. Yes, the Industrial Revolution itself also did not invent new tasks, necessarily.
It basically made some of the same old tasks more accessible.
So this is not magic in any way – it’s just making those old tasks faster and forcing everybody to adapt.
In the case of something like an agentic AI, it might make an individual favor tasks that were previously onerous.
And that’s really essentially what we’re seeing with AI – it’s the same thing.
There are activities that were previously not linked to certain outcomes, because of the difficulty of performing that activity at scale.
And it’s no longer difficult.
DUCK. So, David, do you think that from a media facing, or from a sort-of computer user or network administrator fear point of view, we’re reaching a point in the AI sphere that we did perhaps five or six years ago with ransomware?
Where people are so scared about it, even though it’s just one of many forms of malware out there that you should be worried about, that they start looking for tools and services that specifically deal with AI-based threats in the hope that if they can conquer that, the rest doesn’t matter.
Because if so, that sounds like a very deadly combination, does it not?
DAVID. Yes, we’re in that same kind of cycle, that hype cycle.
I will grant that AI is a more general threat, where ransomware was boring.
It was very, very primitive, in my opinion.
And the ways in which it was getting into these hospitals, and law firms, and places that were frequently assaulted?
They were not interesting; they were not sophisticated attacks; they were essentially the exploitation of poor discipline and poor risk management on the part of the victim.
AI has the same hype cycle going on, which is to say that it isn’t something necessarily novel to the world in which we live, but it’s an accelerant.
And I do think that because it’s more general, almost anything, defense, offense… anything at all can be accelerated by AI, potentially.
I think it has a lot more surface area, and that does make it more interesting.
But the way in which it’s compelling to me remains the same.
Get back to basics.
Think about the ways in which you’re layering your defense.
The rulebook is the same… don’t worry about the fact that the world is getting a little faster; worry that you’ve been cutting corners.
Humans are still absolutely important in almost any given venture, whether that’s offense or defense, but their efforts are what will be amplified.
It’s like giving everybody power tools.
AI is a tool that will amplify the efforts of someone who can use that tool, but at this point, it isn’t going to innately run away on its own and develop something coming from another world.
I would actually take it a step further than that and say that if you’re looking for tools, you’re missing the point.
If you’re waiting for some AI defense product, let’s say, which is super-sexy and overpriced, to save you, you’re missing the fundamental point.
DUCK. Well, it might not be sexy, but I bet you it’ll be overpriced, David. [LAUGHTER]
DAVID. Well, it will definitely be overpriced.
It’ll have mostly a marketing budget, yes.
I mean, the right strategy remains: layered security; hygiene; discipline.
You could have said that about city walls and defenses in the medieval era, right?
Like, this is not new.
The right strategy was always to do basic things well.
The tools?
They’re helpful, but they were never the point.
They were never unto themselves the thing that was going to save you.
So stop buying tools!
Start buying programs, or start buying, in the form of personnel and effort, a *strategy*.
DUCK. Yes, there’s not much point, as I think you’ve said many times, in putting the anti-aircraft battery on the sixteenth floor, or going around and putting super-secure locks on every window in the building, if you don’t worry about the front door.
DAVID. Yes.
DUCK. Or if your fancy window locks don’t work on every type of window that you’ve got and you just hope the crooks never notice… because they jolly well will!
DAVID. Yes.
That is the building that almost everybody runs when it comes to infrastructure! [LAUGHS]
DUCK. So, rather than having a lot of protection for part of the threat, it’s much better to have human-friendly, human-centric protection that helps you across the board.
And then help your humans be good at dealing with the bits that the automatic tools don’t yet handle.
DAVID. And keep your designs themselves negotiable.
We use this metaphor of the 16-floor building.
Well, why does it have 16 floors if we’re only occupying floor three?
So, let’s just knock that thing down a bit.
There are many situations where I see overgrowth, and overgrowth leads to management burden.
Management burden leads to misaligned discipline and capability.
In other words, the more you have, the more problems you have.
That is an architectural problem.
And furthermore, it’s oftentimes a project management problem, or an engineering problem of the will to fix that.
To not be proposing how we write better firewall rules, but to be proposing that we entirely eliminate segments of a network that are not used and represent nothing but burden.
It’s really unusual to see a company that wants to simplify operations, and is willing to invest the time to do so for the purposes of cybersecurity.
But it can be one of the most effective things you can do because it isn’t going to cost you anything except time and strategy and risk analysis.
DUCK. And in the end, you may actually save enormous amounts of money, because you probably won’t need anywhere near as many servers.
You won’t need anywhere near as much cloud bandwidth or cloud storage, and you won’t have as many potential security holes that you have to worry about.
You’ll be able to focus on what you need in order to do what you want.
DAVID. Yes, that’s exactly how it works.
And it just is so rarely seen.
Everyone wants to double down on “Tools, tools, tools.”
In the car world, Colin Chapman got this right.
This is why Lotus is a cool car.
Does it have the most power?
No.
Is it the most extremely over-engineered?
No, not at all.
What they did was they realized that weight is the culprit, and if you reduce weight to the bare minimum, you can have actually a very ordinary, rudimentary engine with ordinary output.
You can have a steering rack that doesn’t look like it came out of a spaceship, because all of this stuff falls into place.
When you design for something to be simpler, the systems that make it run well can also be simpler for the same effective performance.
It’s a paradigm that is oftentimes ignored by people that love tools.
DUCK. [LAUGHING] Will it rust before you get it off the forecourt?
Yes!
Will it leak oil on your driveway?
Of course it will!
DAVID. [LAUGHS] But it was engineered well!
DUCK. But will you enjoy every moment you spend driving it?
You bet! [LAUGHTER]
DAVID. Yes.
DUCK. Maybe that’s not a perfect analogy, but I guess it does remind us that sometimes, “less is more,” as corny as that sounds.
And, very certainly, “more is often too much.”
Because you get to a point that the only way you can handle that is, as the SolCyber website says, “More tools, more tools,” which is a kind of never-ending, chase-your-tail, unwinnable proposition.
DAVID. Yes, it’s expensive.
And some places get away with it because they truly have the budget or the scale.
But take a step back, especially if you’re in a resource-constrained environment.
Take a step back and think about: what your business is; what processes support that business; and what systems need to support those processes.
And prune the things in your inventory that aren’t involved.
Prune them ruthlessly.
That will benefit you in many, many ways, because it will essentially make your environment easier to run.
It’ll reduce the surface area of attack; it’ll reduce the number of things you have to maintain.
And you probably won’t notice a functional difference.
DUCK. So David, I’m conscious of time, so to summarize.
Do you think it’s fair to say that when it comes to AI and AI threats, they’re definitely something that you should understand, and that you should be concerned about…
…but you should not allow them to distract you from all the other things that you can and should be doing.
As you say, they’re not the space monster from the fourth dimension.
They’re actually part of the current threat dimension, and even if they were all removed, you’d still have plenty to worry about.
So why not take a holistic view instead of just focusing on the shiny things that the media is most interested in?
Would you agree with that?
DAVID. Absolutely.
One hundred percent.
You should not be panicking about AI.
I said it before: Don’t panic about AI; panic if you’ve been cutting corners.
I think that’s basically my message for every person that’s asked me this question.
“What is it you weren’t doing that you ought to have been doing?”
Because AI will make those things exploitable more readily, but they were always vulnerabilities.
DUCK. So I guess to be Douglas Adamsian about it, “Don’t panic.”
Make sure that you don’t get distracted by the shiny new stuff in the media!
David, thanks to you for your informed insights.
Thanks to everybody who tuned in and listened.
Please subscribe to this podcast if you haven’t already, so you know when each new episode drops.
Please like us and share us on social media, and recommend us to your friends, because getting listens is valuable to us.
And, as always…
Until next time, stay secure.
DAVID. Bye, everyone.