
Tales from the SOC: AI in cybersecurity – Bane or benefit? | S1 Ep012

Paul Ducklin
05/21/2025

LISTEN NOW

Join Paul Ducklin and SolCyber CTO David Emerson as they talk about AI and cybersecurity in TALES FROM THE SOC.

In this episode: Why does it sometimes sound as though no one ever thought of using AI in cybersecurity until now, even though it’s been an important part of the industry for years?

Is the endless hype helping or hindering our fight against cybercrime?

Tales from the SOC: AI in cybersecurity - Bane or benefit? | S1 Ep012 - SolCyber

If the media player above doesn’t work in your browser,
try clicking here to listen in a new browser tab.


LISTEN IN YOUR FAVORITE APP

Find TALES FROM THE SOC on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.

Or download this episode as an MP3 file and listen offline in any audio or video player.


READ THE TRANSCRIPT

[FX: PHONE DIALS]

[FX: PHONE RINGS, PICKS UP]

ETHEREAL VOICE. Hello, caller.

Get ready for TALES FROM THE SOC.

[FX: DRAMATIC CHORD]


DUCK. Hello, everybody.

Welcome back to Tales From the SOC.

I am Paul Ducklin, and I’m joined by David Emerson, CTO and Head of Operations at SolCyber.

Hello, David.


DAVID. Hey there!


DUCK. David, we’ve got a partly controversial topic, but I don’t think the answers are particularly controversial.

But they do need considerable thought, because of all the marketing guff that’s going on around this at the moment.

And that title is: AI in cybersecurity – Bane or benefit?

Is AI’s use really as novel and as exciting in cybersecurity as some of the new marketing-led startups seem to be suggesting?


DAVID. I don’t consider this answer controversial, inasmuch as I consider AI extremely useful.

It has been for years, and it is actually ordinary in our industry, because of the need to analyze so much data.

So I don’t consider that controversial at all.

What I consider controversial is the inherent laziness in relying on something that is part of the background in your industry as your business strategy.

If the best thing you have to say about your cybersecurity startup is that it uses AI, you probably need to try a little harder to deliver some value in the year 2025.


DUCK. In other words, it’s almost one of those things that you shouldn’t need to say?

It’s a little bit like a new kid on the automotive block who’s offering a car for sale saying, “Our cars come with wheels, and those wheels are fitted with tires before delivery.”

It sounds dramatic, but nobody else would feel that worth saying, would they?


DAVID. It’s a lot like that.

Some people might make the point that AI has materially changed in the last five to ten years, and I think that that is accurate also.

Both those things can be true.

AI has become more useful in some ways to people who are not as specialized.

The prevalence of large language models (LLMs) has now enabled a lot of other roles to use AI, but at the end of the day, AI is still merely a tool.

It’s still something used to facilitate your business.

It cannot be your whole business model, and in many cases, not only will it not provide you with a good business model and a good strategy, but it still requires humans to use that tool.

Absent the human, absent the human strategy, AI at the moment is simply not an inherently good business.


DUCK. In other words, it’s a little bit like, say, spreadsheet applications, which started 40 years ago with VisiCalc, and a very, very basic recalculation system.

And then they’ve got more and more and more features, so they can draw graphs for you; they can do regressions for you; you can do complicated things like pivot tables that weren’t possible in the early days.

You still wouldn’t replace your CFO, and your Chief Marketing Officer, and the inventive people in your business with a spreadsheet program, would you?


DAVID. Precisely.

There are numerous things today that we take for granted that a spreadsheet does that 60 years ago would have actually been a full-time job.


DUCK. Yes!


DAVID. And that’s just what we’re seeing with AI.

Done properly, AI (or let’s say generative AI, large language models, things like that) is the sort of thing that maybe 15 years ago would have required a team of 30 people to validate the year-end numbers for a large pharmaceutical organization, right?

Think of something that might have been highly manual that really would have required a lot of review.

And nowadays, in 2025, you can feed that into an LLM, and you can get reasonable sort-of first-pass results on what stands out, and what you should highlight, and what’s different from last year, and it’s powerful.

It’s really powerful.

Those 30 people can go do something else now, can go do a higher-order task.

But you haven’t ultimately allowed the AI to run the business.

In the context of cybersecurity, I think it really is a proxy for doing development on an earnest product.

I think it’s a marketing proxy to drive investment.

I think in the context of hiring, it’s a proxy for skills.

You see a lot of people out there harping that they have AI in their skill set…

I’m not really sure what that means.

I have a calculator on my desk; I use it for a lot of things.

But at the end of the day, the decisions that I make on the basis of those calculations are mine.

And I really think that businesses need to keep that in mind.


DUCK. Yes, a calculator makes you proficient at arithmetic, even if you struggled or could only do it slowly in the past.

But it does not a mathematician make.


DAVID. Right!


DUCK. So David, you touched on one of the things that has changed notably in the last, let’s say, five years in AI, compared to the kind of tools and techniques that cybersecurity companies were using in the past.

Generative AI, or Gen AI, as it’s usually called.

And that is a little bit different, isn’t it?

You’re not just using it to consume data that you’ve collected.

You’re asking it to reach conclusions and to write them up on your behalf.

So how well do you think that is panning out in the cybersecurity industry?


DAVID. I haven’t seen that deliver real value yet.

Used poorly, generative AI becomes an obligation imposed on the consumer.


DUCK. I agree.


DAVID. The producer, that is the person who is able to write a prompt and compel generative AI to produce something…

The producer then gives that material to someone who is apparently obligated to consume it.

And I believe there is evil in that; I think there is waste in that.

There’s almost nothing that a generative AI can say that a human couldn’t have said more succinctly, given the richness of context and experience that a human in that field might have.

So, the concept of using a generative AI to, for example, write analyst notes is really playing into nothing more than the compulsion that some people have to see words on a page.


DUCK. Yes, I agree with that.

Because you do see companies that are offering a SOC that will “never run out of staff,” because it’s populated by AI, showcasing work that their AI has done.

And it seems, in many cases, little more than taking a traditional malware report, for example, “Virus X found in file Y,” and padding it out into a whole long page of verbiage.

Prose that sounds exciting and important in a marketing way just seems to be make-work that wastes everybody’s time.


DAVID. It can be.

And it’s a dereliction of duty among management to implement AI in that irresponsible way.

It is wasteful of not only internal time, but also your consumer’s time, right… the customer who has to read that.

It can be managed well, however; there are instances of this being managed well.

I’ve even seen products that I think do it pretty well.

There are a number of SOAR (security orchestration, automation, and response) products, for example, that effectively use AI to suggest next actions to a human analyst. The analyst then guides the AI, and their own analysis, through a canned process that is branching and open-ended enough that you could potentially analyze a phishing email with it.

That sort of suggestion engine, I think, can be very effective.

It can be as effective as having a tickler list or a memory aid with you when you’re doing any given task.
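That kind of suggestion engine can be sketched in a few lines. The following is only an illustration of the branching “next action” idea, not the logic of any particular SOAR product; the playbook steps and field names are invented.

```python
# Minimal sketch of a branching "next action" suggestion engine for
# phishing triage. The playbook steps and field names are invented for
# illustration; they are not taken from any particular SOAR product.

def suggest_next_actions(findings: dict) -> list[str]:
    """Suggest next steps based on what the analyst has established so far."""
    actions = []
    if not findings.get("headers_reviewed"):
        actions.append("Review message headers (SPF/DKIM/DMARC results)")
    if findings.get("urls") and not findings.get("urls_checked"):
        actions.append("Check embedded URLs against reputation feeds")
    if findings.get("attachment") and not findings.get("detonated"):
        actions.append("Detonate the attachment in a sandbox")
    if findings.get("user_clicked"):
        actions.append("Reset the user's credentials and review sign-in logs")
    if not actions:
        actions.append("Record the verdict and close the ticket")
    return actions

# The analyst has read the headers; the email contains an unchecked URL.
state = {"headers_reviewed": True, "urls": ["http://example.test/login"]}
print(suggest_next_actions(state))  # suggests checking the URL next
```

The human stays in charge of the verdict; the engine merely makes sure no step is forgotten, like the tickler list mentioned above.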

So there are effective uses of it, but one of them is not, “Write me 500 words about this incident.”

Because that 500-word thing?

You lost the efficiency battle as soon as you asked it to write 500 words.


DUCK. [LAUGHS]


DAVID. It’s going to start writing 500 words, whether there needed to be 500 words said about that incident or not.


DUCK. [LAUGHING] I’m sure everybody can remember that time, when they were in about middle school, when they were faced in the history lesson with, “Write an essay of 600 words on some topic.”

And you think, “How can I pad out this paragraph with 50 words? How can I get to the limit? I’ve got to write three pages!”


DAVID. You’re dredging up the stuff of nightmares for sure. [LAUGHTER]


DUCK. Being able to write an essay of 600 words could be a bane, because it sounds like something that should benefit cybercrooks.

“Produce me a phishing email, and avoid all the grammatical stupidities and the spelling mistakes that I used to struggle with in the past.”

What do you say to that?


DAVID. Yes, you see that.

Phishing emails are better than ever before now, because they are capable of using grammatically correct English.

AI is a very good emulator of human behavior, and a very good reader of expectation.

I think that crafting a phishing email at this point in time is probably easier than ever… a compelling phishing email.

There are some things that haven’t changed, though.

Ultimately, the goals of a phishing email are different than the goals of a legitimate email.

And so, things such as URL inspection, well, that hasn’t changed that much.

AI has not allowed somebody to register a domain that they previously could not register.

At the end of the day, there are characteristics of phishing emails that haven’t changed since spam became a thing.

Yes, I guess what we’re seeing now is the hazard of phishing emails being grammatically correct, because that has come to pass.

What we aren’t really seeing is phishing emails drastically change their nature.

Their nature remains a scam with a different motive than a legitimate email, and so it’s detectable by other means.


DUCK. Absolutely.

We just need to throw away the simplistic rules that used to be reasonably effective and easy to remember and somewhat helpful…

…and replace them with a different way of thinking about the problem.


DAVID. A way that would have been just as valid 10 years ago, when there were more misspellings in the phishing emails.

For example, we use a sandbox occasionally to detonate payloads that are ambiguous.

If we have payloads of ambiguous maliciousness, we will toss them into a sandbox, and we’ll execute them, and just trace the processes that spawn and see what they do.

Well, that hasn’t changed, and it doesn’t really matter how good the email is nowadays.

If the payload does malicious things when you execute it, I guess it’s a phishing payload!

I really don’t think that it necessarily has changed its nature.

I think computationally, 10 or 15 years ago, we didn’t really have the resources to get that kind of a sandbox, but we do now because hardware is cheap, and time on hardware is cheap, and that’s just kind of where we’re at in 2025.
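The analysis half of that detonation process can be sketched as follows: given a process trace recorded while a payload ran, flag parent-child spawns that legitimate documents rarely produce. The trace format and the indicator list here are invented for illustration; a real sandbox records far richer behavior.

```python
# Sketch of analyzing a sandbox process trace. The (parent, child) pairs
# and the suspicious-spawn list are invented for illustration; a real
# sandbox records far richer behavior than this.

SUSPICIOUS_SPAWNS = {
    ("winword.exe", "powershell.exe"),  # a Word document launching a shell
    ("winword.exe", "cmd.exe"),
    ("excel.exe", "powershell.exe"),
    ("acrord32.exe", "cmd.exe"),        # a PDF reader launching a shell
}

def verdict_from_trace(spawns: list[tuple[str, str]]) -> str:
    """spawns: (parent, child) process pairs observed during detonation."""
    for parent, child in spawns:
        if (parent.lower(), child.lower()) in SUSPICIOUS_SPAWNS:
            return "malicious"
    return "no verdict from behavior alone"

trace = [("explorer.exe", "winword.exe"), ("WINWORD.EXE", "powershell.exe")]
print(verdict_from_trace(trace))  # prints "malicious"
```

Note that nothing in this check reads the email at all: the verdict comes from what the payload does, not how well its cover story was written.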


DUCK. It’s almost as though the fact that spammers now write grammatically correct emails could be seen as an opportunity for us simply to find more intelligent ways, whether they’re human ways or technological ways, of looking for badness.

And it reminds me (I’m over-simplifying, I think, a bit) of a comment that I saw online from Neil deGrasse Tyson, who was asked what he thought about the use of AI by students at universities for cheating.

Instead of talking about how you might detect that the students were using AI, he said, “Well, shouldn’t we take this as an opportunity to revise this ancient system of examinations that we’ve had for all these years?”

And actually find another way to determine whether somebody is contributing to science, or art, or whatever it is they’re doing when they’re studying for a degree?

I guess that’s where we need to be going in cybersecurity, isn’t it?

Building our human-led defenses…


DAVID. The world of education never was entirely about the resources internal to the student.

The resourcefulness of the student, the resources around the student, the synthesis of their environment, and their individual capabilities always was what ultimately determined their performance.

And AI is just another fixture of that environment today.

There are people who are academically very talented in a vacuum, and there are people who are academically potentially less talented in a vacuum and more capable of utilizing the resources or the analogies or the synthetic content around them.

And this was true before AI as well.

Richard Feynman is an excellent example, if you’ve ever read any of his works.

A lot of his works are essentially practical syntheses of complicated things, very similar in the tradition to Neil deGrasse Tyson.

This is a person who was academically extremely accomplished, but at the end of the day his method was not one of quiet, ivory-tower study.

It was one of synthesizing the real world in a way that he understood it by analogy, and that others could as well.


DUCK. Even though he was a famous Nobel Prize winner, the thing that he enjoyed doing most of all, and found most important in his academic life, was teaching.


DAVID. Absolutely.


DUCK. Having said that, David, what do you have to say to companies that come out and say, “Hey, you don’t need people at all.”

“We’ve got this AI SOC; you just plug your systems into it and it will tell you what to do.”

How well is that going to turn out?


DAVID. Well, good luck.


DUCK. [LAUGHS]


DAVID. I don’t look forward to competing with their pricing model, because they’ll probably be dirt cheap.

But I look forward to taking their customers when their customers realize it doesn’t work, and they go looking for something else.


DUCK. But will they necessarily be dirt cheap?

To do AI on any large scale is not entirely inexpensive, is it?

You get the idea, “Oh, well, you just throw some computers at it,” but sometimes you have to throw a very large number of computers at it, and that’s part of the reason why generative AI is now possible.


DAVID. Just as with deciding how you might modify curricula, the responsible approach for a business, in the face of competitors saying that they use AI, is also to use AI, as we do at SolCyber, for example.

But change your plans to change what it means to be a contributor at your company.

We don’t have a lot of level-one personnel in our company, and that was intentional from the beginning.

It wasn’t entirely informed by AI, but it was informed by this nascent notion of AI.

It was informed by the way we wanted customers to experience our Security Operations Center when they called in.

Not a lot of escalations, but rather a lot of first-touch resolves.

And ultimately, it also means that we hired fewer people.

We didn’t need a call center to sit in front of our level twos, to sit in front of our level threes.

Instead, when you call (if you call, because it’s 2025 and not a lot of people actually pick up the phone and call), you get somebody who is fully actualized to solve your problem.

And that is a product of the AI world.

You don’t get a chatbot, you don’t get a synthesized voice, and you don’t get a human that’s paid so little that they’re competing in wages with the cost of electricity.

Instead, you get somebody who is talented, backed up by AI if they need it, and capable of solving your problem.

So I think that’s a personally responsible, moderate way to solve not only the integration of AI, which is beneficial, but also to avoid some of the pernicious effects of AI.

Which include: poor service at the low end; the feeling that you’re working for a company where you’re essentially managing their chatbot for them now; and a tar-pit on the way to actual skills and actualization within the company.


DUCK. No one put it better than Blaise Pascal, sometime in the middle of the 17th century… well, I’ve heard this quote attributed to any number of people, everyone from Cicero to Samuel Clemens, better known as Mark Twain.

That is, “Please forgive the long letter; I did not have time to write a shorter one.”


DAVID. [LAUGHING] Words are cheap, and getting cheaper.

They’ve always been cheap, and now they’re dirt cheap, as long as you don’t consider the water cost of running an AI.

But they are absolutely dirt cheap at this point.

What’s expensive is making each of those words in that sentence pull its weight, and that’s an art, and that’s something that AI is not doing right now.


DUCK. So, you’re absolutely not against AI.

In fact, you find it very useful, and you might say that it’s an important underpinning of SolCyber.

But it’s not something that you feel the need to mention on the SolCyber website.

When you go to https://solcyber.com, it doesn’t say, “Hey, AI-driven security.”

That’s taken for granted.

You use the prevailing tools that give you the effectiveness you need.

Instead, the website focuses on the human side, and having a human-led service that ennobles, if you like, and enables the humans at the company that you’re trying to defend.


DAVID. You’re absolutely correct.

AI will appear on our website about the same time the calculator that I use on my desk appears on our website.

It’s irrelevant.

It’s entirely irrelevant, but it is absolutely used in the delivery of our services.


DUCK. [LAUGHS] I wouldn’t mind seeing your calculator on the SolCyber website.

Maybe I can sneak it into a blog article, because it’s an HP-42S, isn’t it, or a modern emulator of it?



DAVID. [LAUGHING] It’s a clone.

It’s a SwissMicros DM42n.

Yes, it’s pretty cool.

Bringing it back to cybersecurity…

Lest anyone think that we’re down on AI, there are things that it is excellent at.

As an example, it is tireless.

Like any machine, AI will sit there and review your code until it has read every single line.

It’s really great at security reviews for code; we use it for that.

It will tirelessly seek out the quality issues of your code; the formatting issues of your code; the execution and runtime issues.

It will advise you of changes that you need to make.

It’s just really good at that, and that is nobody’s job today.

Why would anyone want to do that to themselves?

Instead, they could be designing better applications.

They could be designing better integrations with the third-party tools that we integrate into one platform at SolCyber.

So, I definitely don’t want people to get the impression that we’re down on AI’s utilization, or its utility in the modern enterprise.

It’s super, super handy.

But it’s just a tool.

And at the end of the day, this is a human business, a human business that, when it needs to, relies on AI.
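The tireless line-by-line review David describes can be approximated even without an LLM; the point is exhaustiveness. The three checks below are deliberately simplistic stand-ins for what an AI code reviewer would look for.

```python
# A "tireless" reviewer that reads every single line of the input. The
# three checks are deliberately simplistic stand-ins for the quality,
# formatting, and security issues an AI code review would look for.

def review(source: str) -> list[str]:
    notes = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if len(line) > 100:
            notes.append(f"line {lineno}: longer than 100 characters")
        if "eval(" in line:
            notes.append(f"line {lineno}: eval() on untrusted input is dangerous")
        if "password" in line.lower() and "=" in line and '"' in line:
            notes.append(f"line {lineno}: possible hard-coded credential")
    return notes

snippet = 'password = "hunter2"\nresult = eval(user_input)\n'
for note in review(snippet):
    print(note)
```

Unlike a human skimming a pull request at 5pm, this loop never gets bored on line 4,000, which is exactly the property David is praising.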

[SOMBER] I believe quality content has started to kill the internet; it is just an absolute scourge.

And by “quality content,” I mean AI-generated “quality content” in giant air-quotes that won’t possibly come through large enough in the podcast.


DUCK. [LAUGHING] I’ll get them in the transcript… I’ll make a special graphic.



DAVID. That’s what distinguishes your work from the average AI-generated content.

It’s actually really fun to read something written by a human with an intent, with an opinion, with a structure to it.


DUCK. One particularly egregious risk of AI in cybersecurity, when it’s used in a sort-of closed-loop automation, is the propensity to propagate errors.

For example, one scanner starts detecting a file on VirusTotal, so three more start detecting it.

Then nine more start detecting it, and suddenly everybody’s detecting it, and therefore anyone who goes to VirusTotal thinks, “Oh, it must be malware, or there wouldn’t be such consensus.”

But that isn’t real consensus they’re looking at!

It feels like an intellectual cop-out, and it certainly feels intellectually unfair, doesn’t it?


DAVID. It could be; it is certainly a hazard.

I think it can be designed around.

But if you’re already committed to the lowest common denominator of effort, which is to say you’re going to sell your business on the sales pitch of AI with no actual substance in your business…

…then you’re probably not committed to properly filtering the output of that work, either.

I think that the hazard is that the two are often coupled in this market, where you can attract investment using the word “AI.”

That is to say, laziness and AI can be coupled together in a way that ultimately then besmirches the name of what could be a perfectly useful tool in the right hands.
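That closed-loop hazard can be shown with a toy simulation: one scanner flags a file after genuine analysis, and every round the rest copy the prevailing verdict. All the numbers below are arbitrary; only the snowball shape matters.

```python
# Toy model of the closed-loop detection cascade. One vendor flags a file
# after real analysis; every round, herd vendors copy the prevailing
# verdict. The numbers are arbitrary; only the snowball shape matters.

def detection_cascade(n_vendors: int, rounds: int) -> list[int]:
    detecting = 1            # one vendor did its own analysis
    history = [detecting]
    for _ in range(rounds):
        detecting = min(n_vendors, detecting * 3)  # copiers pile on
        history.append(detecting)
    return history

print(detection_cascade(n_vendors=70, rounds=4))
# -> [1, 3, 9, 27, 70]: looks like consensus, but only one vendor analyzed it
```

Within a few rounds, a single verdict looks like unanimous agreement, which is why copied detections are not evidence of independent analysis.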


DUCK. So AI is not just a question of “more tools, more tools,” like yet more cybersecurity agents doing the same sort of thing.

It’s very definitely a benefit in cybersecurity, isn’t it?

But there are some “bane” parts of it, and as a community, as a society, as cybersecurity experts, it’s our duty to push back against those, to make sure that AI becomes a benefit for everybody.


DAVID. Yes, I totally agree with that.

It should be used everywhere appropriately.


DUCK. David, I think that’s a very short and direct way to finish up.

You have a very dispassionate way of looking at this: “There are great bits, and there are bad bits; our duty is simply to get rid of the bad bits and use it for good.”

So thank you so much for your time.

Thank you to everybody who tuned in and listened.

If you want to read some of the human-generated content that I think you will enjoy greatly, don’t forget to head to https://solcyber.com/blog.

Thanks for listening, everybody.

And, until next time, stay secure.


DAVID. Bye, everyone!


Catch up now, or subscribe to find out about new episodes as soon as they come out. Find us on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.


