
Tales from the SOC: Taming the ransomware beast | S1 Ep013

Paul Ducklin
06/26/2025

LISTEN NOW

Join Paul Ducklin and SolCyber CTO David Emerson as they talk about techniques for taming ransomware in TALES FROM THE SOC.

In this episode: Will we ever conquer the scourge of ransomware? Can rules and regulations help? Should we be allowed to pay off cybercriminals at all?


If the media player above doesn’t work in your browser, try clicking here to listen in a new browser tab.


LISTEN IN YOUR FAVORITE APP

Find TALES FROM THE SOC on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app. Or download this episode as an MP3 file and listen offline in any audio or video player.


READ THE TRANSCRIPT

[FX: PHONE DIALS]

[FX: PHONE RINGS, PICKS UP]

ETHEREAL VOICE. Hello, caller.

Get ready for TALES FROM THE SOC.

[FX: DRAMATIC CHORD]


DUCK. Hello, everybody.

Welcome back to TALES FROM THE SOC.

I am Paul Ducklin, joined as usual by David Emerson, CTO and Head of Operations at SolCyber.

Hi there, David.


DAVID. How’s it going?


DUCK. Very well, apart from my stolen bicycle. [ANGUISHED LAUGH]

David, today’s topic, very simply put, is: Ransomware.

And, in particular, just how far can you actually trust a cybercriminal?

So, let’s start off by looking at one country’s regulatory response to how we deal with the human side of ransomware, and that is recently-passed Australian rules that declare, “Thou shalt disclose any payments.”

What do you think of that?



DAVID. I love them.

I think they’re a wonderful benchmark for some of the regulation that could be pragmatic in the US.

There’s something coming in the UK as well.

I really think that this sort of regulation is the best path forward for an industry… by that, I guess I mean a victim class, a victim class that needs clarity on what it is they can do.

I don’t mean to imply that payments are ideal.

You should not be paying ransom.


DUCK. Absolutely!

We discussed that in a previous podcast, didn’t we?


DAVID. Right, yes.


DUCK. “It shouldn’t be your first choice,” I think you said, “but it shouldn’t be your second, third, fourth, fifth, or sixth either.”

But banning the payments outright, that would maybe be a step too far, because it could cause a business that might otherwise just limp through and recover to go bust entirely and everybody loses their jobs.

And so that would be a step too far, probably, wouldn’t it?


DAVID. It would be.

And I also believe it could cause a situation where you’re driving underground all of this black-market activity in payments, in ransoms especially.


DUCK. Absolutely.


DAVID. You really cannot afford to create an aversive environment if you want to have information about a crime that you’re trying to solve.


DUCK. Yes, you’re quite right, because otherwise, companies would just say, “Well, we didn’t pay. We went to a ‘data recovery’ company.”

And they just happened to be based in some other jurisdiction.

And, lo and behold, they got the data back.


DAVID. Yes.


DUCK. So I think here it’s just clarity, isn’t it?

It’s saying, “You can pay. You shouldn’t, but if you have to, then you have to tell us.”


DAVID. Yes.

And they even lay out what that reporting needs to look like.

All of those criteria are very reasonable.

They need to know the affected business, the entity that’s paying, and who exactly is exchanging money.

Is it someone on your behalf?

Is it you, or is it a firm on retainer through some kind of ransom insurance?

They need to know who exactly demanded what from you.

Details about the cyberincident, of course, and communications about the cyberincident.

And then they also lay out, quite reasonably, who’s subject to this: Critical industries of any size, and companies over $3 million in revenue.

So it is not a bakery, necessarily; it is not necessarily any business at all, except the entities that really would be major targets and that we absolutely should have analytics around.

That would be things like publicly traded companies, larger corporations, and critically-situated corporations.

So, utilities, power companies, whatever – anything that would be a critical industry.

So I think it’s very practical, honestly.


DUCK. And I think what I like about these regulations is that although they use the word ‘ransomware’ in the general way that we understand ransomware these days, where sometimes you’re paying for a positive (i.e. the decryption key that will get your computers running again), and sometimes, or as well, you’re paying for a negative; you’re paying blackmail; you’re paying hush money so they won’t reveal the data, or harass the people whose data they’ve stolen…

…it doesn’t matter whether it’s either or both of those under the Australian rules.

There doesn’t even have to have been any malware involved.

It’s just that if somebody is using a cybercriminal intrusion as a way of squeezing you for money, and you pay them, that counts.


DAVID. Really, the only thing that I don’t like about this is that they have a fee associated with not reporting… I don’t remember the exact number.

Actually, it was a weird number: I looked it up; it’s something like AU$20,000.

I don’t personally think that’s enough.

I have seen a financial institution pay $10,000 just not to evacuate their trading floor during a fire drill.

So you really have to keep in mind that some of these businesses very much would be willing to pay AU$20,000 to not have to report something.

I think that sounds a little bit toothless, maybe.

But that’s OK.

I mean, of all the possible problems with the regulation, that’s one that is fixable in the future, so this is a largely practical framework otherwise.

I think that you’re going to see some really interesting prosecutions potentially come out of this acutely, because the law as I read it also did not have a term of limitation.

In other words, it is for past incidents as well.

So, you may even find that details are disclosed which enable law enforcement to make acute gains in the prosecution of these sorts of crimes.

Details like, “Oh, in the past, our hospital wired money to this bank account. Here are the wire details.”

That could be really helpful, potentially.

If nothing else, it’s a chain of evidence that could potentially be significantly enriched by this sort of regulation, just in terms of historical incidents.


DUCK. So, having said that, David, what do you think about a follow-up article that we published on https://solcyber.com/blog?

This is a report by researchers at infosec company Trellix, who dug into the notorious LockBit ransomware crew, and it basically revealed just how untrustworthy cybercriminals can be.


Not only that sometimes when you pay for the positive outcome, namely a decryptor or a decryption key, it might not work properly or even at all, but also that sometimes when you pay hush money, the silence that you pay for is not guaranteed.

What do you make of that?


DAVID. [STRAIGHT-FACED] I’m shocked that criminals are not reliable.

[LOUD LAUGHTER]


DUCK. Sorry… I walked into that one, didn’t I?

I realized at the end that I was asking a rhetorical question, but if you chose to answer it literally, I might look like a bit of a buffoon.

[LAUGHTER]

So I take that on the chin.

Yes, “Who would have thought,” eh?


DAVID. Yes!


DUCK. That you can’t trust people whose initial goal was to screw you over enormously.


DAVID. It may depend who you’re dealing with.

I don’t think the Trellix report really goes into sufficient detail for us to ascertain whether major players versus minor players, individuals working for themselves effectively, versus groups of people…

…I don’t think there is enough data to know who would be reliable.

Because there probably are (in fact, there definitely are, just by the numbers in the Trellix report alone) reliable players in this criminal enterprise.


DUCK. Indeed.

In fact, if you go right back to the modern ransomware scourge…

Not back to the AIDS Information Trojan of 1989, which demanded payment by banker’s draft and did not work out, but I’m thinking of CryptoLocker [in 2013], which was one computer at a time, with a $300 unlock fee.

Those guys did get a reputation that, although they were crooks, if you paid up, you’d probably get a decryption program with a working key pretty quickly.


DAVID. Well, I think that some of them are in it for a longer run than others.


DUCK. Indeed.


DAVID. And if you’re in it for a longer run, you have to have a ‘product.’

Which is, I guess, “Your data back.”

If you’re in it for one time, and you want to make a little bit of money one time, yes, you don’t even need to give the key.

It doesn’t matter.

But I do think that that’s going to be a variable outcome.

I also think it’s going to be really hard for a company to tell the difference.

Which is yet another reason to not pay!

[LAUGHS]


DUCK. Yes, exactly!

Even if you think you “trust” them, they might have screwed up.


DAVID. Yes.


DUCK. Or they might have been screwed over by one of the affiliates that most of these ransomware gangs now work with.

The crooks have turned it into a human-oriented crime where they’ve got “a franchise.”


DAVID. Yes.


DUCK. And we know that even the gangs that are known to have demanded and received ransoms of seven digits in US dollars…

…also work with affiliates who go for thousands, $5000 at a time.

So, like you say, you don’t necessarily know who you’re dealing with.

Many of them historically have shown that they have very poor operational security themselves.

That means, “Who knows who else has got your data?”

Even if you’ve paid them the hush money, and even if their intention was not to let it fall into anyone else’s hands.


DAVID. Yes!

So this really is yet another reason to make payment your final option.

And a final option behind numerous other options, not just the second thing you do when you’ve failed to have hygiene at the most basic standard.

It’s a good time to remind people: [digital] hygiene matters.

These things are preventable.

And most importantly, most of all, these things are typically not sophisticated.

Their targeting may be sophisticated; even, to some extent, the actual code that is executed could be somewhat sophisticated.

But, at the end of the day, this is a crime of opportunity.

And a crime typically perpetrated in environments that are of low operational hygiene.


DUCK. Indeed, yes.


DAVID. You aren’t vulnerable to this if you aren’t worried about the data on that particular machine that was encrypted, right?

And people just act as though there was nothing that they could have done.

But there are numerous things, including backups; including phishing training; including managing admin credentials and admin privileges throughout the environment.

I mean, there are so many things that lead to this extremely basic failure that aren’t something sophisticated at a nation-state level, like a zero-day attack.

That’s not what this is.

This is just a very simple, brutal crime.

And payment should be your last option, behind a whole bunch of other quotidian things you could have done to stop it from happening.


DUCK. The history of ransomware criminality… although the amount that the crooks brought in last year by some accounts (like that of Chainalysis [a cryptocoin tracing company]) did fall a little bit, it fell from just over a billion dollars a year to just under a billion dollars a year.

And with fewer and fewer people paying up, that’s suggesting that actually more and more victims are probably getting compromised.

And, as you say, not with zero-days.


DAVID. Yes.


DUCK. Because if you’re a ransomware gang and you have a zero-day, you’re not going to burn it on a ransomware attack when you know jolly well that your affiliates will put in the hard yards to get people on the hook anyway.


DAVID. Right!

I definitely don’t mean to… it just is not the topic of this podcast, or any of our judgments, to look at what happens to those who’ve been attacked by something like a zero-day.

Because, honestly, if someone who possesses such an attack is after you, the tables are turned; it’s just not even possible, honestly, to defend against some of that stuff.

But people get way too distracted by those exotic exploits that make the news.

What’s happening here actually is a wild dereliction of duty among technology management.

That’s really what’s happening.

Most of these places that get attacked by ransomware… it’s not sophisticated enough to transfer the blame to someone else – to the attacker, for example – who has just done something that you couldn’t possibly have defended against.

That’s not the case.

That’s not what’s happening in 99.9% of these cases.


DUCK. Indeed.

When people get hit by ransomware, I will unfailingly feel sorry for them for their position as victims.

The criminality is inexcusable, and if caught, those criminals should be punished to the full extent of the law.

I’m not suggesting that should not happen in any way.

But the legal systems in many countries have established, notably in the UK, that you can be the victim of a cybercrime, and you can get sympathy on that account, but you can be held to account and possibly even fined a significant amount of money for not taking the precautions that you probably should have.


DAVID. Yes.


DUCK. So where do you think all this is going, David?

Do you think other countries will follow the Australian model?

Or do you think more and more countries may follow what you might call the US model, where payments are not banned, but some really high-profile ransomware gangs are sanctioned so you’re not supposed to pay them.

Do you think that that kind of sanctioning can work, or does it just lead to all kinds of weasely backdoors by victims and crooks alike?


DAVID. Even the Australian scheme is going to result in weasely backdoors.

[LAUGHS]

That is the way of capitalism; I don’t really see that changing.

I do see this as something which will be adopted, because it is practical, and because I think that, for many more incidents than we capture today (which is the best criterion we can apply to this), we will have information being fed to law enforcement.

We will have information being available to analyze how these attacks are being perpetrated; how the money is flowing out of these corporations; and then, to some extent, who is getting attacked.

Among important sectors (I think that’s why Australia called out critical industries), it really is important to know, regardless of the money involved, who is being victimized.

That can really help enforcement efforts.


DUCK. Absolutely.


DAVID. What I want to see is this result in data flow to enforcement, which reduces the likelihood that these attacks are successful, for reasons that may not involve improvement in hygiene, because I don’t know…

I’m just super-cynical, honestly, about people’s ability to help themselves.

Now, that said, there may be some effect here as well, in terms of a company investing in hygiene because they’re going to have to disclose the next time something happens.

Again, that goes back a bit to the penalty, and I think that AU$20,000 is probably not enough to make a company say, “OK, well, we need to do our backups better, or else.”

The reputational damage might be enough, but companies are really bad, people are really bad, at assessing risk.

And so I feel like if this does get adopted… I would be happy to see it adopted elsewhere, but I would like to see some kind of scalar penalty that would disincentivize an especially large and valuable company from saying, “Well, OK, I’ll pay the $20,000 ‘fee’.”


DUCK. I think in most of these Australian regulations, there is some baseline penalty, in sort-of government penalty units (where they change the value of the unit; they’re sort of an exchange rate, based on how much they think they should be worth).

I think there are procedures by which that can be scaled up.


DAVID. OK.


DUCK. And I guess if you don’t disclose when you should have, and you do undergo the processing and the fine, then it will come out in the wash, and everyone will get to know about it, because that will be a matter of public record.

So maybe that was part of their thinking.


DAVID. Perhaps.


DUCK. If you disclose this, then you will be helping us to help you, and everyone else, in the future.

But if you don’t disclose it and we catch you out, then you will be hoist by two petards, if you like.

You’ll have to pay the penalty, and the public will get to know that you willfully avoided something that you should have done.


DAVID. Yes, I hope that that is how that works.

What we’re after here is ultimately fewer breaches; is ultimately fewer ransoms.

But I don’t know that I can promise a change in behavior among these companies.

So, at the very least, the less good – but still very attractive – outcome is that we have better data on when this is happening, and how.


DUCK. Indeed.


DAVID. And that could potentially help law enforcement, and potentially improve the ecosystem.

So we’re all in a place where this doesn’t actually happen as much, because the criminals can’t get away with it as easily.

We know what their bank accounts are; we know their methods; we know their tumblers; whatever.

However it is that this is being perpetrated, it becomes a matter of investigative record rather than just a secret, black-holed by the company that doesn’t want to hear about it.


DUCK. Having said all that, David, do you think that in some ways you could argue that this regulation is kind-of tautological or redundant?

Because, if you have had a ransomware attack, whether it’s paying to decrypt data that was scrambled in place, or paying hush money not to have the nature of the breach disclosed to the world at large, then surely you’ve had a breach either way?

And most countries do have data breach regulations, and therefore shouldn’t these things be disclosed anyway?

Or do you think that the idea is to close one more way of weaseling out, of saying, “Oh well, we didn’t see the data make it into the wild, so we didn’t consider it a breach; we just considered it an intrusion, which is not the same thing”?


DAVID. Intrusion regulations would be altogether harder to define.


DUCK. Yes, you’re right.


DAVID. And so I think that they have focused largely on the outcome of intrusion thus far.

California privacy, GDPR… there are states that have concerned themselves with the data that is being stolen, the data which is being made available.

None of them concerned themselves with the means of acquiring that data, because the means are so manifold that you really could wrap yourself in knots trying to build a regulation around it.


DUCK. In the same way, you remember, in the early days of anti-spam regulations, some countries said, “Oh well, any unwanted email greater than so many emails per hour or minute or whatever is deemed spam.”


DAVID. Yes.


DUCK. So the crooks went, “OK, if it’s 50 an hour, we will do 49, and we’ll keep below the limit, and then you can’t touch us.”

And so there was this continual cycle of how do you actually define what’s bad and good?

Here, however, it’s simply saying, “If somebody is squeezing you for money based on a cyberincident, then that’s what we call ‘ransomware’ and you have to tell us.”


DAVID. Yes.


DUCK. As you said, it’s results-based, and it’s not really open to negotiation, is it?


DAVID. I think it’s the only way you could write it and have it be a practical regulation.

You’re just going to tie yourself in knots otherwise.

But that’s OK, because the results-based stuff could drive the behavior you want anyway.

I don’t think companies willfully are giving away their data, right?

This is not something they actually want; it is something that happens to them.

So, what we can do is close the avenues by which this could happen to them in a sort of ‘unmanaged’ sense.

If payment is more painful now, or disclosable, maybe they will take some effort to not have to pay, or at least maybe the message that payment should be a last effort… not a bulwark, not some kind of indemnity, right?

If a business today, prior to this Australian law, is viewing payment as always on the table, and basically only the surface value of that payment is at risk, well, then they could make a call that says, “I’m not going to spend $2 million on backups. I’m going to spend $2 million on some cybercriminals when they steal my data, and I’ll get it back, so I don’t have to worry about backups.”

That’s the narrative that we must stop!


DUCK. Absolutely.


DAVID. A business needs to realize that they should spend $2 million on backups because it isn’t just going go away when the problem happens and they spend $2 million on criminals.

It’s not an equivalent expenditure.


DUCK. Absolutely.

And gambling with your employees’ or with your customers’ data, in the hope that this year you might get away with it and you might not get breached so you won’t have to pay, is just not acceptable.


DAVID. Absolutely, yes!


DUCK. That rhetorical question I asked at the beginning: “Can you trust the crooks?”

Well, obviously you can’t.

Thanks to the Trellix report, we’ve got some specific evidence that shows examples of how this didn’t work out.

But let’s hope that more countries do follow Australia’s lead, and actually take that GDPR or Cyber Resilience Act approach [the EU CRA, now law], saying, “If you won’t come to the party with the carrot, then we’re going to have a stick to make you comply.”

David, thank you so much for your time.

Hearing your passion come through, and your strong words about how to deal with cybercriminality, in particular that payment should be your 17th…


DAVID. It keeps moving back, but I’m not opposed to that!

[LAUGHS]


DUCK. Thank you so much.

Thanks to everybody who tuned in and listened.

Don’t forget to subscribe to TALES FROM THE SOC in your favorite podcast feed, so you know when each new episode drops.

Please share with your friends, family and colleagues.

Don’t forget you can also find more excellent and educational content on https://solcyber.com/blog.

Thanks once again for listening, and remember…

Until next time, stay secure!


Catch up now, or subscribe to find out about new episodes as soon as they come out. Find us on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.


Learn more about our mobile security solution that goes beyond traditional MDM (mobile device management) software, and offers active on-device protection that’s more like the EDR (endpoint detection and response) tools you are used to on laptops, desktops and servers:


