
Tales from the SOC: Be careful of clickbait | S1 Ep006

Paul Ducklin
11/11/2024

LISTEN NOW

Join Paul Ducklin and SolCyber CTO David Emerson as they talk about the human element in cybersecurity in our podcast TALES FROM THE SOC.

How many strudels do you need in your password? Why does MVP refer to a sports player who is the best in the team, but to software that wouldn’t even make it into the squad? And why do we call them APTs when most of them are just plain old Ts?

Find out the answers in this plain-speaking episode, as Duck and David challenge the status quo without losing their sense of humor or their positive attitude.

Tales from the SOC: Be careful of clickbait | S1 Ep006 - SolCyber

If the media player above doesn’t work in your browser, try clicking here to listen in a new browser tab.


LISTEN IN YOUR FAVORITE APP

Find Tales from the SOC on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.

Or download this episode as an MP3 file and listen offline in any audio or video player.


READ THE TRANSCRIPT


[FX: PHONE DIALS]

[FX: PHONE RINGS, PICKS UP]

ETHEREAL VOICE.   Hello, caller.

Get ready for “Tales from the SOC.”

[FX: DRAMATIC CHORD]


DUCK.   Hello, everybody.

Welcome back to another episode of Tales from the SOC.

I am Paul Ducklin, joined as usual by David Emerson, CTO and Head of Operations at SolCyber.

Hello, David.


DAVID.   Hey there.


DUCK.   So, David, why don’t you kick off by talking about something that you’re very keen on, which is a recent announcement from NIST, the National Institute of Standards and Technology in the United States, to do with passwords?


DAVID.   I think we mentioned it, actually, in the last podcast.

It’s just a wonderful development.

NIST has finally decided to stop being prescriptive about the details of password complexity, password length, and so on.

Not that they don’t care at all, but that they’re focusing on mathematical complexity, not on theoretical minutiae, not on the prescriptive complexity of, “You’re going to have an uppercase letter, and a lowercase letter, and whatever.”

I think that is really positive.

I mean, not only is it sound mathematically, but I think it’s a sign of an immature field that we have, in general, all these prescriptive and ineffective government standards that get tossed around for cybersecurity.


DUCK.   The problem with all those password rules is that you can’t have an algorithm that specifies how to make something more random, can you?

“It must have a capital letter in it.”

Well, what is everybody who speaks the English language going to do?

They’re going to put it at the beginning, because generally we don’t have capital letters in the middle of words.

And when they say, “Oh, you need punctuation,” and the system won’t accept the password, you go, “Oh, I know what, I’ll stick an exclamation point or a period at the end.”

Like, who would have thought? [LAUGHS]
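[NOTE. To put rough numbers on Duck’s point, here is a minimal Python sketch comparing the effective search space of a “rule-compliant” password chosen in the predictable way with that of a longer, plain passphrase. The dictionary size, alphabet size, and word-list size below are illustrative assumptions, not figures from the episode.]

```python
import math

def bits(keyspace: int) -> float:
    """Entropy in bits for a secret chosen uniformly from `keyspace` options."""
    return math.log2(keyspace)

# A "complex" password built the predictable way: one of (say) the 10,000
# most common words, capital letter forced to the front, and a digit plus
# one of two punctuation marks tacked on the end.
predictable = 10_000 * 10 * 2
print(f"Rule-compliant but predictable: {bits(predictable):5.1f} bits")  # ~17.6

# What the complexity rules *imagine*: 8 truly random characters drawn
# from an alphabet of roughly 70 symbols.
random_8 = 70 ** 8
print(f"Random 8-char mixed password:   {bits(random_8):5.1f} bits")     # ~49.0

# Four words picked at random from a 7,776-word Diceware-style list:
# no capitals, no digits, no 'strudels' -- just length.
passphrase = 7_776 ** 4
print(f"Four random lowercase words:    {bits(passphrase):5.1f} bits")   # ~51.7
```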


DAVID.   Yes, you’ve got to start making people responsible for outcomes.

There are a few companies that take cybersecurity seriously and build competent programs around their expertise, but the vast majority of the industry and its customers just aren’t there yet.

The vast majority of the vendors are still playing this highly diverse game of whack-a-mole, but not with an incentive aligned to actually holistically fixing issues.

And the vast majority of their customers – companies, really – are still unsure whether they even want to make the investment.

They’re playing a game of risk mitigation.

The incentives just aren’t there to make the companies actually care about solving the problem, and it’s good enough right now simply to “try hard” and throw up your hands when you get breached.

I think that the industry will be mature when the government starts getting out of the minutiae, and starts regulating responsibility.

Something like, “Solve the problem as you will, but it’s not OK to get breached” would be a great next step, and we’re starting to see some of that.

For example, you have CMMC [the US Department of Defense’s Cybersecurity Maturity Model Certification, built on NIST standards]… that will probably be the closest in spirit to such a thing, if it truly makes it to production in the coming year or two.

I think that focusing on outcomes and focusing on responsibility is the way to go if you’re regulating this.


DUCK.   Yes, it was quite interesting to see that the SEC, the Securities and Exchange Commission in the US, fined four American companies relating to the good old SolarWinds breach four years ago.

This ironically included two cybersecurity companies, fined for basically telling half-truths about how badly they were affected by the breach; for not giving a full and frank analysis of what had happened.

The SEC simply said that when you do these federal disclosures, [FORMAL VOICE] “half-truths are not acceptable.”

In those very plain words.


DAVID.   Yes, and I like that it doesn’t get into what the nature of that half-truth was, it just gets into the fact that it was a half-truth, and I think that that is much more helpful as a regulation.

I think it’s helpful to the industry as an expectation.

I don’t think it’s impeachable, but the problem is that people aren’t even doing the bare minimum.

They’re not making an earnest effort at being more secure.

You can just sort of say, “Well, I got breached, but I followed all the NIST guidelines. My passwords had a bunch of strudels [@ signs] in them, or whatever.”


DUCK.   [LOUD LAUGHTER]


DAVID.   It’s really not cybersecurity.


DUCK.   No, it’s just checking some boxes and seemingly doing the right things.

And, if you like, having something you can do to make you feel good.

“Busy work,” or whatever the term is.

I know we’ve spoken before about patching on this podcast, haven’t we?

About how the risk of not patching, generally speaking, greatly outweighs the risk of actually doing it.


DAVID.   “Everyone does not patch,” yes. [LAUGHS]

Which is an illness; it’s a sickness that we find.

The perversity of, “Oh, my Exchange server or my Active Directory server. These are too important for me to patch, because what would happen if they had an outage?”

I get the idea, but it’s not the reality.

They’re so important that they *must* be patched.

That’s the reality.

Systems like that absolutely, positively must be patched.


DUCK.   There’s a big difference between saying, “Well, maybe I will stage my anti-virus update so that I do a few in the first hour, and then a few in the next couple of hours, and then the rest at the end of the day…”

There’s a big difference between that kind of staging and saying, “You know what? I’m going to put off my Windows patches or my iPhone patches for three weeks, or three months, or for so long that eventually I forget that I even needed to do it.”


DAVID.   Yes, I don’t mind people making the claim that you shouldn’t have an unstaged monoculture of version control, where you’re basically just chucking patches out to your entire environment all at once.

I get it – that’s not a great idea!

But there’s a happy medium somewhere.
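[NOTE. As a minimal sketch of what that happy medium might look like: staged rollout rings, where a small canary group gets a patch first and the rest of the fleet follows the same day. The ring names, fractions, and soak times are illustrative assumptions, not SolCyber guidance.]

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Ring:
    name: str
    fraction: float        # share of the fleet in this ring
    soak_time: timedelta   # how long to watch for breakage before widening

# Illustrative rollout plan: canary first, then early adopters, then everyone.
ROLLOUT = [
    Ring("canary",  0.05, timedelta(hours=2)),
    Ring("early",   0.25, timedelta(hours=6)),
    Ring("general", 0.70, timedelta(hours=0)),
]

def schedule(fleet_size: int) -> None:
    """Print when each slice of the fleet gets the patch."""
    elapsed = timedelta()
    for ring in ROLLOUT:
        hosts = round(fleet_size * ring.fraction)
        print(f"T+{elapsed}: patch {hosts} hosts ({ring.name})")
        elapsed += ring.soak_time
    # The point: the whole fleet is patched within a day, not deferred
    # for three weeks, three months, or forever.

schedule(2000)
```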

We allow ourselves to do things that a civil engineer would be embarrassed to do.

Things that they would probably be imprisoned for doing!

In tech, “minimum viable product” [MVP] is default practice, and oftentimes the definition of that is stretched so thin as to mean “almost impractically vulnerable to security issues, to performance issues, and so on.”


DUCK.   Yes, I’ve always despised (yes, ‘despised’ is the right word) that MVP idea.

And I’ve always found it such a huge irony that MVP, in the sense of “sneaking through stuff that plainly isn’t good enough” in software engineering, is the same initialism that we have for Most Valuable Player.

The person who’s done the very best job in a sports team gets the same acronym as a product that, quite frankly, isn’t fit for purpose.

What would it take to shake software engineering out of that idea of, “Oh, well, we’ll make something that just about works. We’ll shove it out there. Once it’s in the wild, we’ll fix it”?

How do you change that culture?


DAVID.   Regulation of outcomes.

I think it’s the same thing.

You know, in software development, there are these minutiae you can get lost in.


DUCK.   Yes.


DAVID.   Things that you should do: linting, and all that other stuff: ultimately, it’s good practice, and nobody’s ever going to say it’s not.

But if you’re regulating minutiae, you’re permitting the actor that is truly driven by some other misaligned incentive – such as just profit, pure profit…

…you’re permitting them to find loopholes.

It’s just an inevitability, and I don’t really think that you’re going to solve it by coming up with more minutiae.

I think you’re going to solve it by expecting that the products are fit for purpose.

The MVP concept works if it’s genuinely pursued as lightweight development of something that is fit for purpose, released to market, and then iterated upon.

That’s the part that works.

The part that *doesn’t* work is building the plane while you’re flying it.

That doesn’t make any sense at all, and that’s how MVP gets interpreted.

It’s just essentially a misaligned incentive.

There’s no reason to build a product fit for purpose when you can sell a product that may be entirely falling apart on the customer, and there’s no real downside to that.


DUCK.   Now, David, there’s a school of thought that says that when it comes to cybersecurity, you could actually just forget about technology, all of the “more tools, more tools,” and solve the issue entirely through regulations relating to liability.

That seems like a bit of a one-sided view, but do you think there is some truth to it?

Just make people liable for the mistakes that they bake into their software, so that they have a strong incentive not to bake those mistakes in any more?


DAVID.   It would be better than a lot of what’s going on now.

But I think that you would see probably a pyramid scheme of liability like you have in shipping.

I think you would eventually just get yourself into a situation where all anyone is trying to do is abstract themselves from liability, to essentially shift liability, and that shifting could become so abstract as to be unsupported.


DUCK.   Right.


DAVID.   But I don’t think it would be a bad idea.

The spirit of the idea is there, in the sense of, “We’re not going to tell you what a good password is, but don’t make your passwords the only thing that you’re doing because you’re going to be liable when you get breached.”

That’s the sort of thing I like to see: some kind of innovation.

People trying something that isn’t written into some minutiae, but if that means they don’t ever get breached, that’s good enough for me.

You know, it doesn’t have to be RSA-4096 all the time.

It might be something social.

It might be something like MFA [multi-factor authentication], which honestly is not technically impressive, but is essentially just a practical defense.

I think that’s the kind of optimization you get when you’re concerned about outcomes.
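[NOTE. As an example of “not technically impressive, but practical”: a minimal sketch of TOTP, the one-time-code scheme from RFC 6238 that most authenticator apps implement. The Base32 secret below is a well-known documentation example, not a real credential.]

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 one-time code for a Base32 secret."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // period          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret widely used in TOTP documentation and demos.
print(totp("JBSWY3DPEHPK3PXP"))   # matches what an authenticator app shows
```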


DUCK.   So, do you think that one of the problems with this liability-based approach is that it would just end up with companies not writing better software, but actually diving, as you say, into a whole load of insurance and reinsurance stuff?

Like we sometimes seem to see with ransomware, where people go, “Oh, well, the crooks want a million dollars. I’ve got cyber insurance, so if they’re willing to pay, maybe we’ll just go down that route, and that’s quicker and easier than not getting breached in the first place.”


DAVID.   You will definitely see that.

I don’t doubt it.

Hopefully, the industry is not so set in its ways, and it will recognize the unsupported premium.

And I think the cyber insurers are smart.

I think we will see them updating their model to simply say, “No, at any cost, we won’t do it,” or, “Here’s some absurd premium that we know you won’t accept.”

But it’s early days, and I think cyber insurance may be morphing into a product that is not for everybody, that may not be accessible to certain industries or to certain kinds of businesses, because of their prevailing practices and the data that they handle.


DUCK.   Your thinking there, as I understand it, is that if you want to get cyber insurance in the future…

…you should get yourself into a position where it actually makes sense to insure you, because it’s probably going to be a genuine disaster that needs to get paid out for, rather than that you’re just taking the insurance so you can continue driving badly.


DAVID.   I think I’m arguing for actuarial responsiveness, really.

In the future, if you’re looking for cyber insurance, don’t be surprised if you get told that the practices of your industry, or the prevailing practices of your profession, such as software development, come with a price.

Because in some industries, software developers build critical systems that would not pass muster if they were bridges, so to speak.

And that might simply be the way that industry acts.

Then you might think, “Oh, well, gee, we should maybe spend the money on better software development so that our insurance premiums go down?”

That feedback loop has to exist.

Because as long as that feedback loop doesn’t exist, the answer will always be, “Just buy more insurance, and then we don’t actually have to fix anything.”


DUCK.   And do you think that banning ransomware payments would actually help by forcing people to take better precautions so they don’t get hit in the first place?

Or will it just drive the whole thing underground?


DAVID.   I’m actually conflicted on that.

I believe that we should probably not ban ransomware payments.

I believe we should require their disclosure.


DUCK.   I think I’m with you on that.

I think that’s a good way to do it.

After all, it’s easy to say, “How dare you pay” when you’re not the person whose business is looking down the barrel of the gun from the wrong end.


DAVID.   Yes.


DUCK.   Well, we’ve got those SEC rules now saying, “Hey, no half-truths.”

So if you have a ransomware attack, you can’t just say, “Oh, well, we had an ‘incident’,” close air quotes, “and now it’s fixed.”

You need to come out and explain what happened, and why it happened.

And in my mind, most importantly, what you are going to do that would give people confidence that it’s much less likely to happen again.


DAVID.   Yes, I totally agree.

Payment is inevitable sometimes.

But you must disclose.

And I just think that’s for the health of the market; I think it’s for the health of the practice.

Because where we’re at right now is in the most cynical realm of, “I’ll just pay the ransom.”

That’s insane!

Companies should understand that they should not be paying the ransom.

They should have the freedom to do so if they want – I think it’s an affront to freedom to say that you cannot pay. (I don’t even know what that means, because they’re still going to do it.)

But if you’re forced to disclose, at least you’re creating this market in which hopefully the criminals ask for more money, and pretty soon, just like the insurance premiums, it becomes unsupportable.


DUCK.   I’ve always seen ransomware attacks, even when no data is explicitly stolen and used for blackmail by the crooks, as a breach.

After all, you’ve had crooks in your network, prowling through it, installing arbitrary software wherever they wanted, creating accounts to do all this stuff, and then unleashing their malware…

…how is that not a breach?

Even if they didn’t explicitly set out to steal your SQL databases, they’ve still been wandering around, and probably have a better map of your network than you do.


DAVID.   Yes, of course it is.

It’s definitely still a breach.

But the root point here is that paying the ransom should be culturally understood to be a poor response; to be the thing that you should not have had to do.

Even if, maybe, you do have to do it, it should not be your first option, or your second, or your third, or fourth.

‘Indifference to inconvenience’ is the spectrum that we should be operating in with something like ransomware, which is fundamentally not a technical challenge.

It’s not something that is especially sophisticated.

It’s just a gangster tactic.

I mean, it’s really quite simplistic and brutal.

And the reason that we’re even talking about paying a ransom of millions of dollars is the operational unpreparedness of the victims.


DUCK.   In a way, those recent SEC rulings that we just talked about made this point.

They sort of said, “It is possible to be the victim of a cybercrime, and for people to feel sorry for you, and to be aggrieved at what has happened *to* you…

…but also to be aggrieved *at* you for what you did afterwards, and for the things that you could’ve, should’ve, would’ve, but didn’t do before.”

You could be a witness in the trial against the cybercriminals who attacked you, and yet also end up as the defendant in a civil trial where plaintiffs are coming after you saying, “I really didn’t expect you, of all people, to let this happen.”


DAVID.   I see no issue with that.

I love markets – and most companies pay lip service to markets as well when they are pleased by their outcomes – but let’s actually love markets at their core.

These companies are actually participating in a market.

It’s a *criminal* market, and as long as they do things like pay the ransom, the ransom should go up because there’s a market for paying that ransom.

And likewise, as long as they are not held responsible for the damage that they do by not securing their data, whether that’s ultimately resulting in a paid ransom or just lost data…

…as long as they aren’t held responsible for that, there’s simply no incentive to change.

I don’t see a reason we shouldn’t acknowledge that.

You should not have to pay the ransom; it should be an embarrassing thing that you must disclose if you are in a position of having to pay that ransom.


DUCK.   Yes, it really should be cringeworthy, shouldn’t it?


DAVID.   Mmm-hmm.


DUCK.   It shouldn’t be, [WOE-IS-ME VOICE] “Oh, golly, they were state sponsored actors.”

Which does not mean that they had a federal grant to go to drama school, or film school…


DAVID.   [LAUGHING] I wish.

That would be kind of cool.


DUCK.   So, David, to finish up.

What can we do alongside all of this to rein in what you might call some of the excesses of cybersecurity journalism?

So that this [DRAMATIC TONE] “advanced persistent threat” [APT] excuse doesn’t hold as much water as it has for so many years?

Where ‘advanced’ means we missed it; ‘persistent’ seems to mean [LAUGHS] we tried rebooting, but it didn’t go away, and ‘threat’ just means threat.

Nobody wants to say, “Oh, I got hit by the latest T.”

They want to be able to say, “Arrrrrgh! It was an *APT*.”

How do we rein all that in?


DAVID.   The journalistic FUD [fear, uncertainty, and doubt] is because that sells.

That’s how you get action news.


DUCK.   Yes.


DAVID.   This is just the cybersecurity equivalent of action news.

The academic FUD exists because that’s how you get your impact score up: getting published, and getting cited.

As long as those incentives are altogether tied into whether something is relevant to security or not, we’re going to have a problem.

And I really think that you’re not going to fix the academic incentive for ‘impact score’.

You’re not going to fix the action journalism selling, clickbait, and all that stuff.

You’re not going to.

You’ve got to find some way to unlink those things, though, from what represents actual security at an enterprise.

As long as they’re not linked, you really can just ignore them.

I mean, I don’t mind that action journalism exists…

…I just don’t want it being the primary source of professional conduct standards!


DUCK.   Yes.


DAVID.   Anyway, I just think you have to find a way to decouple it.


DUCK.   David, I think that’s a great place to stop.

Otherwise, we could probably get so excited that we won’t ever stop. [LAUGHTER]

So let me say thank you so much for your time, as always, and let me say thank you to everybody for listening.

If you want to get in touch with SolCyber, you can just send an email to amos@solcyber.com.

That’s Amos the Armadillo, the SolCyber mascot.

And if you want to catch up on some great community-focused articles that tell it like it is in plain English, not trying to sell you something, then please head to https://solcyber.com/blog.

Thanks for listening, and until next time, stay secure.


Catch up now, or subscribe to find out about new episodes as soon as they come out. Find us on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.
