Join Paul Ducklin and SolCyber CTO David Emerson as they talk about the human element in cybersecurity in our podcast TALES FROM THE SOC.
How do you keep software engineering safe and secure? How do you stop the endless upselling? Do you really need those heated seats?
Find out the answers in this plain-speaking episode, as Duck and David challenge the status quo without losing their sense of humor or their positive attitude.
Find Tales from the SOC on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.
Or download this episode as an MP3 file and listen offline in any audio or video player.
[FX: PHONE DIALS]
[FX: PHONE RINGS, PICKS UP]
ETHEREAL VOICE. Hello, caller.
Get ready for “Tales from the SOC.”
[FX: DRAMATIC CHORD]
DUCK. Hello everybody.
Welcome back to “Tales From the SOC.”
I am Paul Ducklin, and I am joined by David Emerson, CTO and Head of Operations at SolCyber.
Hello, David.
DAVID. Hi there.
DUCK. David…
As you know, recently I did one of my weekly 60 Second Security videos about a software bug in some ransomware that made it easy to crack, without cracking the encryption.
They just did a shabby job with the random number generator library, so there weren’t many different passwords it could choose.
Oh, dear!
What a blunder!
However, that’s exactly the sort of library blunder that is a real problem for legitimate software, isn’t it?
“My programming library has a function called Randomize; I’ll just call that.”
“Surely with a name like that, it must be fantastic?”
So how do you avoid suffering from the problems that this ransomware did…
…in code where it matters an awful lot more?
DAVID. There are a number of things you can do.
I would say, first and foremost, that the concern cited is best mitigated with ‘intentional design.’
Too often, I see codebases that are sprawling estates of no particular intent.
I really think that that’s most of the evil when it comes to, “Oh, I’m going to take this library; it generates a random number.”
“That random number looks random, so that’s what we’re going to use now.”
And then, worse yet, often the developer next to you won’t even use the same library to generate their random numbers.
They’ll pull in another one.
And who knows what the algorithm is for that?
That actually is a pretty basic issue that I would sort out with intentional design.
From there, there is a spectrum of potential remedies, from linting, to code quality analysis, to automation of the pipeline.
We can talk about all of those, but you have got to start with intentional design.
And so, when you came up with a feature that required a random number…
…why wasn’t there an *intent* behind pulling in your chosen random number code that consisted of more than just, “OK, this looks like a string and it seems to be random”?
DUCK. So it’s a little bit more than saying, “Hey, what we need is a library that does random numbers.”
What you need is to decide what you want, and then find a library that actually meets those requirements, rather than merely having the right sort of name.
DAVID. Yes, and find a library that you can consistently use for your other features that are also intentionally designed.
Again, I think the ‘expedient implementation’ is oftentimes what you see when you’re pointing at a security vulnerability introduced by a library.
The choice probably wasn’t even as thoughtful as, “OK, what would be the most random of our random number generator choices?”
Or, “What would be the most secure?”
It probably wasn’t even that thoughtful.
It was probably, “I need a random number.”
“So I Googled and I found this on StackExchange, and it created random numbers for me.”
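[To put that blunder into code: here is a minimal Python sketch, not from the episode, contrasting a guessable, poorly seeded generator with a cryptographically secure one. The password functions and the alphabet are hypothetical illustrations.]

```python
import random
import secrets
import string
import time

ALPHABET = string.ascii_letters + string.digits  # hypothetical password alphabet

def weak_password(length=16):
    # DON'T: random.Random is a deterministic generator, and seeding it from
    # a seconds-resolution clock means an attacker who knows roughly when the
    # password was created can enumerate every candidate, which is essentially
    # the ransomware blunder described above.
    rng = random.Random(int(time.time()))
    return "".join(rng.choice(ALPHABET) for _ in range(length))

def strong_password(length=16):
    # DO: the secrets module draws from the operating system's
    # cryptographically secure generator, so every character is unpredictable.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(weak_password())    # looks random to the eye...
print(strong_password())  # ...and so does this, which is exactly the trap
```

[The two outputs look equally random to the eye, which is exactly why “it looks random” is not a design decision.]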
DUCK. The problem goes a lot deeper than that, doesn’t it?
Because it’s increasingly easy, with today’s programming languages, just to go import audio, given that it’s an audio library you want.
And somewhere in amongst all of that, there’s some kind of random number generator thing that might get pulled in, or a hard disk reading tool, or an encryption algorithm, or any sort of algorithm.
There’s the law of unintended consequences.
You think, “Well, I’ve chosen a well-defined audio library.”
But who knows what other things are dangling off the bottom?
How do you control that?
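[For what it’s worth, Python’s standard importlib.metadata module can at least show you what a library declares that it depends on, one level down. A minimal sketch; “audio” is a hypothetical package name standing in for the example above.]

```python
# Print the declared dependencies of an installed package, one level deep.
# "audio" is a hypothetical package name from Duck's example.
from importlib.metadata import PackageNotFoundError, requires

def declared_dependencies(package):
    try:
        return requires(package) or []
    except PackageNotFoundError:
        return ["(package not installed)"]

for dep in declared_dependencies("audio"):
    print(dep)
```

[That only goes one level down; walking the whole tree, for instance with a tool such as pipdeptree, shows how quickly “one audio library” turns into dozens of packages.]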
DAVID. It goes back to intentional design again, at its core.
You can decide that, “This is a more parsimonious or focused library than that.”
DUCK. So, a sort of less-is-more approach?
DAVID. Yes, ‘less is more’ in a direct sense, but there is also ‘less is more’ in a design sense.
Broadly speaking, did you really need to pull in a library to accomplish what you’re trying to do?
Are you going to be able to call that library that you’ve now vetted to some extent, and have decided meets your needs?
Are you going to be able to use it again and again and again, rather than one third of your estate being based on this audio library, one third on that one, and one fifth on another, so it’s a mess?
Now we get into Dependabot [GitHub’s automated dependency checker]; static analysis; suggestions from various vulnerability scanning tools that you upgrade a library because a vulnerability was found deep in some dependency that’s actually a cryptography module…
I think these tools are great!
I love static analysis; I love Dependabot.
You really enter a different realm, though, with your code, in terms of weighing regressions against suggestions.
And it’s not always trivial.
Dependabot can come up with really just an unholy number of things in a reasonably sized code base.
I guess, “Are you feeling lucky?”
Are you just going to hit [Merge]?
Or are you actually going to weigh it against regressions?
In any case, the more that you have, the more that you have to do to maintain what you’ve got.
The unwelcome conclusion of pulling in all these libraries, each with dependencies of its own, is that if you’re doing it right, you’re now also assessing a huge surface area.
DUCK. I suppose, if you’re lucky, everything breaks early on, so you notice in testing, and you have no choice but to fix it.
But it’s quite hard to dot all the I’s and cross all of the T’s in advance and yet have regular releases for things like security updates, isn’t it?
Even though you’re not doing CrowdStrike’s, “Hey, let’s test it on the customer!” [LAUGHS]
DAVID. It’s impossible.
You just do as much automation as you can.
Automate everything in your pipeline that you possibly can; write good tests; and expect that good tests get written at the time new functionality goes in.
It sounds really simple.
But it’s not really that simple.
At the end of the day, the more coverage you have with your testing, and the more confident you are that the coverage is analyzed at every release…
…the more assured you can be that, if you find something, it’s probably going to be relatively small, or relatively easy to add to your test suite so that you catch it in the future.
At a baseline, you should be doing code formatting, linting…
Those sound silly, but also they cost you almost nothing, because a robot is gonna do it for you.
So get a linter in place; do some code quality analysis.
That code quality analysis might sound dumb today.
It might even be an initial lift to get things in place.
But now you’ll know when things are unnecessarily duplicated in your code, or when the spacing is off, and so on.
And the exercise of putting linting, code quality analysis, and static analysis in place is oftentimes coincident with the exercise of putting automated testing in place.
And so, having that all bundled into your pipeline gives you the ability, and also fewer excuses, when it comes to automating your code tests.
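[As a flavor of what “write good tests at the time of new functionality” can look like, here is a minimal pytest-style sketch; the parse_port function is a hypothetical feature invented for illustration.]

```python
import pytest

def parse_port(value):
    """Hypothetical new feature: turn a configuration string into a TCP port."""
    port = int(value)  # raises ValueError for non-numeric input
    if not 1 <= port <= 65535:
        raise ValueError(f"port out of range: {port}")
    return port

# Tests written alongside the feature, so the pipeline runs them on every change.
def test_parse_port_accepts_valid_values():
    assert parse_port("443") == 443

def test_parse_port_rejects_out_of_range():
    with pytest.raises(ValueError):
        parse_port("70000")

def test_parse_port_rejects_garbage():
    with pytest.raises(ValueError):
        parse_port("not-a-port")
```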
DUCK. Now, that’s not an excuse for saying, “Well, we won’t bother with human-oriented code review.”
That’s really just, “Must be at least this tall to go on the ride,” isn’t it?
DAVID. Yes.
It’s also harder to make claims about human-oriented testing.
Because human-oriented testing is what fills in the gaps, and the gaps depend on what the product is.
It’s absolutely necessary.
It’s what CrowdStrike didn’t do, as an example.
DUCK. I believe they did admit that they assumed their data *testing* tool and their data *consuming* tools were somehow in lockstep.
And, of course, they weren’t.
DAVID. It is hard to make claims about what practical testing looks like for any given product.
If you’re Apple, sure, maybe your hardware scope isn’t that large because you fully control the entire stack.
But it really does depend on the product.
API interfaces tend to be…
DUCK. [INTERRUPTS, FEIGNING INCREDULITY] Did you just say “API interfaces”?
DAVID. Well, yes, I mean…
[EMBARRASSED] Oh, I see.
DUCK. [SATIRICALLY PEDANTIC] “PIN Number.”
“CD Disk.”
“RAM Memory.”
[LAUGHING] Sorry, David.
DAVID. [LAUGHS]
I mean, it’s something that you could fully document, right?
And maybe there are some ways in which humans interface with an API that might vary, or in some way trigger it into a bug.
But the fact of the matter is that you probably know the entirety of its functionality, or can document the entirety of its functionality.
And so then code testing for something like that is much easier.
Your product has to be fit for purpose, without being excessively complex, before you can make any change at all.
And so I think going back to intentional design, going back to, “Simplify so that you don’t have more code, which means you have more to do.”
Those kinds of practices will, as much as you can execute them, make it more possible for you to respond to something like, “Oh, there’s this finding, and now we need to change the library we use, or we need to change our method here.”
It also makes it so that a change to a method or to a library is something that can be consistently applied across the entirety of a codebase.
DUCK. Adding stuff is often exactly the wrong thing to do!
And this idea that, “Oh, look how many millions of lines of code we’ve got; look how many more features we’ve added”?
Sometimes *removing* code is important, for reasons that should seem obvious.
DAVID. Yes, removing code is very important.
You should be doing that every now and then.
There are other forms of simplicity too.
Some people like to put arbitrary limits, for example, on the size of a function.
“Don’t make it more than a certain number of lines.”
Of course, it depends on the language, and it depends on your sensibility.
But there’s a truth there, which is that a function shouldn’t be an inscrutable operation that you’re calling.
You shouldn’t be calling a function that does eight billion things when you’re only using one of them, because that’s going to be very, very difficult to analyze.
It’s going to be very difficult for other people to learn.
No!
Make those things much smaller.
And that might actually make your codebase look larger, because now you’re duplicating some stuff.
But, at the end of the day, it will be more secure, because the individual functions in that particular piece of code are better defined.
So, simplicity can take a number of different forms.
It isn’t always about the absolute number of lines of code, but it’s often correlated with that.
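[A hypothetical Python sketch of the sort of simplification David means: one do-everything function versus smaller, well-defined ones that can be read, tested, and reused independently.]

```python
import hashlib

# Before: one inscrutable function that validates, normalizes, AND hashes,
# even when the caller only wanted one of those things.
def process_record(raw):
    if not raw.strip():
        raise ValueError("empty record")
    cleaned = raw.strip().lower()
    return hashlib.sha256(cleaned.encode()).hexdigest()

# After: three small, single-purpose functions. The codebase may look a
# little larger, but each piece is well-defined and easy to analyze.
def validate_record(raw):
    if not raw.strip():
        raise ValueError("empty record")

def normalize_record(raw):
    return raw.strip().lower()

def fingerprint_record(cleaned):
    return hashlib.sha256(cleaned.encode()).hexdigest()
```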
And I think that cleanup effort, like any other kind of technical debt work…
…is so often deprioritized.
Almost no company, even the well-intentioned, makes enough time to handle technical debt.
That’s just how it is.
DUCK. David, do you want to just explain for listeners what we mean by ‘technical debt’?
I don’t like that term; I just think it means that you’re paying for the sins of the past when you took giant shortcuts.
But it is quite a well-known term.
So how do you understand it, and what do you do about it?
DAVID. The closest simple term that I can use for technical debt that would be accessible to almost anybody who has maintained something is…
…’Maintenance.’
Unexciting maintenance activities, which you needed to do in order for the thing to keep running.
Technical debt is a fancier form of that.
It’s dereliction of duty with regards to maintenance.
If you aren’t performing regular cleanup in your code; if you aren’t performing regular analysis of what your code is doing, and whether there are better ways of doing that thing now than when you wrote it…
So, technical debt can take a lot of forms.
It can also, by the way, be used as an excuse.
DUCK. Exactly!
DAVID. A bat to hit the non-technical over the head with, any time you just don’t feel like doing their feature.
So, I’m not excusing that activity; I think that’s irritating in its own right.
But the reality, more often, is that I see technical debt being used as a time excuse.
I see the time not being made to resolve what are fundamentally maintenance activities.
And then, eventually, a ghost ship of a product results.
DUCK. Yes, a sort of ghost ship with its holds absolutely jam-packed full of rotting produce that nobody wanted, even when you put it in.
But it was convenient because you needed some ballast at the time.
DAVID. [LAUGHING] Pretty much, yes!
DUCK. That’s particularly significant in the cybersecurity industry, isn’t it?
As the SolCyber website likes to say, with giant air-quotes, “More tools, more tools.”
And as Amos the Armadillo [the SolCyber mascot] likes to say, “Tools alone don’t cut it.”
DAVID. Yes.
DUCK. Because it’s easy just to throw more things at things, as it were, and hope that some of it will stick.
It also gives you more SKUs, and it gives you more to sell, and it gets the sales engineers and the sales people excited.
But it can make things much, much more complicated for the customer.
What does a typical user, who doesn’t know much about programming but knows that sometimes programmers let them down terribly badly, such as when there’s an exploitable vulnerability…
…what can they do to make sure that the people they’re buying software from aren’t just throwing out ‘more tools, more tools’?
One solution is said to be, “Well, you need a Software Bill of Materials,” [SBOM] which is like the list of ingredients you get on food packaging.
But that doesn’t help if you don’t know what all those ingredients are about.
It’s just a list, and it could be a very, very long list.
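[That said, an SBOM is structured data, so even a short script can make the very long list searchable. A minimal sketch, assuming a CycloneDX-style JSON file; the file name sbom.json is hypothetical.]

```python
import json

# List the components declared in a CycloneDX-style JSON SBOM,
# so the "list of ingredients" at least becomes easy to search.
with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    print(component.get("name", "?"), component.get("version", "?"))
```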
DAVID. You know, you and I, when we buy a home PC…
…you’re not going to know.
So don’t think too hard about this.
Do your software updates when they come out; if there’s a security update, patch.
It is crazy how many people still do not patch their software!
Just get a cup of coffee – it’s not going to kill your day.
And the pain of regret is so much greater than the pain of discipline in this case.
Now, kick it up a notch from that.
A little bit more sophistication; maybe some supply chain know-how; some vendor procurement responsibilities.
What you should be doing is having a formal review process.
That review process could be, “We only use open-source things with a certain level of review processes around them, or with a certain number of users.”
Or, “We only accept certain certifications,” like maybe FedRAMP if you’re in a situation where you’re buying SaaS services for the defense industrial base.
There are a number of different proxies for thoughtful implementations of software.
But the fact of the matter is that you have some kind of a standard, or you won’t be able to evaluate what you’re buying.
And then you can kick it up even another level, from mere supply chain to, “What are you trying to accomplish as a customer?”
Because that question of intentional design rests on your shoulders as well.
Every poor design decision propagates into other poor design decisions that get dragged along with it.
As a customer, for example, imagine that you’re making an assumption that you need the entirety of an ERP [enterprise resource planning] solution in order to accomplish the goal of taking inventory.
Well, that might be part of your problem, if you’re dragging in the entirety of an ERP just to solve what could be a tightly scoped problem for you, such as inventory.
And maybe all you actually needed was an Excel spreadsheet.
In reality, a lot of mature software comes with a lot of functionality, because functionality is what makes it marketable.
SAP, for example, drags a lot of functionality into the market so that everybody can buy it and run their businesses on it.
It doesn’t mean that it was a secure, thoughtful way to apply software features to your business, to make your business run better.
So, go for thoughtful design throughout the entire chain.
DUCK. So, what can people do about that ‘more tools, more tools’ problem, particularly in cybersecurity, where there’s always something new?
I mean, I know that’s loosely true of many areas in software engineering, but it seems particularly problematic in cybersecurity.
It’s nice to sound as though you’re buying something that you absolutely need because there’s this great new threat.
In truth, a lot of the problems we’re facing are really problems that we knew about two, five, 10, maybe even 20 years ago, like passwords, and patching, and phishing, and all the other P’s that go along with cybersecurity.
How do you make sure, as an organization that doesn’t have a lot of cybersecurity expertise, that you’re willing to get the basics right first, instead of just being distracted by all the shiny new stuff?
DAVID. I think, first and foremost, have a framework that you’re following for your business.
That could be an industry benchmark; it could be a NIST framework if you have no better idea.
But have a framework that defines what it is you’re trying to secure, and how you’re trying to secure it.
Refer to that framework regularly as you’re planning the infrastructure necessary to effect all of the controls that you’ve decided you’re in scope for.
This is really important, because without that intent, you can easily creep, mentally, architecturally…
There are just so many ways in which you can end up procuring something because it sounded sexy at the conference.
My baseline advice is to have some kind of a guiding principle, whether that be a framework, or a set of controls, or even just business requirements.
And then from there, it branches a bit.
I actually often like the constraints of finance.
I feel as though the constraints of the finance organization, as much as they are frustrating…
…the constraints of a well-organized, well-run finance organization can oftentimes inform architecture in a very, very healthy way.
They can become a positive limit on your interest in deploying and procuring technologies that simply don’t align with the business vision.
And so you want a decent finance organization, and by decent, I mean one run well enough to know when it needs to loosen the purse strings and actually spend some money, but not one run with such lubricity that any request showing up on its desk just gets, “Oh, OK, we’ll give you the money for that tool.”
It’s necessary to have controls like that.
You have to have an observed decision each time you purchase something.
“Why are we buying this?”
“What purposes is it going to serve?”
“How is it furthering our mission?”
I think that the organizations that don’t have that are the ones that end up with just piles of chaff and tools for no particular reason; where the money they keep spending just snowballs.
DUCK. That’s a really good way of putting it, David.
So I’d like to finish up by making a suggestion that I hope you don’t think is too frivolous: if you don’t really need the heated seats, *then you don’t have to check that box and buy them*!
Buy what you need.
Then you have a much better chance of actually deploying and using it correctly, particularly when it comes to protecting your computer environment.
Do you agree with that?
DAVID. And know what’s important to you!
You know, the heated seats?
They may not be your thing.
They’re not my thing, but you know what I love?
It’s soft-close doors!
So I am going to tick the box for soft-close doors.
[LAUGHTER]
It’s complexity, right?
But it’s OK if it’s something that is intentional; if it’s something that you wanted; if it’s something that your business needs.
DUCK. So, cybersecurity is a value to your business.
And you should treat it as a value to your business rather than a cost.
But that doesn’t mean that you should just throw limitless money at it because some sales person is telling you to do so.
DAVID. Exactly!
Heated seats: No.
Soft-close doors: Yes.
DUCK. [LAUGHS] David, I think that’s an amusing but easy-to-remember point on which to finish.
Let me, as usual, say thank you so much for your time.
And thank you to everybody for listening.
Don’t forget, if you would like to get in touch with SolCyber, you can simply send an email to amos@solcyber.com.
That’s Amos the Armadillo, the security metaphor mascot of the company.
And if you want to read more content that will help you learn about security in a community-oriented way, without the hard sell, then please head to solcyber.com/blog.
Thanks for listening, and until next time, stay secure.
Catch up now, or subscribe to find out about new episodes as soon as they come out. Find us on Apple Podcasts, Audible, Spotify, Podbean, or via our RSS feed if you use your own audio app.