It’s 10 years since the legendary Heartbleed bug, yet security vulnerabilities are in the news as much as ever.
From information disclosure and DoS to RCE and EoP, we look at the sort of bugs that lead to cybersecurity trouble, the jargon you need to understand them, and why it can be a life’s work to keep on top of them.
If you’ve been involved in IT or cybersecurity for any length of time, or if you’re familiar with the history of fast-spreading computer worms, you’ll probably be familiar with the term buffer overflow.
That’s where computer software accepts input, usually from someone or somewhere it doesn’t trust, and before it gets to the point where it can validate that input to see if it’s safe to use…
…it runs out of memory to store the data it’s receiving, but carries on accepting it anyway.
The surplus data overflows its officially-allocated storage space, known colloquially as a buffer, into neighbouring chunks of memory that the software is supposed to stay away from, trampling on whatever was stored there already.
(The jargon word buffer, in case you’re wondering, seems to have been adopted into computer science in the early 1950s as a metaphorical borrowing from the rail transport industry, where buffers are safety devices designed to absorb the jostling impacts between the carriages of a train as it moves, or to bring a runaway train to an emergency stop before the track runs out.)
If you’re lucky, a buffer overflow could end up with the surplus data dumped into an area of memory that isn’t being used for anything else, and the error might not cause any problems.
But if you’re not so lucky, the memory that’s corrupted by the overflow could change the behaviour of some other part of the program.
The surplus data could inject rogue executable code where it’s not supposed to go; it could overwrite verified authentication information with untrusted data to let a rogue user sneak in; or it could trick the current program into going down the wrong code path, such as branching to the menu that gives access to logged-in users instead of branching to the part that kicks unauthorised users out.
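That last scenario, where an overflow tramples an adjacent flag or decision variable, can be sketched in a few lines of Python. This is a simulation of my own (the `is_admin` flag and the function name are illustrative, not from any real product): memory is modelled as a byte array, with an 8-byte input buffer sitting right next to a one-byte authentication flag.

```python
# Illustrative simulation (names such as is_admin are my own invention):
# model a slab of memory as a bytearray, with an 8-byte input buffer
# followed immediately by a one-byte "is_admin" flag in the next slot.
def buggy_receive(memory: bytearray, data: bytes) -> None:
    """Copy incoming data into the buffer at offset 0 WITHOUT checking
    that it fits in the 8 bytes officially allocated to the buffer."""
    memory[0:len(data)] = data   # trusts the sender's length, not ours

memory = bytearray(9)
memory[8] = 0                    # byte 8: the is_admin flag, switched off

# Untrusted input: 9 bytes, one more than the buffer officially holds.
buggy_receive(memory, b"AAAAAAAA\x01")

print(memory[8])                 # prints 1: the flag next door was trampled
```

A real overflow in C or C++ corrupts genuinely unrelated memory rather than a neighbouring array slot, but the principle is the same: the copy trusts the attacker's length instead of the buffer's size.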
As we mentioned at the start, some of the most dramatic malware outbreaks in history were triggered by sneakily-crafted buffer overflows that remotely grabbed control of the computers they were aimed at.
The infamous Internet Worm of 1988 made use of a buffer overflow in a system service known as fingerd, and the super-fast-spreading, fileless worms Code Red in 2001 and SQL Slammer in 2003 targeted buffer overflows in Microsoft’s widely-used IIS web server and SQL Server respectively.
More recently, buffer overflow bugs in two coding libraries for processing image and video files caused widespread concern because many popular software packages, including browsers, inherited these bugs and therefore needed patching in a hurry.
Those bugs were: CVE-2023-5217 in Google’s VP8 video processing library, very widely used for web-based video chat, and CVE-2023-4863 in an image library called libwebp, used by dozens of software packages, including browsers and image editors, to handle the ubiquitous WEBP image format.
At the end of 2023, we even had buffer overflows in the image-parsing code built into many vendors’ firmware that were given the media-savvy name LogoFAIL, after researchers discovered that a booby-trapped bootup image, apparently harmlessly added to a laptop to replace the vendor’s logo on the startup screen, could be used to subvert the much-vaunted Secure Boot process, in which only software digitally approved by the vendor is supposed to run.
Bugs of this sort often give cybercriminals outside your network a way to trigger the execution of untrusted and unauthorised program code on your computer, typically without any popups or warnings to give the game away.
This sort of attack is known by the self-descriptive name of RCE, short for remote code execution.
RCE bugs are keenly sought after by cybercriminals, and some of them may be worth hundreds of thousands or even millions of dollars if offered for sale.
Sometimes, wealthy vendors such as Microsoft, Apple and Google buy up bugs of this type and use the knowledge to patch their products before disclosing the details of the bug to everyone else.
Unfortunately, unscrupulous bug-hunters may choose to sell their RCE bugs into the cyber underground instead, not caring if the buyers are spyware vendors, industrial spies, or other types of cybercrook.
That’s the trouble with bugs: they’re hard to avoid, and potentially catastrophic if discovered.
Even buffer overflows – a well-known class of security flaw that has been putting us in harm’s way for decades – are still a clear and present cybersecurity risk.
As you can imagine, this makes the task of staying on top of the problem as big as or bigger than ever.
Actually, if you’re a pessimist, things are worse than that, because buffer overflows are really just the tip of the iceberg.
And RCE flaws, often regarded as the ‘worst amongst equals’ in the world of software vulnerabilities, are just one of many different categories of security hole that cybercriminals can abuse to attack your network, your data, your staff and your customers.
Indeed, staying on top of the jargon terms used in bug reports and security advisories is a challenge all of its own, and that’s before you get to grips with the bugs themselves and the cybersecurity vulnerabilities they introduce.
To be clear, not all software bugs are considered serious enough to be vulnerabilities, a word that is used in cybersecurity in a way that matches its dictionary definition.
My trusty Oxford Dictionary of English says the following:
vul’nerable (adj.): exposed to the possibility of being attacked or harmed.
And any activity that’s intended to cause the very attack or harm to which a vulnerability exposes you is dubbed an exploit.
Even if the exploit doesn’t always work; even if it sometimes draws attention to itself instead of achieving its goal; even if you’re immune because you’ve already patched; even if you aren’t (to the best of your knowledge, at least) running the vulnerable code yourself…
…it’s still an exploit, and it’s still important for anyone involved in threat hunting to be aware of its existence, to be aware of how it works, and to keep on top of how it can be detected and prevented.
There are five main vulnerability types you will come across, with names similar to the ones listed below.
The exact terminology may vary depending on who wrote the report, such as saying escalation of privilege instead of elevation, or leak instead of disclosure, but the meanings will probably match up fairly obviously.
If a bug makes it possible for an attacker to crash or disrupt a server or a service deliberately, or to get in the way of any software on your computer that you need to do your job, you’re looking at a DoS vulnerability.
DoS bugs are understandably often dismissed, or at least ‘derisked’, as a comparatively unimportant sort of vulnerability, especially if simply rebooting your computer or restarting the crashed service seems to solve the problem.
But you should never underestimate the potential danger of DoS exploits.
Firstly, by repeating the attack at a carefully-chosen frequency that leaves just long enough, say, for your crashed login portal to restart, only to crash again almost immediately, a criminal can keep you offline as good as permanently without needing to generate a huge amount of attack traffic on the network.
Secondly, this sort of disruptive attack makes an ideal smokescreen for cybercriminals to hide behind, distracting your IT team with an attention-seeking problem that diverts attention away from more dangerous and suspicious behaviour elsewhere in your network.
When a bug can be exploited to trick your computer, or one of your company’s servers, into giving away something it shouldn’t, you have a disclosure vulnerability.
The infamous Heartbleed bug, which was discovered 10 years ago this month, is a classic example of an information disclosure bug that turned out to be much more harmful than it first seemed, as we’ll see shortly.
Sometimes, cybercriminals don’t need an RCE bug as a way to run rogue code without logging in first, because they may be able to sidestep security checks through some other vulnerability.
For example, a server that has a ‘backdoor’ password coded into it for the vendor’s technical support staff can be attacked by any criminals who know or discover that password.
Notably, security bypass bugs in your browser may be a pathway to numerous other sorts of vulnerability, for example by allowing JavaScript code that’s supposed to be limited to your browser to escape from its ‘sandbox’ and read and write files on your hard disk instead.
Elevation of privilege means exactly what it says: that a user with restricted access to the system can somehow promote themselves into a more privileged class of user, often the SYSTEM or root user, without needing to know any passwords.
EoP bugs often require an attacker to be logged in first, so they are sometimes dismissed as much less significant than RCE bugs, and not patched as promptly as other vulnerabilities.
But EoP exploits pose two considerable dangers.
Firstly, they increase the risk from insider threats, because untrustworthy staff members who already have some level of official access may be able to turn themselves into unregulated, unauthorised sysadmins.
Secondly, they are often combined with RCE exploits so that a code execution hole that gives minimal system access can be converted into a take-over-the-whole-network attack simply by chaining the two exploits together.
As we’ve already explained, RCE exploits are sought after by cybercriminals because they typically provide an initial foothold somewhere inside your network.
The RCE gets them in to start with, after which an EoP gets them up and about, often letting them wander at will through your network, something that’s referred to rather clumsily in the jargon as lateral movement.
We have an understandable tendency to assume that RCE bugs always need patching first and that other, ‘lesser’ types of vulnerability can wait.
We also reasonably assume that buffer overflow holes involving overwriting data, not merely overreading it, are the truly dangerous ones, because they give attackers a chance to inject rogue code and data of their own choice, and therefore to make changes, not merely to look around.
But the infamous Heartbleed bug that we mentioned at the outset, which was discovered a decade ago in April 2014, was neither an RCE vulnerability nor a memory overwrite caused by a buffer overflow, but it turned into a cybersecurity emergency nevertheless.
Heartbleed exploited an incorrectly-coded OpenSSL ‘feature’ called HEARTBEAT that was supposed to boost performance and reliability by allowing one end of an encrypted connection to send a test message to the other end from time to time to keep the connection alive.
These messages consisted of a two-byte number to denote the length of the message, followed by exactly that many bytes, such as 9 bytes: HEARTBEAT, or 5 bytes: HELLO.

Unfortunately, OpenSSL’s Heartbeat reply code didn’t check that the length referred only to data inside the HEARTBEAT network packet that had just been received.

An obviously bogus request such as 65,535 bytes: HELLO would be accepted, and the sender would get back HELLO followed by the next 65,530 bytes that just happened to be alongside it in memory at that moment.
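The essence of the bug, and of the fix that followed, can be sketched in a few lines of Python. This is a simulation of my own, not OpenSSL’s actual code: the function and variable names are illustrative, and memory adjacent to the request payload is modelled by simply concatenating some ‘secret’ bytes after it.

```python
# Simplified simulation of the Heartbleed over-read (my own sketch,
# not OpenSSL's real code; all names here are illustrative).
def buggy_heartbeat_reply(memory: bytes, claimed_len: int) -> bytes:
    # BUG: echoes back claimed_len bytes without checking that the
    # request actually contained that many bytes of payload.
    return memory[:claimed_len]

def fixed_heartbeat_reply(memory: bytes, claimed_len: int,
                          actual_len: int) -> bytes:
    # The fix: silently discard requests whose claimed length
    # exceeds the payload that actually arrived.
    if claimed_len > actual_len:
        return b""
    return memory[:claimed_len]

payload = b"HELLO"                       # what the sender really sent
secrets = b"...private key material..."  # unrelated data next door
memory = payload + secrets               # adjacent contents in memory

print(buggy_heartbeat_reply(memory, 65535))              # leaks the secrets
print(fixed_heartbeat_reply(memory, 65535, len(payload)))  # leaks nothing
```

The buggy version happily returns everything that happens to sit beyond the five genuine payload bytes, which is exactly the over-read behaviour that made Heartbleed so dangerous.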
Early hopes were that only inconsequential or low risk data would be disclosed due to rogue HEARTBEAT requests, such as fragments of encrypted data that could only be decrypted by the user who had sent or received them in the first place.
Researchers soon showed, however, that leaked data could include unencrypted snippets of other people’s data, and even private encryption keys that had been loaded by a web server itself when it started up.
Worse still, vulnerable web servers could be ‘bled’ of data over and over again, turning that 65,535-bytes-per-HEARTBEAT leakage into a giant data exfiltration risk that could allow individual attackers each to collect gigabytes of sensitive data in the course of a day.
The problem, of course, was not merely patching vulnerable software that you knew about, but figuring out in the first place which software on your network made use of a buggy version of OpenSSL.
The answer, very often, was, “An awful lot more than we thought.”
Being aware of any vulnerabilities and exploits that exist in the software you use is important, but you also need to be able to:
Given that mastering all the complexity discussed here will help you with just one aspect of cybersecurity, why not find a managed security service provider who can help?
Don’t carry on buying more and more tools and services from vendors who are badgering you to upgrade to close the protection gap opened up by the last product they sold you.
Find out how SolCyber is built different, and choose a security service where you can always talk directly to a human, and get help with anything.
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Featured image of bugs from the Museum of Natural History in Oxford, England, by James Wainscoat via Unsplash.
Image of train buffers from the National Railway Museum in York, England, by Tim Johnson via Unsplash.