Supply chain attacks can be extremely hard to detect, let alone to prevent.
All the cybersecurity experts and all the cybersecurity vendors in the world were unable to stop the infamous SolarWinds attack, for example, so what really mattered was how quickly and completely you responded to the news.
We look at just how devious this sort of cybercriminality can be, and what you can do about it.
In the early days of computer viruses and other malicious software, or malware as we now call it, received wisdom stated that the best way to avoid infection was to steer clear of pirated software.
The problem with this advice, if we ignore the fact that it conveniently allowed people to tell themselves that illegally copying software would be OK if only they could find a way to do it safely, is that it wasn’t really true.
In those days, any copying could carry a virus infection along with it, whether you were copying programs or data, legally or illegally.
For example, if you legally backed up a program from its official distribution medium that you knew was uninfected because it had come directly from a trusted software vendor on a write-protected diskette, you might reasonably assume that your copy would be uninfected too.
But if the computer you used was itself already infected, the copy would often end up infected too, modified by the malware in the very act of copying it from a write-protected medium to a write-enabled one.
In fact, the primary vector for the spread of malware in the days before widespread internet access, when users shared data via the aptly-named sneakernet system of carrying diskettes from computer to computer, wasn’t just illegal copying.
The problem was any sort of uncontrolled, unverified copying at all: in jargon terms, copying that happened outside any officially vetted and trusted supply chain.
In theory, buying software from the supply chain was explicitly safe, while acquiring almost anything from outside the supply chain was implicitly risky.
However, to re-use a witticism we have quoted before, “In theory, theory and practice are the same, but in practice they are not.”
Despite the care you might expect to go into every stage of any official software supply chain, cybersecurity lapses have been annoyingly common over the decades.
Official floppy diskettes, for example, sometimes arrived pre-infected, often (and ironically) because a disk duplication system had faithfully and precisely replicated official master diskettes that had picked up an infection somewhere between the developers’ computers and the duplication machines.
Other notable supply chain malware blunders in history have included: pre-infected CDs that contained eternally write-protected malware; pre-infected USB devices given away by cybersecurity vendors at cybersecurity shows; pre-infected computers shipped directly from the vendor’s own factory; pre-infected digital picture frames, hard disks and other add-on devices; and pre-infected Android phones loaded up by unscrupulous distributors with adware, spyware, bloatware, and worse.
These types of ‘attack’ typically involved little or no subterfuge, but were caused by carelessness, incompetence, see-if-I-care greed, or some combination of those factors.
But in today’s internet-centric world, supply chain problems have evolved into devious, deliberate and treacherous attacks, often delivering brand new malware using a sequence of tricks that were plotted very carefully in advance, sometimes over breathtakingly long periods.
We’ll now take a look at some fascinating examples from the past decade or so, by way of reminding ourselves just how hard it can be to stop cybersecurity attacks that effectively ‘come from inside the house’, making them much harder to predict and to detect.
First, however, we need to go back to 1983, when computer science luminaries Ken Thompson and Dennis Ritchie jointly received the Association for Computing Machinery’s Turing Award for their foundational work on the Unix operating system.
Thompson’s acceptance speech, published in 1984 as the seminal three-page paper Reflections on Trusting Trust, described a particularly troublesome type of attack.
The example, in Thompson’s talk, was a method for creating a booby-trapped C compiler that would inject malware not directly onto the computer where you ran the compiler (which would potentially infect developers but leave everyone else alone), but into each and every program that the compiler compiled from source code (which could potentially infect everyone in the world to whom these freshly generated programs were distributed).
More worryingly, Thompson described a way to build this Trojan Horse compiler so that the booby-trapped version would continue to produce malware-infected programs even if the compiler itself were recompiled from its original, non-booby-trapped source code.
The research example that Thompson created was a malicious compiler that would:

- Booby-trap compiled versions of the login program so that they secretly included a hard-coded backdoor password, such that neither the password nor the backdoor was visible in the login source code.
- Booby-trap recompiled versions of the compiler itself so that they would carry on injecting the login password, again such that neither the password, nor the backdoor, nor the compiler booby-trap itself, was visible in the compiler source code.

In other words, once compromised, the compiler would silently re-compromise its own clean source code while building it, in order to regenerate a compiler that would continue compromising other programs created with it.
This tarnished the supply chain in a subtle way that could not be discovered by code review alone:
“The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.”
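Thompson’s two-stage booby trap can be caricatured in a few lines. The sketch below is hypothetical Python, not anything from the paper: “compiling” simply returns the source text, and the names (evil_compile, BACKDOOR) are invented purely for illustration.

```python
# Toy model of Thompson's trusting-trust attack. Entirely hypothetical:
# "compiling" here just returns the source text, plus two booby traps.

BACKDOOR = '# hidden backdoor: accept the secret password "letmein"'

def evil_compile(source: str, program_name: str) -> str:
    compiled = source  # stand-in for genuine code generation
    if program_name == "login":
        # Trap 1: every compiled login program quietly gains the backdoor.
        compiled += "\n" + BACKDOOR
    elif program_name == "compiler":
        # Trap 2: recompiling the compiler from clean source re-inserts
        # both traps, so the infection survives source-code inspection.
        compiled += "\n# re-insert trap 1 and trap 2"
    return compiled

clean_source = "def login(password): ..."
infected = evil_compile(clean_source, "login")
print(BACKDOOR in infected)      # -> True: the compiled output is booby-trapped
print(BACKDOOR in clean_source)  # -> False: the source code shows nothing
```

The point of the sketch is that no amount of reading clean_source will ever reveal the backdoor, because the treachery lives only in the compiler.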
Thompson’s poisoned compiler theory was used in a very widespread real-world malware sample known as Win32/Induc, which first showed up in 2009.
Induc booby-trapped the Delphi programming language, which was very widely used at the time, notably by corporate software teams responsible for in-house Windows apps (and inevitably, given Delphi’s power and comparative simplicity, by malware authors).
Ironically, the Induc virus doesn’t do anything deliberately harmful on any computers used by non-developers.
But when an infected file lands on a computer that has Delphi installed, the malware does this:

- It adds its own code to the Delphi library source file SysConst.pas, which usually contains data such as text strings and numerical constants relevant to the underlying system, but no actual Delphi program code. (Delphi source files have the extension .pas because Delphi is a descendant of Borland’s trailblazing Turbo Pascal compiler from the 1980s.)
- It recompiles the booby-trapped source into SysConst.dcu, short for Delphi compiled unit, the executable form of Delphi library modules that are automatically added into programs that need them.

Sadly, Induc lingered on for ages, even after it was well-known and reliably detected by most endpoint security products on the market, because the IT industry proved remarkably unwilling to learn the lesson from Thompson’s paper, which was then already 25 years old.
Many software development teams blindly refused to accept that their own in-house apps could possibly be infected, even after their own or their customers’ anti-virus scanners condemned them.
Instead of reflecting on trusting trust, many organisations simply accused their security vendors of what’s known in the jargon as a false positive or a false alarm, decreeing unilaterally that the files must be clean, and thereafter simply ignoring or suppressing any reports of Induc on their network.
The supply chain remained choked with Induc infections long after the virus should have been eradicated altogether.
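The library-patching trick at the heart of Induc can be sketched roughly as follows. This is illustrative Python, not Induc’s actual code: the payload is a harmless placeholder string, and only the target filename, SysConst.pas, comes from the story above.

```python
# Sketch of an Induc-style library infection (illustrative only; not real
# Induc code). The trick: don't attack the compiler binary at all - quietly
# append code to a trusted library source file that the compiler will then
# build into every program that uses it.
import pathlib

PAYLOAD = "{ viral initialisation code would go here }"

def infect_library(delphi_lib_dir: str) -> bool:
    lib = pathlib.Path(delphi_lib_dir) / "SysConst.pas"
    if not lib.exists():
        return False          # no Delphi library here, nothing to do
    source = lib.read_text()
    if PAYLOAD in source:
        return False          # already infected, don't re-infect
    # Append executable code to a file that should hold constants only;
    # a real infector would then recompile it into SysConst.dcu.
    lib.write_text(source + "\n" + PAYLOAD)
    return True
```

Note the design choice the sketch captures: because the patched file is rebuilt by the legitimate, trusted compiler, the resulting binaries look like perfectly ordinary products of the official toolchain.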
More recently, in late 2020, the IT world was rocked by a double-barrelled malware attack against SolarWinds, a popular US vendor of network management software.
The two distinct malware samples in this latter-day supply chain attack were dubbed SUNSPOT and SUNBURST, and they worked as a pair with devastating effect.
Sunburst (I’ll avoid all-caps from now on so I don’t seem to be SHOUTING all the time) was the malware that SolarWinds unwittingly introduced into its own supply chain and sent out automatically to many of its customers as part of a routine update.
The malicious code in Sunburst was both dangerous and treacherous:

- It lay low for 12 to 14 days when it first started up, so that no suspicious behaviour would obviously be connected with the recently installed update.
- It called home to servers with innocent-sounding names such as virtualdataserver.com, thedoccloud.com and avsvmcloud.com. (AVSVM sounds as though it might stand for anti-virus support vector machine, a perfectly plausible name for an AI-based malware protection system, given that support vector machines are a well-known sort of machine learning algorithm.)

So far, so bad.
But the truly devious part of the attack was the special-purpose companion malware known as Sunspot, which the criminals deliberately and carefully implanted on SolarWinds’ own software build servers.
Sunspot followed the same sort of now-you-see-my-source-code-now-you-don’t approach as Induc, but with even more subterfuge:

- It waited for the build program MsBuild.exe to start. Short for Microsoft Build Engine, MsBuild is a widely-used tool for controlling the compilation and construction of C# software projects, rather like using make on Unix systems.
- When the build process reached the source file InventoryManager.cs, the malware sprang into action, briefly substituting a booby-trapped copy of the file for the genuine one while it was compiled. (The .CS extension is short for C Sharp, given that the # character is inconvenient to have in a filename.)

Reflections on trusting trust!
Unless and until you noticed the rogue Sunspot ‘build interceptor’ program, which was hidden in plain sight in an innocent-sounding file called taskhostsvc.exe, the system would appear unaffected. And unless and until you spotted the brief appearance and disappearance of temporary files called InventoryManager.bk and InventoryManager.tmp (the names of the temporary backup file and the booby-trapped source code file respectively), the build process would seem to proceed normally.
Typical software builds involve the creation and deletion of hundreds, thousands, or sometimes even tens of thousands of short-lived intermediate files with a dizzying array of one-off names, possibly with many MsBuild processes running in parallel to speed things up. Spotting this sort of transient anomaly is therefore not an easy task.
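One rough way to hunt for such transients, assuming you can snapshot the build tree at all, is to diff before/during/after file listings. The helper names below are invented for illustration, and polling like this will miss anything shorter-lived than one scan, which is why real build monitoring relies on OS-level hooks such as auditd on Linux or Sysmon/ETW on Windows.

```python
# Minimal sketch: flag short-lived files in a build tree by comparing three
# snapshots (before, during and after a build). Illustrative only - a file
# that appears and vanishes between scans will slip through unnoticed.
import pathlib

def snapshot(build_dir: str) -> set[str]:
    # Record every regular file currently present under the build tree.
    return {str(p) for p in pathlib.Path(build_dir).rglob("*") if p.is_file()}

def short_lived(before: set[str], during: set[str], after: set[str]) -> set[str]:
    # Files seen mid-build that were in neither the opening nor closing snapshot.
    return (during - before) - after

# A transient such as InventoryManager.bk would surface as an anomaly:
print(short_lived({"a.cs"}, {"a.cs", "InventoryManager.bk"}, {"a.cs", "a.exe"}))
# -> {'InventoryManager.bk'}
```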
Once the Sunburst-infected executable file was built, apparently from unsullied source code files, the booby-trapped program would be sent, along with the rest of the output of the build system, into the remaining stages of the supply chain.
This involved adding an official digital signature to all files that needed signing, including the booby-trapped program; approving the new version for release; and then packaging it up and publishing it for delivery to customers.
(Remember that Sunburst lies low for 12 to 14 days when it first starts up, thus giving the booby-trapped package plenty of time to pass pre-delivery acceptance tests without displaying any obvious unexpected symptoms.)
A wide range of high-profile organisations, public and private, ended up infiltrated by Sunburst, from Microsoft to the US Department of Homeland Security.
Our last example of a supply chain attack was uncovered in early 2024 but actually unfolded slowly and deviously over a matter of years.
The XZ Utils compromise shows just how convoluted software supply chains can be, and therefore how hard they can be to protect.
The full story is reasonably complex (you can read detailed coverage here), but its devious, layers-within-layers nature can be told fairly briefly if we start at the end and work backwards.
The attacker, known only by the pseudonym Jia Tan, apparently wanted to insert a login backdoor in OpenSSH, probably the most widely-used remote access tool in the world – it’s a vital part of almost every Linux distro, of all BSD distros, of macOS, and of Windows.
OpenSSH is curated by a small and dedicated team from the OpenBSD project, who are well-known for putting security ahead of performance, and for vetting all changes to their primary codebase carefully.
It is highly unlikely that an outsider could trick this team into accepting new code that contained a password-bypass backdoor or a ‘zombie malware’ implant.
Jia Tan therefore targeted Linux systems based on the Debian distro, because Debian’s version of OpenSSH includes some non-standard modifications.
Debian uses a low-level operating system management toolkit called systemd, which has its own customised logfile format that stores its data in compressed form, using the LZMA compression algorithm as implemented in a programming library called liblzma.
(Other Unix system loggers use an uncomplicated text format, letting users choose to implement offline compression if they wish to save disk space.)
As a result, Debian’s version of the OpenSSH server software, known as sshd, includes a supply chain dependency like this:
sshd <-- libsystemd <-- liblzma
In other words, Debian’s OpenSSH server could be attacked indirectly, as could the modified build of OpenSSH on Debian-derived distros and others with the same non-standard chain of dependencies.
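That transitive reach is easy to demonstrate with a toy graph walk. The sketch below invents its own tiny dependency graph rather than querying a real system; on an actual Linux box you might populate such a graph from the output of a tool like ldd.

```python
# Sketch: breadth-first walk of a dependency graph to list everything a
# program pulls in transitively. The toy graph mirrors the Debian sshd
# chain described in the article.
from collections import deque

def transitive_deps(graph: dict[str, list[str]], root: str) -> set[str]:
    seen: set[str] = set()
    queue = deque(graph.get(root, []))
    while queue:
        dep = queue.popleft()
        if dep not in seen:
            seen.add(dep)
            queue.extend(graph.get(dep, []))  # follow dependencies-of-dependencies
    return seen

graph = {"sshd": ["libsystemd"], "libsystemd": ["liblzma"]}
# liblzma is reachable from sshd even though sshd never asks for it by name:
print(transitive_deps(graph, "sshd"))
```

This is exactly why Jia Tan could attack sshd without ever touching its code: poisoning any node in the transitive set poisons everything that depends on it.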
Jia Tan therefore didn’t need to target the official sshd codebase, curated conservatively and carefully by the security-centric OpenBSD team, or to target the widely-used systemd project, overseen by an employee bankrolled first by Red Hat and now by Microsoft.
Instead, Jia Tan targeted the abovementioned liblzma library, part of the tiny XZ Utils project, an open-source toolkit that ran largely on the time and goodwill of one man, who nevertheless regularly faced flak from the community for not responding quickly enough to the needs of his non-paying customers.
Into the project leapt the pseudonymous Jia Tan, who apparently first showed up as a willing contributor in the open-source community in late 2021, before joining the XZ Utils project about a year later.
Jia Tan put in sufficient effort to win the trust of the project’s creator and thereby to be approved as a committer: someone who is allowed to make changes to the XZ Utils codebase.
In February 2024, the Jia Tan persona pounced.
Andres Freund, the Microsoft staffer who first spotted Jia Tan’s treachery, noted:
“Given the activity over several weeks, the committer is either directly involved or there was some quite severe compromise of their system. Unfortunately the latter looks like the less likely explanation, given they communicated on various lists about the ‘fixes’ [that introduced the backdoor].”
Via a profoundly devious supply chain pathway, Jia Tan’s malware was delivered so that:

- The backdoor code was smuggled, disguised as ‘test files’, into the liblzma library on which Debian’s modified OpenSSH server depended.
- The backdoor would be activated only when the library was loaded into the sshd program when running as a system daemon.

This made life tricky for malware researchers.
If you compiled the booby-trapped OpenSSH source code package in your own way; if you carefully extracted just the source code and not the test files; if you ran it other than as a regular system daemon; or if you tried to run it under a debugger or other software spelunking tool…
…the malware would either be omitted from the build altogether, or left dormant when the sshd program ran.
Once activated, however, the XZ Utils backdoor was devious indeed.
Instead of embedding a hard-coded password that would let attackers pass illegally through the regular login process, the malware relied on intercepting the cryptographic ‘handshake’ carried out between client and server as part of getting ready to log in.
If the digital certificate submitted by the remote system matched a special but invalid format known to the malware, Jia Tan’s ‘zombie code’ would be triggered.
The remote attacker could then instruct the compromised OpenSSH server to run any program they liked, which would probably execute with the root privileges that sshd commonly uses when setting up a new connection.
The underlying server, of course, would just see an invalid digital certificate, reject it, and write a harmless-looking log entry to report a failed login.
There would be no tell-tale server crash, no memory mismanagement, no buffer overflow or other signs of an exploit, and no record of a rogue login left behind in the system logs.
Ironically, and almost amusingly, Jia Tan’s fake digital certificates were internally digitally signed with a private cryptographic key and ‘validated’ by the malware itself.
This not only made tracking and logging the behaviour of the malware on a test system harder for researchers, but also stopped other cybercriminal gangs from muscling in on the action by finding infected computers and using Jia Tan’s backdoor for themselves.
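The gatekeeping logic can be caricatured as follows. The real backdoor checked an Ed448 signature buried in a certificate field; this hypothetical sketch substitutes a stdlib HMAC, with invented names throughout, purely to show the shape of the check.

```python
# Caricature of the backdoor's gatekeeping (the real XZ code verified an
# Ed448 signature hidden in an SSH certificate; an HMAC stands in here,
# and every name below is invented for illustration).
import hmac, hashlib

ATTACKER_KEY = b"known-only-to-the-attacker"

def backdoor_trigger(cert_blob: bytes):
    """Return the attacker's command if the 'certificate' authenticates, else None."""
    if len(cert_blob) <= 32:
        return None
    command, tag = cert_blob[:-32], cert_blob[-32:]
    expected = hmac.new(ATTACKER_KEY, command, hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected):
        return command   # authenticated: would run with sshd's privileges
    return None          # everyone else just sees an invalid certificate
```

Note how the check also locks out rival criminals: without the attacker’s private key, nobody else can construct a ‘certificate’ that opens the backdoor, which mirrors the self-validating certificates described above.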
As you have probably figured out already, supply chain attacks can be extremely hard to detect, let alone to prevent.
All the cybersecurity experts and all the cybersecurity vendors in the world were unable to stop the SolarWinds attack, though in that case, the cybercriminals kept their heads down and unleashed their tag-team supply chain malware as secretly and as surreptitiously as they could.
And Jia Tan’s attack went unnoticed until after it had made it into the wild, even though Jia Tan operated very publicly, going out of their way to make themselves known, and thereby to build up trust and authority.
Indeed, Jia Tan was publicly persuasive enough to win around (or at least to persuade the community to ignore the objections of) open source experts who argued that their rogue ‘test files’ were unsuitable for inclusion in the project and should be thrown out.
Nevertheless, with a positive cybersecurity culture in your team and your business, you will give yourselves a better chance of facing down security trouble of any sort, including even the most treacherous supply chain attacks:

- For example, you’d be better placed to figure out how rogue liblzma code might have ended up in your network so you could find it and remove it. Automated tools can help with this sort of discovery, but human-focused assistance is vital to make sure you know where to point those automated tools in the first place.

Why not ask how SolCyber can help you do cybersecurity in the most human-friendly way? Don’t get stuck behind an ever-expanding convoy of security tools that leave you at the whim of policies and procedures that are dictated by the tools, even though they don’t suit your IT team, your colleagues, or your customers!
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Featured image by Pete Willis via Unsplash.