Malware programmers often go out of their way to make their malevolent code look as innocent as they can in order to evade detection.
But what if they could write malware that didn’t just look good, but actually was good?
That would put the cat amongst the pigeons, because security software would find it hard to discourage even well-informed users from running it if it had some useful or legitimate purpose.
Way back in 1984, an early malware researcher named Fred Cohen was probably the first person to try to come up with a ‘good’ computer virus.
In a paper entitled Computer Viruses – Theory and Experiments, he presented a thought experiment describing what he considered to be a clear example of malware that “need not be used for evil purposes.”
His ‘good’ virus would spread from program to program, compressing its host file during infection and thus saving on disk space. (Remember that a typical hard disk in those days could store just 10MB of programs and data.)
Being a computer virus, it would find its way through your files automatically, saving more and more disk space as it went, presumably spreading ‘helpfully’ to other users with whom you shared programs, and saving them disk space as well.
But Cohen’s concepts didn’t pan out in the real world, and not only (as one punning observer has put it) because “no one has the right to wander into my house uninvited, even if their intention is to clean the windows.”
Cohen’s proposal was technically dangerous, too, because many programs can’t safely be compressed in this way.
Even back in the 1980s, plenty of software checked its own integrity before running in order to prevent tampering, so that Cohen’s ‘good’ virus would break that software and prevent it from running.
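To make that objection concrete, here is a minimal sketch in Python (with made-up stand-in data, not a real program file) of why compressing a self-checking program breaks it: software that verifies its own bytes against a hash baked in at build time will fail that check as soon as anything rewrites it on disk.

```python
# A sketch of why Cohen's compressing virus breaks self-checking software.
# The 'program' bytes here are a made-up stand-in, not a real executable.

import hashlib
import zlib

original = b"\x7fELF" + b"program code " * 100      # stand-in for a program file
known_good = hashlib.sha256(original).hexdigest()   # hash baked in at build time

infected = zlib.compress(original)                  # the 'helpful' compression

assert len(infected) < len(original)                # disk space is indeed saved...
assert hashlib.sha256(infected).hexdigest() != known_good   # ...but the check fails
```

The space saving is real, but so is the breakage: the integrity check has no way to distinguish a 'helpful' rewrite from a malicious one, which is rather the point.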
Worse still, self-decompressing software often behaves in a dangerously different way to its uncompressed original form.
It needs to overwrite itself in memory, which opens up a bunch of security risks associated with executable code that is writable as well as readable, and it typically needs to load any secondary code libraries it uses in a special way.
This makes the program behave differently from how it was designed and officially tested, and could cause it to fail suddenly and unexpectedly even if it apparently started up successfully.
Fortunately, these unavoidable objections spelled the end of Cohen’s well-meaning but ill-thought-out plan to concoct ‘good malware’.
Another approach aimed at creating ‘good’ malware used the concept of modifying an already-widespread and troublesome malware sample into something very similar, but with the avowed intent of setting the second malware sample on a mission to counter-attack the first.
Could a second wrong somehow make a right?
One well-known example of this theory put into practice was a computer worm dubbed CodeGreen, which was rushed out in late 2001, following the dramatic outbreak of the CodeRed malware.
CodeRed hit in July 2001, when a devastating vulnerability in Microsoft’s web server IIS, short for Internet Information Services, opened up a remote code execution hole that could be triggered by a single innocent-looking web request.
All the malware had to do was to use an HTTP GET request to ask for a URL longer than the Microsoft programmers had left space for at their end, thus triggering a buffer overflow, and it was in, and away, and back out again.
CodeRed dedicated 99 parallel threads of execution to generating random lists of new computers to attack, and spewing out HTTP requests in an attempt to infect them in turn.
In modern jargon, this was a fileless, zero-click, remote code execution (RCE) exploit, and it spread very far and very fast, even though Microsoft had patched the bug a month before the outbreak kicked off.
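To illustrate the shape of the trick (and only the shape: the buffer size below is hypothetical, and the padding is inert filler rather than real attack code), here is a sketch in Python of the kind of overlong request involved. The /default.ida path reflects the vulnerable IIS indexing component that CodeRed targeted.

```python
# Illustrative sketch only: the general *shape* of an overlong-URL request
# like CodeRed's. The server-side buffer size is a made-up number, and the
# padding is plain filler bytes, not executable attack code.

BUFFER_SIZE = 240   # hypothetical fixed-size buffer on the server

def build_overlong_request(host: str, padding_len: int) -> bytes:
    # A URL far longer than the server expects means the bytes past the
    # end of the buffer overwrite adjacent memory -- which is how CodeRed
    # redirected execution into its own request data.
    path = "/default.ida?" + "N" * padding_len
    request = (
        f"GET {path} HTTP/1.0\r\n"
        f"Host: {host}\r\n"
        "\r\n"
    )
    return request.encode("ascii")

req = build_overlong_request("victim.example", padding_len=BUFFER_SIZE * 2)
assert len(req) > BUFFER_SIZE   # the URL alone overflows the buffer
```

One malformed request, no file saved to disk, no user interaction: everything the jargon terms fileless and zero-click describe.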
A programmer going by the name Herbert HexXer soon got his moment of fame for proposing and publishing a ‘counter-worm’ dubbed CodeGreen that used the very same exploit to find and infect vulnerable servers.
HexXer’s code differed in that after spreading to a new victim, it would attempt to kill off CodeRed on that server, assuming the server was already infected, and then try to download and apply Microsoft’s patch automatically.
However, as one commenter pointed out rather bluntly at the time:
“Another worm… lovely. […] Some companies [already] have routers failing because of bandwith issues dealing with CodeRed. You also forgot that many companies restrict the rights of users on machines. So, once they are infected, even if you download the patch it might not be installable. […] And what if the patching fails? You’ve just infected a machine with a worm that searches out other hosts to infect. […] How about making a tool that patches machines and isn’t a worm?”
Once again, the consensus was that if it looked like malware, walked like malware, and swam like malware, it was still malware even if it didn’t always quack exactly like malware.
But what if a programmer were to write an app that asked for permission first…
…and then used the very same techniques as the most hated malware samples of the day, such as scraping your address book and aggressively spamming itself to every single one of your contacts, like the notorious Melissa virus of 1999 and the LoveBug worm from 2000?
That was the trick used in 2002 by an ad-spreading app known as FriendGreetings.
If you opened the software on your computer, the first popup would confirm that the software was digitally signed, which was meant to reassure you that the company behind it, Permissioned Media, Inc., wasn’t trying to hide anything.
The app then asked, right up front, for permission to send itself on to everyone in your contact list.
By many accounts, at least some security vendors that decided to block this software on account of its risky behaviour received ‘cease and desist’ letters from Permissioned Media.
The company demanded that they stop blocking the FriendGreetings software, not because it was ‘good malware’ but because they claimed it wasn’t technically malware at all, thanks to asking for permission, so that blocking it was detrimental to Permissioned Media’s business reputation.
Security software and threat blocking tools quickly learned to avoid this sort of legalistic fist-waving by introducing a class of software known variously and unpejoratively as potentially unwanted apps (PUAs, now probably the most widely used term), potentially unwanted programs (PUPs), or potentially unwanted software (you can work that one out for yourself).
Permissioned Media may have won the legal showdown around the word malware, but instead of establishing that ‘good malware’ could exist, we all ended up with a reminder that even apparently legitimate software could be risky enough to block outright anyway.
Another example of malware that was shipped as though it posed no risk at all comes from an unlikely source, namely the laptop vendor Lenovo.
Back in the mid-2010s, Lenovo bought into a product called Visual Discovery from a company by the peculiar name of Superfish, and preinstalled Superfish’s code on a number of its consumer notebook models.
The software was supposed to help with visual search matching, so if you were looking, say, for tables, you wouldn’t just be pointed to other sites that mentioned tables, but specifically to other sites with tables that looked like the ones you’d already showed an interest in.
Unfortunately, the Superfish software included a component that could crack open your encrypted web communications over HTTPS in order to increase the reach of its searches, and chose to do so by preinstalling what’s known as a Certificate Authority (CA) certificate into your browser’s trusted list, allowing the software to masquerade as any website it liked by minting its own fake HTTPS certificates in real time.
(That’s a dangerous but not unheard-of thing to do: many security tools work that way so they can peek inside your traffic for malware while you browse, but you or your sysadmin typically need to approve those special ‘supercertificates’ yourself.)
Superfish, astonishingly, not only installed its own site-snooping certificates, but also signed those certificates with a private signing key (one that could easily be extracted from any computer with the software installed) that was protected with a password that was easy to guess: it was just the name of the company from which Superfish had licensed the software it had then licensed on to Lenovo.
Simply put, anyone who had, or could acquire, a copy of the Visual Discovery software could make and digitally sign their own rogue cryptographic certificates, not only to trick you into trusting a rogue website, but also into trusting and installing any rogue software you downloaded from it.
This wasn’t so much ‘carefully-constructed good malware’ as ‘carelessly-made malicious goodware’; fortunately it wasn’t long before the industry backlash convinced Lenovo to stop its Superfishing activities.
The company also ended up paying a modest financial penalty to the US Federal Trade Commission for its sins, and agreeing never to try anything of the sort in future.
That brings us to our last, and most troublesome, example, where we’re looking at the thorny problem of perfectly good software that ends up deliberately being used to serve the purpose of malware.
Rather than trying to frustrate detection and classification by creating malware that could also be used for good, so-called living off the land attackers (that’s the meaning of LOL in the subheading above) deliberately stick to legitimate software as much as they can, but sneakily use it for malevolent purposes.
(Bins, in case you’re wondering, is techie shorthand for binaries, which is an old-school but still widely-used jargon term for program files or executables.)
Living off the land, as you have probably figured out, is one of those paramilitary metaphors that are popular in cybersecurity these days, and brings to mind the idea of crack troops sneaking into enemy territory in advance of an invasion, and looking after themselves without relying on traditional lines of supply or other assistance from the rest of their armed forces.
LOLbins, therefore, are software tools that cybercrooks rely on for a cyberattack, but without needing to download, install or launch any programs that would obviously stand out as malicious, and without needing to call home for further malware samples that might attract attention.
For example, some ransomware attackers have avoided using their own file-scrambling malware to encrypt their victims’ files; instead, they use a popular archiving tool such as 7-Zip, which can create compressed and encrypted backups of files, folders, and even whole hard disks.
If you already have 7-Zip installed on your computer (many people do, because it’s a free and very handy utility), the crooks don’t even need to download it, but can simply live off the land, usurping software that you’ve already installed to use for good by using it for bad instead.
Threat hunters therefore need the insight and skill to tell the difference between a regular user launching 7-Zip for the laudable purpose of making a secure backup, and sneaky intruders using 7-Zip to create an unauthorised archive file for the purpose of stealing and scrambling your trophy data.
Other popular LOLbin tools on Windows include utilities that are commonly associated with improving or verifying security rather than compromising it, so that their use is unlikely to attract suspicion.
Examples include certutil, usually used for managing digital certificates but also capable of numerous other handy utility functions, such as encoding and decoding binary data as text so it’s easier to copy and paste; and bitsadmin, usually used to configure Microsoft’s Background Intelligent Transfer Service for the positive outcome of fetching official updates, but open to abuse for quietly calling home to rogue sites.
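The encode-binary-as-text trick is easy to reproduce in a few lines of Python, which shows why it is so attractive to attackers: arbitrary binary data becomes harmless-looking text that survives copy-and-paste through almost any channel, and comes back byte-for-byte intact at the other end. (certutil itself does much the same job with its -encode and -decode options.)

```python
# A sketch of the encode/decode-binary-as-text trick that certutil offers,
# reproduced here in Python using standard base64 encoding.

import base64

payload = bytes(range(256))        # arbitrary binary data, including NUL bytes

as_text = base64.b64encode(payload).decode("ascii")   # plain text, paste anywhere
recovered = base64.b64decode(as_text)                 # back to the raw bytes

assert recovered == payload        # the round trip is lossless
```

The same property that makes this handy for legitimate admins (moving binary blobs through text-only channels) makes it handy for crooks smuggling code past filters that only look for binary files.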
The most popular LOLbin tool, however, is Microsoft’s own super-useful PowerShell scripting engine, which allows programs in the form of simple, text-based script files to automate almost any aspect of Windows network management.
The PowerShell system is not only powerful enough to program all aspects of a full-blown ransomware attack in the PowerShell language alone, but also allows the program you want to run to be specified as a simple (albeit long) text string right on the command line, encoded in base64 form and therefore obscured from view:
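Here is a sketch, written in Python for convenience, of how such an obscured command line is produced. PowerShell’s -EncodedCommand option expects the script rendered as UTF-16LE text and then base64-encoded, so even a harmless one-liner turns into an opaque string:

```python
# How a PowerShell -EncodedCommand payload is constructed: the script text
# is encoded as UTF-16LE, then base64-encoded. The script below is a
# deliberately harmless example.

import base64

script = 'Write-Output "hello"'
encoded = base64.b64encode(script.encode("utf-16-le")).decode("ascii")

# The resulting invocation would look something like:
#   powershell.exe -EncodedCommand VwByAGkAdABlAC0A...

# Decoding reverses the trick, which is exactly what defenders and forensic
# analysts do when they find such a command line in their logs:
assert base64.b64decode(encoded).decode("utf-16-le") == script
```

Note that the obscuring is trivially reversible; its purpose is not strong secrecy but simply to stop the script’s contents from showing up in casual inspection of the command line.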
This turns the code into so-called fileless malware, where the malicious code that runs is never saved to disk, making it hard to detect in the first place, and difficult or even impossible to recover for forensic analysis after an attack has happened.
Unfortunately, blocking PowerShell outright is as good as impossible, because a huge number of popular sysadmin tools, along with numerous utilities built into Windows, are themselves coded in PowerShell and rely on it being readily available.
Telling good software from bad is not as easy as you might think, and telling good software invocations from bad can be harder still.
On one hand, software that has shown up innocently and positively in your logs in the past might nevertheless be dangerous to keep around, making it a candidate for deletion.
On the other hand, not all potentially dangerous software is safe to delete, especially if it’s an official utility that your computer or your sysadmins can’t manage without.
As you can imagine, prompt and effective threat detection and response under these conditions can take more time and money than you have available, even if you have the necessary skills already and are able to keep those skills up to date.
Why not ask how SolCyber can simplify your cybersecurity journey?
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Featured image by Jastro via Wikimedia, licensed under CC-BY-3.0.