Many legitimate products and services, from programming libraries to adversary simulation tools, can directly be used either for good or for bad.
If, when, and how to use them are therefore important moral and ethical matters that can’t be solved by automation or by one-size-fits-all rules and regulations.
If you started out doing internet programming last century, before online resources such as CodeProject.com (which, sadly, is in read-only mode these days), GitHub, and search engines were available, you probably wrestled with the weighty but frankly indispensable Unix Network Programming books of the late W. Richard Stevens.
Most software companies had well-worn copies of these books on their shelves, along with multiple copies of The C Programming Language book by Brian Kernighan and the late Dennis Ritchie, the original creator of C.
In case you’re wondering, the language C got its name because it was a refinement of B, which probably got its name because it was derived from BCPL, short for Basic Combined Programming Language. Today, we also have D, named because it was derived from C, and the very widely-used language C++, a direct successor to C, so named because the double-plus sign is C shorthand for increment, or “increase the value of”.
If you didn’t have Stevens’s books handy – and probably even if you did – you undoubtedly opened and perused the Unix man page about the sockaddr data structure, short for socket network address specifier, any number of times.
Likewise for the manual page for creating a new socket object to connect() and send() data, or to listen() and accept() connections, which had a dizzying array of settings to take into account.
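Those manual pages document the C API, but the same call sequence carries over almost unchanged into modern scripting languages. As a sketch (not taken from Stevens’s books; the addresses and messages are made up), here is the classic socket()/bind()/listen()/accept() versus socket()/connect()/send() dance in Python, whose socket module mirrors the C calls nearly one-for-one:

```python
import socket
import threading

def run_server(server_sock):
    # The listening side: accept() blocks until a client arrives.
    conn, addr = server_sock.accept()
    data = conn.recv(1024)              # read the client's bytes
    conn.sendall(b"echo: " + data)      # reply, then hang up
    conn.close()

# Server side: socket() / bind() / listen(), just as in the C man pages.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))           # port 0 means "pick any free port"
server.listen(1)
port = server.getsockname()[1]
worker = threading.Thread(target=run_server, args=(server,))
worker.start()

# Client side: socket() / connect() / send() / recv(); the (host, port)
# tuple stands in for the C sockaddr structure.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
worker.join()
server.close()
print(reply.decode())                   # prints: echo: hello
```

Even with the housekeeping hidden, you can see how many separate steps the C-style API demands before a single byte moves.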
The complexity, indeed the arcana, of network programming made it difficult to learn, hard to master, and worryingly easy to get wrong, which didn’t work out well for cybersecurity.
Ironically, a programming process that is conceptually and technically complicated enough to deter many would-be cybercriminals from abusing it…
…is not, as some people seem to assume, a useful security measure.
Firstly, at least some cybercriminals will have enough skill and persistence to figure it out, after which they can (and will) share or sell on their malware to others of similar mind.
Secondly, programming complexity is the enemy of correctness, leaving software vendors more likely to field defective programs that contain security bugs, albeit unintentionally.
Bugs of this sort, known in the jargon as vulnerabilities, are often exploitable by cybercriminals, a risk that is generally more significant in networking software that connects to or accepts connections from other people.
Thirdly, the criminals who write or buy in malware generally don’t have to care about the quality of their own code, as long as it works well enough to steal your data, to subject you to blackmail, to plunder your bank account, or whatever else the attackers have in mind.
In other words, badly-written goodware can put you at additional risk of malware.
And malware, which puts you at risk anyway, will put you at even more risk if it has bugs of its own.
Occasionally, careless bugs in malware work against the criminals by giving law enforcement a chance to counter-attack, and thereby collect useful intelligence, identify perpetrators, or take down malicious online services.
But badly-written malware, which can be considered the rule rather than the exception, very often makes an attack worse, for example by crashing systems, or by opening up yet more security holes for other criminals to exploit.
Back in November 1988, for example, 36 years ago to the month, the notorious Internet Worm was unleashed, also known as the Morris Worm after the name of its author, Robert Morris Jr.
Morris not only knew more than enough about Unix networking and network programming to create a fast-spreading worm, but also knew how to find and exploit a code execution vulnerability in a widely-used system network tool called fingerd that was supposed to provide a community-friendly way of locating friends and colleagues online.
He abused the fingerd service not to contact his friends, but to break into other people’s servers and command them to propagate the worm further, leaving the finger of distrust (if you will pardon the pun) pointing at his victims and not at himself.
And Morris’s malware had bugs of its own: notably, he underestimated how effectively it would spread, and programmed it to replicate so aggressively that it brought the entire internet to its knees with the traffic it generated.
The worm therefore affected everyone, even if their own networks and servers were invulnerable to it, or already protected against it.
Embarrassingly, Morris Jr. was the son of Bob Morris, Chief Scientist at the US National Security Agency. Morris Jr. was tracked down, arrested and charged as a cybercriminal, infamously becoming the first person convicted under the American Computer Fraud and Abuse Act, passed in 1986. He was fined $10,050 and sentenced to 400 hours of community service plus three years of probation. He learned his lesson and did not re-offend.
If having a high barrier of entry for network programming isn’t a useful preventative against cybercrime, what happens if we make programming easier, and significantly lower the barrier of entry?
Is there a risk that this might end up encouraging and enabling cybercrime?
When we make goodware easier to write, and thereby help even inexperienced or non-technical programmers to avoid certain common types of bug, which is surely a good thing…
…do we paradoxically, if inadvertently, lure more and more lawbreakers into having a crack at cybercrime, thus leading to more malware, too?
The popular scripting languages available today, including Perl, Python, Ruby, JavaScript, Bash, PowerShell, Lua and many others, turn the complex-looking probetelnet() code that we extracted above from the Morris Worm into trivial one-liners.
In Lua, for instance, with a suitable add-on library plugged into the basic Lua coding engine, checking to see if telnet is open on port 23 on a remote computer of your choice, as Morris’s code did, could be as easy as shown below.
We ran each network probe as a single command, entered directly on its own line at the Lua prompt:
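The same check is just as short in Python, for instance. Here’s a sketch (the probetelnet name merely echoes the worm’s function, and the target host is a placeholder, not a suggestion):

```python
import socket

def probetelnet(host, port=23, timeout=2):
    # Loosely mimic the Morris Worm's check: see whether a TCP connection
    # to the telnet port succeeds, then drop the connection immediately.
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

# False on most machines, unless a telnet daemon really is listening:
print(probetelnet("127.0.0.1"))
```

One function call does the address lookup, the socket creation, the connect and the timeout handling that once took dozens of lines of C.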
Likewise, the complexity of setting up a fake web server that will accept encrypted connections via HTTPS and produce an entirely customized reply of your choice, with every byte of every header directly under your own control, could be reduced to a short snippet.
The details of how to open a listening network socket and accept connections, how to handle multiple remote requests at the same time, how to load and use the certificate and private key files, how to perform a TLS handshake, what to do if certificate validation fails, how to deconstruct received HTTP requests to extract the command, headers and body data, what to do about errors, how to write a nicely-formatted event log…
…all those things are invisibly taken care of, leaving the programmer with little more to do than write a single function that’s called whenever a request comes in, decide which key and certificate to use, and start up the server.
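For illustration, here’s a standard-library Python sketch of such a server (not the original snippet; the site-ec256.cert and site-ec256.key file names are placeholders for a real certificate and private key):

```python
import http.server
import ssl

class FakeHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Simulate HTTP 451 ("Unavailable For Legal Reasons") for every
        # request; parsing, socket handling and request logging all come
        # for free from the base class.
        body = b"Unavailable For Legal Reasons\n"
        self.send_response(451)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def make_server(certfile=None, keyfile=None, port=0):
    httpd = http.server.HTTPServer(("127.0.0.1", port), FakeHandler)
    if certfile and keyfile:
        # One extra step turns plain HTTP into HTTPS: wrap the listening
        # socket in TLS and the handshake happens automatically.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain(certfile, keyfile)
        httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    return httpd

# make_server("site-ec256.cert", "site-ec256.key", 443).serve_forever()
```

The programmer writes one handler function and names a key and certificate; everything else, including a log line for every visitor, happens behind the scenes.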
Visitors will receive properly-formed replies (the code above just simulates an HTTP 451 error for every request), and the server will produce a convenient record of anyone who came looking.
Even acquiring the necessary TLS private key and HTTPS certificate (the files site-ec256.cert and site-ec256.key above) to give the site a legitimate look is a matter of running a single, easily-remembered command that automates the process.
Below, we used a free online service called Let’s Encrypt and a simple, open-source script program called acme.sh, one of numerous free tools available to take care of ACME, short for Automatic Certificate Management Environment.
These tools automatically agree to the appropriate legal terms, generate the raw cryptographic data needed, and create a so-called certificate signing request that they submit to the certificate signing service on your behalf.
Until a few years ago this process required a mixture of time, technical aptitude, bureaucratic skill, and money.
The acme.sh script correctly requested the necessary certificate files, including automatically validating our web server against Let’s Encrypt’s verification checks, in just 11 seconds. The script also downloaded the generated key and certificate files, entirely free of charge.
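In webroot mode, the whole request boils down to a single command along these lines (a sketch: the domain and webroot path are placeholders for your own values, and the ec-256 key length matches the file names mentioned earlier):

```shell
# Ask acme.sh to issue a Let's Encrypt certificate for your domain,
# proving control of the site by placing a validation file in its webroot.
acme.sh --issue -d example.com -w /var/www/example.com --keylength ec-256
```

The script handles the key generation, the certificate signing request, the validation challenge and the download, with no human in the loop.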
When the ACME protocol and Let’s Encrypt burst into the cybersecurity ecosystem in 2015, many experts were immediately supportive, given that it removed the technical and financial barriers that discouraged many sites, such as those run by small businesses, charities and hobbyists, from adopting HTTPS.
As we explained in a handy article series earlier this year, HTTPS isn’t there merely to prevent criminals or spies from snooping on us as we browse.
Encrypted web connections also prevent the content we receive from being sneakily and undetectably modified along the way in order to feed us scammy news or implant malware on our computers.
Nevertheless, a vocal minority in the internet community, including many website owners and operators, blasted Let’s Encrypt and its certificate automation as an unwanted extra step in their digital lives.
They declaimed that making certificates easy to get, and urging everyone to adopt HTTPS, was nothing more than a hassle for them, while at the same time making it trivial for cybercriminals to acquire an HTTPS “badge of honor” to legitimize their scamming and phishing sites.
This, of course, ignored the fact that criminals who wanted to could already easily get hold of HTTPS-protected web services for cyberattack purposes.
One way was to hack into someone else’s website (or to pay another criminal for access) and then serve up rogue content from an innocent-looking corner of that “secure” site until the site’s owner spotted and fixed the intrusion.
Another way was simply to sign up with a cloud-based web service that included umbrella HTTPS protection for all users, and then to default on payment once the scam campaign was over and the damage already done.
All the concerns we have looked at so far seem very modest when they’re compared with a more serious issue, namely, “What about software that is considered legitimate and useful, yet includes features that are commonly associated with malware?”
A well-known example is the incredibly useful tool Nmap, short for Network Mapper, which can probe networks of all shapes and sizes in order to find connected devices (including ones that aren’t supposed to be there), identify them and the operating systems they are running, and even to probe them to see which network services from what vendors are currently running on them.
In its own words:
Nmap (“Network Mapper”) is a free and open source utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. […] It was designed to rapidly scan large networks, but works fine against single hosts.
Here’s the nmap program tested against a demonstration device provided by Nmap itself, showing how it can enumerate services and detect the remote operating system:
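A scan of that sort boils down to a single command (shown here as a sketch, with the output omitted; note that OS fingerprinting needs root privileges):

```shell
# Probe Nmap's own purpose-built test host, with service and version
# detection (-sV) and operating system fingerprinting (-O).
nmap -sV -O scanme.nmap.org
```

Nmap’s documentation explicitly invites you to scan scanme.nmap.org for testing purposes, but not to hammer it, and certainly not to point the same command at networks you don’t have permission to probe.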
As you can imagine, what’s good for the goose is unfortunately good for the gander, and exploratory tools such as this are directly useful to cybercriminals.
Nmap even includes a wide range of options denoted in the documentation as ‘stealthy’, which make the software run more slowly and cautiously in the hope of avoiding detection by security-aware system administrators.
The ease with which software such as Nmap could be used for bad instead of for good sometimes leads to calls for tools of this type to be regulated, or even suppressed.
But banning Nmap would deny it to security-aware sysadmins, to whom it is astonishingly useful (especially at its price point of $0) for all the reasons listed above.
Nmap will help if you’re looking for rogue wireless routers, or forgotten servers, or laptops plugged in where they shouldn’t be, or for what’s known as shadow IT, where well-meaning colleagues set up their own unofficial servers and services, even though they know they shouldn’t.
And trying to regulate software of this sort to limit its distribution is unlikely to help either, as we found out in the 1990s when the US tried to regulate cryptographic software as if it were military munitions, only to realize that the same software was readily available from other countries anyway, thus disadvantaging the US software industry with no security upside.
If Nmap were restricted or controlled, users who could benefit from it might be reluctant to use it for fear of falling foul of the rules, while cybercriminals wouldn’t comply with those regulations (they’re criminals, after all), and would have no difficulty getting hold of the product unlawfully.
As an example of just how hard it is to keep control of software that’s sold as goodware but can be used as malware, consider the commercial product Cobalt Strike, one of numerous products in the Fortra cybersecurity stable.
Very loosely put, Cobalt Strike is malware in form and function, if not in a commercial and contractual sense, because it works exactly like zombie malware, and is openly advertised as Software for Adversary Simulations and Red Team Operations that replicates the tactics and techniques of an actual advanced adversary in a network:
Cobalt Strike gives you a post-exploitation agent and covert channels to emulate a quiet long-term embedded actor in your customer’s network. Malleable [command-and-control] lets you change your network indicators to look like different malware each time. These tools complement Cobalt Strike’s solid social engineering process, its robust collaboration capability, and unique reports designed to aid blue team training.
However, despite being an expensive, closed-source product with strict licensing rules and protections, the tool has become what the UK’s National Crime Agency (NCA) described, in June 2024, as the go-to network intrusion tool for unlawful attacks.
Following successful take-down operations against criminals who were using it, the NCA reported:
Since the mid 2010s, pirated and unlicensed versions of the software downloaded by criminals from illegal marketplaces and the dark web have gained a reputation as the ‘go-to’ network intrusion tool for those seeking to build a cyber attack, allowing them to deploy ransomware at speed and at scale.
Due to the range of tools, free training guides and videos that come with legal versions of the software, those adopting it for criminal use require low levels of sophistication and money.
This disruption activity represents more than two-and-a-half years of NCA-led international law enforcement and private industry collaboration to identify, monitor and denigrate its use.
Most goodware is obviously intended for good, and doesn’t open itself up to immediate and direct abuse as if it were malware.
A text editor, for instance, such as NOTEPAD, can be used with evil results, for example by modifying a vital configuration file to turn off important security protections.
But it would be an impossible leap to describe a program such as NOTEPAD as “malware-like”, or “cyberattack-capable” on those grounds.
Likewise, a lot of malware is unequivocally bad, so that whenever it is detected, it can and should be blocked and removed immediately, with no right of appeal or stay of execution.
A banking Trojan, for example, that is deliberately programmed to wait for you to connect to a specific web page and then spring into action to steal your password and MFA code out of memory as you type them in, has no legitimate use to justify its existence.
But many legitimate products and services, from programming libraries, through network scanners, all the way to adversary simulation tools, can directly be used either for good or for bad.
If, when, and how to use them are therefore important moral and ethical matters that can’t be solved by automation or by one-size-fits-all rules and regulations.
Here are some brief tips to help you get the human side of “potentially malicious goodware” right, and to ensure that you and your colleagues follow a path of Breaking Good, not Bad:
If you need help, be sure to choose a managed security provider like SolCyber who will not only participate in your cybersecurity culture, but also actively help you to build and improve it.
Why not ask how SolCyber can help you do cybersecurity in the most human-friendly way? Don’t get stuck behind an ever-expanding convoy of security tools that leave you at the whim of policies and procedures that are dictated by the tools, even though they don’t suit your IT team, your colleagues, or your customers!
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Featured image of red hand by Julian Gentile via Unsplash.