Secrecy sounds as though it should be the centrepiece of cybersecurity, perhaps even sufficient all on its own to keep your online operations safe.
After all, if the cybercriminals don’t know what network software you’re using, which operating systems it’s running on, what threat detection tools you’re using, which network ports it’s communicating over, what format your data is in, which encryption algorithms are in play, who’s viewing that data, how and where they’re doing so, or what they’re even doing with it…
…what could possibly go wrong?
How can attackers hack you if they don’t know what to look for or how to find it, and if it wouldn’t mean anything to them anyway?
And why would you give out security hints or signals that you don’t need to?
For example, you probably don’t have your home address printed on your front door keyring, just in case you lose it.
An address attached to your key would tell anyone who found it exactly which door it opened.
That’s an example of what’s often referred to as security through obscurity, where you deliberately shroud things that are intrinsically risky (such as losing your house key) in a layer of mystery (such as by not tagging it with your address), in the hope of preserving your safety and security anyway.
But secrets come with a deep and troublesome problem.
Benjamin Franklin famously summarised it back in the 18th century, in his remarkable annual publication Poor Richard’s Almanack: “Three may keep a secret, if two of them are dead.”
As Franklin pithily reminds us, security that depends upon obscurity is no security at all once its obscurity breaks down: when you delete the obscurity, you unavoidably delete the security as well.
Leaving your home address off your key fob doesn’t make the security of your front door any worse, and it may buy you enough time to get the lock changed if ever you do lose track of your keys, so you can chalk it up as a modest and inexpensive benefit.
But keeping a spare copy of your door key hidden somewhere on your property in case you lose your own key is an example of security by obscurity that works in the other direction.
Having two copies of your key, one of which is hidden under the second plant pot to the left (because you wouldn’t be so reckless as to leave it under the doormat), clearly increases the risk of compromise, because criminals can now acquire a genuine key to your house in two different ways.
They could find your regular key if you lose it, and somehow figure out which lock it fits, which is the risk we’ve already described.
Or they could find out the secret of your hidden key, and use that one instead, which adds an entirely new risk without reducing your existing risk in any way.
Worse still is that although you’re likely to notice fairly quickly if you lose your own key, and to take precautions such as changing the lock, you’re unlikely to know proactively that someone has figured out the location of your secret hiding place.
You’re more likely to find out reactively that the obscurity protecting your security has broken down by returning home to find that your house has been burgled and that lots of your important stuff has gone missing.
Loosely speaking, the best you can hope to get out of security through obscurity is that you won’t be any worse off, but the worst you can expect is that you’ll actually be less secure, and more likely to suffer a security breach as a result.
An intriguing tale of security through obscurity can be found in a technology known as CSS, which in this story doesn’t stand for cascading style sheets, as you might think (that sort of CSS is how websites manage their look and feel), but for the modestly ungrammatical Content Scramble System.
The scrambling flavour of CSS was developed in the 1990s by electronics giants Matsushita and Toshiba and pitched as providing “reasonable security for content on DVD discs and thereby […providing] protection for such copyrighted content against unauthorised consumer copying.”
The system was essentially a cryptographic algorithm that used secret keys built into DVD players made by vendors who had paid the not-insignificant membership fees to join the DVD Copy Control Association (DVD CCA), together with keys burned onto each protected disc.
CSS aimed to provide features such as copy protection and region protection, so that discs offered cheaply in one part of the world wouldn’t work in DVD players bought in other countries where consumers would tolerate higher retail prices.
Discs didn’t have to be CSS protected, and DVD players didn’t have to include the CSS software, but protected discs generally couldn’t be used in DVD players from non-member vendors.
This, unsurprisingly, prevented open-source operating systems such as Linux or the BSDs from directly supporting the playback of protected DVDs, because the CSS algorithm was a trade secret.
CSS made it impossible to ‘rip’ legally-purchased DVDs, so they couldn’t be backed up as disk files or watched on a computer from an image of the disc.
Members of the DVD CCA were obliged to designate a single individual in their business to receive and take care of information deemed confidential and highly confidential, and to protect the CSS ecosystem by using the principle of security through obscurity.
You can therefore probably imagine what happened: a bunch of young and exuberant hackers (and I am using the word in an entirely neutral sense here) blew away the obscurity by extracting the algorithm from a commercial DVD player that had not shrouded the CSS code as zealously as the DVD CCA would have liked.
In 1999, they produced a reimplementation of CSS that they used in software called DeCSS, which could convert scrambled DVDs into discs or disk images that could be played on any DVD player, or by software on any operating system.
Although their original DVD ripping program was closed-source and for Windows only, the CSS code soon found its way onto the internet, and that was the end of the obscurity of CSS, and thus its secrecy, and with it the end of any “reasonable security” that the CSS system claimed to provide.
Only one of the programmers was publicly known (the others remained anonymous): Norwegian teenager Jon Lech Johansen, who was dragged through the Norwegian criminal courts for several years, apparently following a complaint from the DVD CCA, until the case and the complaint were finally abandoned in 2004.
Presumably, the CCA ultimately decided that it was neither a convincing advertisement for its own technical prowess, nor a popular social stance, to railroad a young programmer for ‘defeating’ a so-called security system that turned out not to be up to scratch anyway.
Analysis of the leaked DeCSS code revealed cryptographic weaknesses that made CSS about 16,000,000 times weaker than it was supposed to be, allowing DVD keys to be cracked automatically in just a few minutes, even back in 1999, without needing any data extracted from an insecure DVD player.
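To put that factor into perspective: CSS keys were just 40 bits long, so a brute-force attack ought to take roughly 2^40 tries, but if a key can be recovered with only about 2^16 units of work, as contemporary analyses of the algorithm suggested, the shortfall is 2^40 ÷ 2^16 = 2^24, which is close to 16,800,000.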
Ripping away the obscurity of the CSS algorithm may have made DVD piracy and region-switching easier, but it also served to educate the community, not only the members of the CCA but also the owners of legitimately-purchased discs, that the CSS system didn’t really live up to its claims.
The cybersecurity and open-source community reacted with a fascinating mixture of wit and vexation.
Firstly, the suggestion that it should be illegal merely to publish source code that showed how a so-called security system worked was considered draconian; secondly, the idea of suppressing such code through the courts even after it had become widely available was considered counterproductive.
Legalistic machinations intended to return an algorithm to obscurity in the hope of somehow re-establishing its ‘security’, critics argued, were a fool’s errand that was not merely arrogant but dangerous.
Cybersecurity agility, it scarcely needs saying, demands that we build a collective ability to move forward quickly from insecure policies and procedures.
We need to leave defunct code and dysfunctional protection behind as soon as we reasonably can, rather than using every means at our disposal, including non-technical methods such as the courts, to come up with reasons to let us carry on as we were.
One hacker’s reaction to the DeCSS lawsuits was particularly amusing.
An unadorned re-implementation of the CSS source code in C can be compressed to under 2000 bytes using gzip (a popular data compression tool at the time), and gzipped data includes a built-in length counter, so it can still reliably be decompressed even if additional bytes are tacked on the end.
An enterprising programmer called Phil Carmody therefore appended some zero bytes to the compressed code, and considered the resulting byte string as an extremely large positive integer with thousands of digits, including a bunch of zeros at the end he could now modify at will.
All numbers in base 256 (that’s when each full byte of 8 bits is treated as if it were a ‘digit’, so each number is an exact multiple of 8 bits long) that end with the byte zero are even, so they can’t be prime numbers.
The only even prime number is 2, and every other even number is a multiple of 2, so all those other even numbers are non-prime, or composite in mathematical jargon.
Carmody added 1 to create an odd number, and then proceeded to test that number and every subsequent odd number to see if it might be prime. (Testing for primality is not a formal proof, because it doesn’t actually try every possibility to eliminate all possible divisors, but Carmody used a probabilistic algorithm that is widely used by cryptographic code, and that is accepted as statistically safe enough in practice to be considered equivalent to a proof.)
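If you’re wondering how the number-juggling fits together, here’s a minimal sketch in Python (not Carmody’s actual code, and using a short stand-in string rather than the real CSS source): it gzips the data, pads it with zero bytes, reads the whole thing as one enormous base-256 integer, and then steps through odd numbers until a Miller-Rabin test, the kind of probabilistic screening described above, reports a probable prime. The final ECPP proof is a much bigger job, so it isn’t shown here.

```python
import gzip
import random
import zlib

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test (a screening step, not a proof)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False    # definitely composite
    return True             # almost certainly prime

# A short stand-in for the CSS source code (the real thing is longer).
source = b"/* pretend this is the CSS algorithm in C */\n"

# Compress it, then append zero bytes that we are free to modify later.
padded = gzip.compress(source) + b"\x00" * 8

# Treat the whole byte string as one huge base-256 integer,
# most significant byte first.
n = int.from_bytes(padded, "big")

# The number ends in a zero byte, so it is even; add 1 to make it odd,
# then step through odd numbers until one passes the probabilistic test.
candidate = n + 1
while not is_probable_prime(candidate):
    candidate += 2

print("Probable prime found, with", len(str(candidate)), "decimal digits")

# The bytes we changed are all at the end, after the gzip stream itself,
# so the compressed data still decompresses to the original source;
# zlib in gzip mode simply leaves the trailing bytes untouched.
as_bytes = candidate.to_bytes(len(padded), "big")
d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
assert d.decompress(as_bytes) == source
```

For an integer of this size, only a few hundred odd candidates typically need to be tested before a probable prime turns up, which is what makes the whole exercise practical, even for numbers with hundreds or thousands of digits.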
He then filtered his ‘probable primes’ with a much more intensive algorithm known as ECPP, short for elliptic curve primality proving, to find a number for which he could provide a formal proof of primality: evidence that would be sufficient to satisfy even a pure mathematician, not merely to convince a browser trying to generate a one-time cryptographic key.
Carmody then published this prime number on a website dedicated to notable prime numbers, based on the criterion that it was, at the time, the tenth-largest number ever proved prime using ECPP, making it worthy of being known.
You can see the meaningful irony here.
What Carmody published online was merely a number, but a sufficiently interesting number that it was reasonable, useful, and even important, to make it publicly available in a list of large primes.
It was also, thanks to its primality and the way it was proved prime, a number that was interestingly and informatively connected with two aspects of cryptography: the reliable generation of prime numbers, which is vital in contemporary public-key cryptography; and the use of elliptic curves, increasingly popular for cryptographic purposes today because they are faster and more compact than the older RSA algorithm for the same level of security.
Yet this number also just happened to represent the CSS source code, which some people thought should be illegal to publish, or perhaps even to have a copy of, simply so they could prolong the obscurity of a cryptographic algorithm that hadn’t stood up to public scrutiny once it became known.
But how could a number possibly be illegal?
(It couldn’t, and you can find it online to this day, though it’s more of a curio now, famous for being the first ‘illegal prime’ rather than for being a particularly large ECPP-proved prime any more, given how much faster computers are today.)
More importantly, how could security through obscurity ever replace security by design, with or without involving the legal system?
That’s a rhetorical question, of course, because security by obscurity is never satisfactory on its own.
Even if security by obscurity is layered on top of a system that was designed to be secure in its own right, your overall security may ultimately be reduced, because adding additional complexity (especially complexity intended to create an unusual or unexpectedly opaque configuration) never makes a system easier to review for correctness.
Now read Part 2, where we dig further into the history of security through obscurity, and the intriguing controversies it has created over many years…
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!