Going public with bugs, vulnerabilities and other security research items, instead of suppressing them, has stirred up controversy for hundreds of years.
But security through obscurity, as it’s known, is a risky system to rely upon, because once your secret is out, you can’t turn it back into a secret again.
In Part 1, we described some of the things that fall under the category of security through obscurity.
As we argued last week, it’s hard to see how any of these obscurities make your security worse if you do them in addition to some sort of security by design.
On the other hand, it’s easy to let the pursuit of obscurity distract you from the importance of taking security measures that remain effective even after the obscurity is stripped away.
And obscurity that is based simply on “let’s hope no one else figures it out” is easily blown apart.
Let’s start with the operating systems you use: these are easy to guess, because they almost certainly include some or all of Windows, Linux and macOS, together with a mixture of Android and iOS on your mobile devices.
Anyway, the software you use often gives you away, by blurting out details about itself and the system it’s running on when you connect outwards to other networks, or accept connections from other people.
For example, Firefox and Chromium announce themselves by default at the start of every request you make to every web server you visit, typically via headers such as User-Agent that spell out the browser family, its version number and the operating system underneath.
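Here’s a rough sketch in Python of the sort of self-identification involved; the User-Agent value below is an illustrative Firefox-style string rather than the exact text any particular browser will send, and example.com simply stands in for whatever site you happen to visit:

    # A minimal sketch, assuming Python 3; nothing is actually sent over the
    # network here - we merely build a request and print the headers that
    # would accompany it.
    import urllib.request

    req = urllib.request.Request(
        "https://example.com/",   # stand-in for any site you might visit
        headers={
            # Browsers volunteer their name, version, operating system and more:
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) "
                          "Gecko/20100101 Firefox/115.0",
            "Accept": "text/html,application/xhtml+xml",
            "Accept-Language": "en-GB,en;q=0.5",
        },
    )

    print("Headers announced with the request:")
    for name, value in req.header_items():
        print(f"   {name}: {value}")

Run it and you can see just how much a single request volunteers before a single byte of content has come back.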
And the replies you get back can be just as talkative: one sample reply we received came from a website that let visitors know it was running on an Elastic Cloud Server.
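If you’re curious what the sites you use say about themselves, a few lines of Python will print the reply headers for you; as before, example.com is merely a placeholder, and what you see, if anything, will vary from server to server:

    # A minimal sketch, assuming Python 3 and network access.
    import urllib.request

    with urllib.request.urlopen("https://example.com/") as reply:
        print(reply.status, reply.reason)
        for name, value in reply.getheaders():
            # Watch for headers such as "Server:", which often reveal the web
            # server software, its version, or the hosting platform in use.
            print(f"   {name}: {value}")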
Obscurity is hard to maintain, and it only needs to be de-obscured once to be lost forever.
In Part 1, we revisited the salutary tale of the DVD industry’s Content Scramble System (CSS), a cryptographic anti-piracy and region-locking algorithm that was never disclosed for public scrutiny, but instead relied on being kept under wraps as a trade secret.
The confidential code was extracted from a legally purchased DVD player, circulated widely on the internet, and actively used in free software called DeCSS, which made it possible for users to back up their legally purchased DVDs, to watch them in software on open-source operating systems, and to analyse and critique the now-unsecret software.
The CSS Association made a lengthy (and understandably unpopular) attempt to use the legal system, including the criminal courts, to ‘re-obscure’ the published code, in what seemed to be an effort to bury it once more under a blanket of trade secrecy.
As we found last week, that simply didn’t work, and arguably served only to draw much more attention not only to the availability of the leaked code, but also to the existence of the DeCSS tools, which admittedly did simplify DVD piracy.
A cryptography enthusiast called Phil Carmody went out of his way to convert the very source code that the CSS Association thought should be illegal to publish, perhaps even to possess, into a mathematically interesting prime number that could hardly be considered objectionable, let alone unlawful.
Carmody’s idea was to draw attention to the technical folly of relying on obscurity for online safety and security, especially in the field of cryptography.
You can’t ban a number, at least in any general sense, and given that any source code file can be represented as a number, it doesn’t make a lot of sense to try to demand that leaked source code be turned back into a secret once it has become known.
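If that sounds abstract, here’s a tiny Python sketch of the idea; the filename decss.c is purely illustrative, and any file at all would do:

    # A minimal sketch: any file is just a (very large) number, and vice versa.
    with open("decss.c", "rb") as f:         # illustrative filename - use any file you like
        data = f.read()

    as_number = int.from_bytes(data, "big")  # the entire file as one integer
    print(as_number)                         # banning the code therefore means banning a number

    # The round trip recovers the original bytes exactly
    # (assuming the file doesn't begin with a zero byte):
    recovered = as_number.to_bytes((as_number.bit_length() + 7) // 8, "big")
    assert recovered == data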
Carmody’s prime number project wasn’t the first time that security through obscurity had been thrown into the cryptographic spotlight.
That honour goes to Dutch cryptographer Auguste Kerckhoffs, a polymath whose day job was Professor of German at a French university.
In a seminal 1883 text entitled La Cryptographie Militaire (Military Cryptography), published in two parts, he offered the strong opinion that in any security system involving any sort of secret key, the security of the system should depend only on the secrecy of that key.
The secrecy of the system itself, or of the algorithm, or of the device used to perform the encryption, should not be required.
After all, keys are designed so that everyone can have their own, and so they can be changed as needed; in contrast, the system itself is designed to be the same for everyone, as a matter of convenience, reliability and practicability.
Kerckhoffs argued that it was dangerous to assume that the enemy (or, in the CSS case, a handful of inquisitive youngsters) would be unable to discover how your system worked.
In fact, a wise security designer should assume that the Bad Guys have a copy of it right from the start.
Kerckhoffs’ advice, now universally accepted and followed by reputable cryptographers, was to design the system so that its security did not depend on the sort of secrecy that would be too complex to plan for in the first place, and impossible to maintain as a result.
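In modern terms, that means using openly specified algorithms and treating the key as the only secret. As a small illustrative sketch, using the openly documented Fernet scheme from the third-party Python package cryptography (chosen here purely for illustration):

    # A minimal sketch of Kerckhoffs' principle in practice: the algorithm, the
    # library source code and this script can all be public; only the key is secret.
    # Requires:  pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # the one and only secret in the system
    box = Fernet(key)

    ciphertext = box.encrypt(b"attack at dawn")
    print(ciphertext)                  # safe to reveal; useless without the key

    print(box.decrypt(ciphertext))     # anyone holding the key can recover the text;
                                       # if the key leaks, generate a new one and re-encrypt

The design choice is exactly the one Kerckhoffs advocated: keys can be replaced in moments, while a compromised algorithm can’t.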
But security through obscurity isn’t relevant only to cryptography.
Thirty years earlier, in 1851, renowned American locksmith Alfred Hobbs famously travelled to the Great Exhibition at Crystal Palace in London, England.
Hobbs successfully and dramatically picked locks from the British Empire’s best-known lock-makers, creating quite a social stir (and not coincidentally using his sense of showmanship to close numerous sales).
Hobbs firmly believed that keeping known vulnerabilities secret was a bad idea, even though his detractors claimed that revealing them would do little or nothing to help customers, who didn’t need to pick their own locks because they had the key, but would enable a wave of crime by teaching crooks how to break in.
Hobbs subsequently wrote, in a book entitled Locks and Safes – The Construction of Locks, a surprisingly prescient introduction arguing that:
“Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery. […]
[S]urely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance.”
Hobbs was well ahead of his time: in the modern era of cybersecurity we have ended up in a position of which he would surely approve, given that a significant number of experts have embraced what’s known as the responsible disclosure of security holes and vulnerabilities.
The idea is that the discoverers of security bugs, as well as asking to be paid finders’ bounties, are allowed to go public with the details of their work, and thereby to blow their own trumpets and to advertise their bug-hunting abilities…
…but they should agree a reasonable ‘silent period’ with the provider of the affected product or service first.
During this silent time, they give the provider full and frank access to their findings so far, in return for an agreed date on which they can talk as openly as they like about the issues involved, and indeed be the first to do so.
The theory is simple: vendors get a fair chance to fix problems that, unlike in Hobbs’s time, often aren’t yet known to the rogues, but unscrupulous or incompetent vendors don’t get to sweep issues under the carpet.
Vendors are kept honest, the proponents of responsible disclosure insist, because there is a clearly agreed date on which information about the bug will be revealed, on the grounds that it is “necessary to give fair play to those who might suffer by ignorance,” as Hobbs unbeatably described it some 170 years ago.
In summary, then, security through obscurity is fine if it represents nothing more than a disinclination on your part to tell everyone exactly what you’re doing, assuming that your underlying security doesn’t depend on it.
But security through obscurity as a means of pretending that you have a good attitude to security, or as a way of hiding any lapses that your poor practices may have caused?
That sort of attitude is no longer considered acceptable by most if not all cybersecurity experts.
As Hobbs went on to write:
“It cannot be too earnestly urged that an acquaintance with real facts will, in the end, be better for all parties.”
The words above can be applied to all aspects of online life, if not to life in general, but they are especially pertinent to cybersecurity, where ignorance is about as far from bliss as you can get.
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!