Security through Obscurity: When secrecy alone is not enough (Part 2 of 2)


Paul Ducklin

Going public with bugs, vulnerabilities and other security research items, instead of suppressing them, has stirred up controversy for hundreds of years.

But security through obscurity, as it’s known, is a risky system to rely upon, because once your secret is out, you can’t turn it back into a secret again.  

Keeping things quiet

In Part 1, we described some of the things that fall under the category of Security through Obscurity, such as:

  • Keeping quiet about which operating systems and software you use on your network.
  • Listening for incoming connections on non-standard TCP ports.
  • Not engraving your home address on the keyring to which your front door key is attached.

As we argued last week, it’s hard to see how any of these obscurities make your security worse if you do them in addition to some sort of security by design.

On the other hand, it’s easy to let the pursuit of obscurity distract you from the importance of taking security measures that remain effective even after the obscurity is stripped away.

And obscurity that is based simply on “let’s hope no one else figures it out” is easily blown apart.

Let’s start with the operating systems you use: these are easy to guess, because they almost certainly include some or all of Windows, Linux and macOS, together with a mixture of Android and iOS on your mobile devices.

In any case, the software you use often gives you away, blurting out details about itself and the system it's running on whenever you connect outwards to other networks, or accept connections from other people.

For example, here’s the sort of announcement that Firefox and Chromium provide by default at the start of every request you make to every web server you visit:
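An illustrative example of such a request (the exact version strings vary by browser and release, so treat this as representative rather than an exact capture):

```http
GET /index.html HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
```

Note how the `User-Agent` header alone reveals the operating system, the browser family and the browser version, before you've asked for anything at all.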

And here’s a sample reply from a website, letting visitors know it’s running on an Elastic Cloud Server:
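A representative reply of that sort might look like this (headers trimmed; the `Server` value shown here is illustrative, not a capture from any specific site):

```http
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Server: ECS (dcd/7D2A)
```

The `Server` header is the giveaway: many servers and cloud front-ends announce their make, model or hosting platform by default in every response.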

Obscurity is hard to maintain, and it only needs to be de-obscured once to be lost forever.

Obscurity laid bare

In Part 1, we revisited the salutary tale of the DVD industry's Content Scramble System (CSS), a cryptographic anti-piracy and region-locking algorithm that was never disclosed for public scrutiny, but instead relied on being kept as a trade secret.

The confidential code was extracted from a legally purchased DVD player, circulated widely on the internet, and was actively used in free software called DeCSS, which made it possible for users to back up their legally purchased DVDs, to watch them in software on open-source operating systems, and to analyse and critique the now-unsecret code.

The CSS Association made a lengthy (and understandably unpopular) attempt to use the legal system, including the criminal courts, to ‘re-obscure’ the published code, in what seemed to be an effort to bury it once more under a blanket of trade secrecy.

As we found last week, that simply didn't work, and arguably served only to draw much more attention not only to the availability of the leaked code, but also to the existence of the DeCSS tools, which admittedly did simplify DVD piracy.

A cryptography enthusiast called Phil Carmody went out of his way to convert the very source code that the CSS Association thought should be illegal to publish, perhaps even to possess, into a mathematically interesting prime number that could hardly be considered objectionable, let alone unlawful:

Carmody’s idea was to draw attention to the technical folly of relying on obscurity for online safety and security, especially in the field of cryptography.

You can’t ban a number, at least in any general sense, and given that any source code file can be represented as a number, it doesn’t make a lot of sense to try to demand that leaked source code be turned back into a secret once it has become known.
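The claim that any source code file can be represented as a number is easy to demonstrate: a file is just a sequence of bytes, and any sequence of bytes can be read as a single (very large) base-256 integer. A minimal sketch in Python, using a made-up scrap of source code as the input:

```python
# Any file is just a sequence of bytes, and any sequence of bytes
# can be interpreted as one very large base-256 number.
source = b'#include <stdio.h>\nint main(void) { return 0; }\n'

# Read the bytes as a single big-endian integer...
as_number = int.from_bytes(source, "big")

# ...and convert that integer straight back into the original bytes.
back = as_number.to_bytes((as_number.bit_length() + 7) // 8, "big")

assert back == source  # the round trip is lossless
```

Carmody's twist was simply to find a representation of the DeCSS code whose corresponding number also happened to be prime, making it a legitimate object of mathematical interest in its own right.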

Kerckhoffs and Hobbs

Carmody’s prime number project wasn’t the first time that security through obscurity had been thrown into the cryptographic spotlight.

That honour goes to Dutch cryptographer Auguste Kerckhoffs, a polymath whose day job was Professor of German at a French university.

In a seminal 1883 text entitled La Cryptographie Militaire (Military Cryptography), published in two parts, he offered the strong opinion that in any security system involving any sort of secret key, the secrecy of the system should depend only on the secrecy of that key.

The secrecy of the system itself, or of the algorithm, or of the device used to perform the encryption, should not be required.

After all, keys are designed so that everyone can have their own, and so they can be changed as needed; in contrast, the system itself is designed to be the same for everyone, as a matter of convenience, reliability and practicability.

Kerckhoffs argued that it was dangerous to assume that the enemy (or, in the CSS case, a handful of inquisitive youngsters) would be unable to discover how your system worked.

In fact, a wise security designer should assume that the Bad Guys have a copy of it right from the start.

Kerckhoffs's advice, now universally accepted and followed by reputable cryptographers, was to design the system so that its security did not depend on the sort of secrecy that would be too complex to arrange in the first place, and impossible to maintain thereafter.

But security through obscurity isn’t relevant only to cryptography.

Thirty years earlier, in 1851, renowned American locksmith Alfred Hobbs famously travelled to the Great Exhibition at Crystal Palace in London, England.

Hobbs successfully and dramatically picked locks from the British Empire’s best-known lock-makers, creating quite a social stir (and not coincidentally using his sense of showmanship to close numerous sales).

Hobbs firmly believed that keeping known vulnerabilities secret was a bad idea, even though his detractors claimed that revealing them would do little or nothing to help customers, who didn’t need to pick their own locks because they had the key, but would enable a wave of crime by teaching crooks how to break in.

Hobbs subsequently wrote, in a book entitled Locks and Safes – The Construction of Locks, a surprisingly prescient introduction arguing that:

“Many well-meaning persons suppose that the discussion respecting the means for baffling the supposed safety of locks offers a premium for dishonesty, by showing others how to be dishonest. This is a fallacy. Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery. […]

[S]urely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to be the first to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance.”

Responsible disclosure

Hobbs was well ahead of his time, because in the modern era of cybersecurity we have ended up in a position of which he would surely approve, given that a significant number of experts have embraced what's known as the responsible disclosure of security holes and vulnerabilities.

The idea is that the discoverers of security bugs, as well as asking to be paid finders’ bounties, are allowed to go public with the details of their work, and thereby to blow their own trumpets and to advertise their bug-hunting abilities…

…but they should agree a reasonable ‘silent period’ with the provider of the affected product or service first.

During this silent time, they give the provider full and frank access to their findings so far, in return for an agreed date on which they can talk as openly as they like about the issues involved, and indeed be the first to do so.

The theory is simple: vendors get a fair chance to fix problems that, unlike in Hobbs’s time, often aren’t yet known to the rogues, but unscrupulous or incompetent vendors don’t get to sweep issues under the carpet.

Vendors are kept honest, the proponents of responsible disclosure insist, because there is a clear date agreed on which information about the bug will be revealed on the grounds that it is "necessary to give fair play to those who might suffer by ignorance," as Hobbs unbeatably described it more than 170 years ago.

In summary

Security through obscurity, then, is fine if it represents nothing more than a disinclination on your part to tell everyone exactly what you're doing, assuming that your underlying security doesn't depend on it.

But security through obscurity as a means of pretending that you have a good attitude to security, or as a way of hiding any lapses that your poor practices may have caused?

That sort of attitude is no longer considered acceptable by most if not all cybersecurity experts.

As Hobbs went on to write:

"It cannot be too earnestly urged that an acquaintance with real facts will, in the end, be better for all parties."

The words above can be applied to all aspects of online life, if not to life in general, but they are especially pertinent to cybersecurity, where ignorance is about as far from bliss as you can get.

What to do?

  1. Review your own systems for any areas where your security depends on obscurity. You are not obliged to make your security arrangements public, but you should assume that they are widely known anyway, not least because many people will already be privy to them. Unlike cryptographic keys, they cannot rapidly be changed to invalidate that knowledge.
  2. Embrace cybersecurity agility. Adopt working practices that make it easy and quick to change processes and procedures that you discover you can no longer trust. If you figure out that you’re doing something risky, outdated or ineffective, assume that the cybercriminals have figured it out, too.
  3. Patch early, patch often. Even when patches arrive in advance of the rogues knowing how to exploit the vulnerabilities they fix, you should assume that the patches themselves will attract the interest and attention of cybercriminals. Hobbs’s assertion that the rogues are always ahead no longer applies, but that is no excuse for letting yourself fall behind when you don’t need to. Consider bringing in a Managed Security Service Provider who will help you ensure that you really have patched, and not neglected some far-flung part of your IT estate.
  4. Practise how you will respond if you suffer a breach of your own. You will often be compelled by modern regulations to disclose breaches anyway, but any attempt to follow a path of obscurity instead of open honesty will provoke more questions amongst your customers than it will answer. Remember that trust must be earned; it cannot be assumed.
  5. Avoid doing business with companies that have a history of poor or disingenuous cybersecurity disclosures. Assume any organisation that tolerates obscurity as an official security response is either trying to suppress bad news that it already knows and ought to be telling you, or trying to avoid telling you that it doesn’t actually know what happened but doesn’t want to admit it.

More About Duck

Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
