If quantum computers turn out to be a damp squib, a triumph of marketing over engineering, does that mean we can give up on the cryptographic changes we’ve been worrying about for years?
In Part 1 of this article, we dug into the fascinating history of quantum computers, looked at the bullish promises currently being made for them, and deconstructed some of the misunderstandings that have attached themselves to quantum computing as if they were facts.
Here, we look at cryptographic cracking more generally, and why we need to be ready to change, whether quantum computers fulfil their spectacular promises or not.
Classical computers, as quantum fans typically refer to them, as though they were relics of a bygone age, have evolved dramatically over the past few decades.
We’ve gone from what we called 8-bit computers, such as the Apple II of the late 1970s, through 16-bit and 32-bit systems such as the IBM PC and its 386-based successors, to today’s laptops and phones, which almost all fall into the 64-bit category.
By this, of course, we don’t mean that the computer has only 64 bits (8 bytes) to work with, but that 64 bits is the basic unit of data that the computer’s central processing unit, or CPU, can work on at a time.
In general, because each bit can have just two different values, usually written as 0 or 1, N bits can be in 2×2×···[N times]···×2×2 different states, written as 2^N, and can therefore represent up to 2^N distinct values.
As N increases, 2^N increases even more dramatically, because it’s an exponential, so the power and speed of computers generally increases along with their so-called ‘bitness’, if only because they can crunch more data in each individual step of machine code.
An 8-bit Apple II computer could, in theory, add together about a million numbers between 0 and 255 each second, but a contemporary 64-bit laptop can typically add together 4 billion numbers a second, each between 0 and a whopping 18,446,744,073,709,551,615, which is 2^64 - 1.
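To check those figures for yourself, a couple of lines of Python will do (a quick sketch of my own, just to make the arithmetic concrete):

```python
# How many distinct values can N bits represent, and what is the largest unsigned value?
for bits in (8, 16, 32, 64):
    print(f"{bits:>2} bits: {2**bits:,} values, biggest value = {2**bits - 1:,}")
```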
Today’s CPUs typically have between 8 and 16 separate sub-processors, or cores, that with careful programming can each achieve that rate of calculation in parallel, acting almost as if they were 8 or 16 individual computers.
In fact, thanks to internal optimisations known as instruction re-ordering and micro-operations, each core can often keep busy with multiple 64-bit calculations in parallel, too.
This means a single CPU can have several cores each finishing off several calculations in each processor cycle.
And with processor speeds greater than 1GHz, or one billion cycles per second, each cycle takes less than a billionth of a second, or a nanosecond.
For example, the Windows assembler code below does 16 billion iterations, each of which handles a 64-bit machine code addition, an XOR, a subtraction, a comparison and a conditional JAE (jump if above or equal to) instruction:
In a Windows virtual machine running on a mid-range Linux laptop, the loop takes just over 7 seconds to run, during which time the VM completes 48 billion integer calculations, 16 billion 64-bit comparisons, and 16 billion code branches that send the program backwards in memory to repeat the loop until the counter gets down to zero.
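To make the inner loop concrete, here’s a rough Python analogue of the logic just described (a sketch of my own, not the assembler itself; an interpreted language executes many machine instructions for every line, so don’t expect anywhere near the same speed):

```python
MASK = (1 << 64) - 1       # Python integers are unbounded, so emulate 64-bit wraparound by masking

count = 1_000_000          # scaled right down from 16 billion, because interpreted code is slow
total = 0

while count != 0:                                  # the CMP/JAE pair: loop until the counter hits zero
    total = (total + 0x9E3779B97F4A7C15) & MASK    # 64-bit ADD (the constant is arbitrary)
    total ^= count                                 # 64-bit XOR
    count -= 1                                     # 64-bit SUB on the loop counter

print(hex(total))
```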
Amazingly, even if we rewrite the program to shuffle its values repeatedly in and out of memory instead (here, we are using a chunk of temporary memory on the stack, indexed by the base pointer register, known as rbp), the on-chip cache in a modern CPU allows commonly-used memory locations to operate at register-like speeds:
The modern laptop completed 16 billion loops in just under 8 seconds, thus taking less than half a nanosecond per loop.
An original Apple II would need to execute many more instructions each time round (not least because 64-bit ADDs and XORs have to be implemented as eight consecutive 8-bit ADDs and XORs), and would probably take at least 50 microseconds per loop.
That makes the laptop about 100,000 times faster with just one core in use, or 1.6 million times faster if you use all its cores to run 16 copies of the loop at the same time.
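The back-of-the-envelope arithmetic behind those ratios goes like this (my own sketch, using the round figures quoted above):

```python
loops = 16_000_000_000                 # iterations completed by the laptop
seconds = 8                            # roughly how long they took

laptop_per_loop = seconds / loops      # about 0.5 nanoseconds per loop
apple2_per_loop = 50e-6                # roughly 50 microseconds per loop on an Apple II

print(laptop_per_loop * 1e9, "ns per loop on the laptop")                            # 0.5
print(round(apple2_per_loop / laptop_per_loop), "times faster per core")             # 100,000
print(round(16 * apple2_per_loop / laptop_per_loop), "times faster with 16 cores")   # 1,600,000
```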
As far as encryption and decryption are concerned, this enormous and continuous speedup over the past 50 years has forced us to re-evaluate the security and crackability of our cryptographic tools many times.
In the 1970s, for instance, secret-key cryptography, also known as symmetric encryption because you use the same key for locking and unlocking your data, was largely standardised around DES, short for data encryption standard, which used keys of 56 bits.
Intriguingly, the early versions of DES could handle keys from 48 bits to 128 bits.
But the largest keys made software implementations on the processors of the day very slow, and made hardware implementations bulkier and more expensive, while the smallest keys were considered too short for safety.
A key length of 56 bits was the final compromise, giving a total of 2^56 different possible keys, although some cryptographers suggested this size was chosen because it would be just about within range of the US intelligence services in an emergency, but out of range for almost everyone else.
By the turn of the millennium, however, 56-bit keys weren’t good enough for anyone, because they were within range even of modestly-sized companies.
To prove the point, the Electronic Frontier Foundation designed and built a specialised standalone computer dubbed Deep Crack for less than $250,000.
With this system, they could crack individual messages in as little as one day by trying out every possible DES key:
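To get a feel for the scale of that job, here’s the sort of back-of-the-envelope calculation involved (my own arithmetic; the real Deep Crack hardware spread the search across many custom chips running in parallel):

```python
keys = 2**56                            # total number of possible DES keys
seconds_per_day = 24 * 60 * 60

print(f"{keys:,} possible keys")                                              # 72,057,594,037,927,936
print(f"{keys // seconds_per_day:,} keys per second to try them all within a day")
```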
As a result, DES was sidelined and replaced with AES, the advanced encryption standard.
AES was selected following an open, global competition that ran for several years, and attracted cipher designs from top cryptographers plus detailed mathematical review and cryptanalysis from experts around the world.
The public competition meant that by the time AES was chosen, with much harder-to-crack key sizes of 128 bits, 192 bits, or 256 bits (32 bytes), there was widespread confidence in and support for the new system.
Even more spectacular changes have taken place in some of the cracking challenges associated with what’s known as asymmetric encryption or public-key cryptography, which we need for securing online transactions where we can’t use the same key at each end of the link.
Public-key encryption was invented in the early 1970s by the intelligence services in Britain, who gave it the unusual name non-secret encryption (NSE), meaning not that it was unsafe for keeping secrets, but that there was no need to meet up ahead of time to share a secret key:
Curiously, the British decided to keep these algorithms to themselves, and not to develop them for their own use, even after two groups of academic researchers in the US independently reinvented NSE a few years later and then patented their algorithms.
The jargon term asymmetric neatly reflects the fact that there are different keys for locking and unlocking, and public-key cryptography reminds us that it’s OK to make one key public, as long as you remember to keep the other key strictly private:
The idea is that your public key can be used by anyone, either to lock up data that only you can later unlock, making it safe to send even over a network that’s being snooped, or to verify a digital signature that only you could have created before you sent the data, or both.
As you can imagine, a system of this sort needs a mathematical basis that makes it comparatively easy to compute matching pairs of public and private keys, or else the algorithm would be unusable.
But it must also be too hard to work backwards to figure out the private key from the public key, or else the algorithm would be insecure.
The first widely-used public-key encryption system was RSA, a name formed from the initials of the surnames of Ron Rivest, Adi Shamir, and Leonard Adleman, who came up with it.
RSA’s solution was to make use of arithmetic involving prime numbers, integers greater than 1 that can only be divided evenly by themselves and by 1.
Testing whether a number is prime or not can be done quickly using statistical methods, so it’s comparatively easy to generate two large prime numbers P and Q and multiply them together to produce what’s called a semiprime, N, meaning that it has exactly two prime factors.
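To make that concrete, here’s a minimal sketch in Python (my own illustration, using the well-known Miller-Rabin statistical test; the function names are mine, and real RSA key generation adds many extra checks):

```python
import random

def is_probably_prime(n, rounds=40):
    """Miller-Rabin: a statistical test that quickly spots composite numbers."""
    if n < 2:
        return False
    for small in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % small == 0:
            return n == small
    d, s = n - 1, 0
    while d % 2 == 0:               # write n-1 as d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # definitely composite
    return True                     # almost certainly prime

def random_prime(bits):
    """Pick random odd numbers of the right size until one passes the test."""
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probably_prime(candidate):
            return candidate

P = random_prime(512)
Q = random_prime(512)
N = P * Q                           # a 1024-bit semiprime: easy to make, hard to split
print(N)
```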
Likewise, it’s easy to test that N is not prime…
…but that’s not the same as finding out the actual factors P and Q, a task that gets exponentially harder as the numbers involved get bigger.
You can therefore tell everyone N, so they can use it as your public key, safe in the knowledge that the only currently known way they could work backwards to extract your private key (P,Q) would be to split N back into P and Q by factoring it.
The naive way to do so (there are some massive speedups available, but the underlying algorithmic concept remains much the same) is to try dividing N by all possible factors up to the square root of N, until you find a magic number that divides N evenly, which tells you the value of P, and obviously reveals Q at the same time, because if N = PQ then N/P = Q:
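A minimal sketch of that naive approach, in Python (my own code, fine for toy numbers but utterly hopeless at RSA scale):

```python
import math

def naive_factor(n):
    """Try every possible factor up to the square root of n."""
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            return p, n // p        # found P, and Q = N / P comes for free
    return None                     # no factor found: n is prime (or 1)

# Two small primes multiplied together factor almost instantly...
print(naive_factor(104723 * 104729))          # (104723, 104729)

# ...but for a 2048-bit N, the loop would need more iterations than there
# are protons in the observable universe, so it would simply never finish.
```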
However, as we mentioned above, an X-bit number can represent 2^X different values, so a 2048-bit RSA key, which is the minimum size recommended today, has a square root that’s 1024 bits long.
Even if you only needed to test one in every googol (1 followed by 100 zeros, or a 333-bit binary number) of those 1024-bit numbers to see if they were the elusive factor P, you’d still need to check out more than 2^691 different possible factors.
And 2^691 is not only greater than the number of grains of sand on every beach on earth, or of water molecules in all our oceans, but far, far larger than the number of protons in the observable universe itself.
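Those mind-boggling comparisons are easy to check (a quick calculation of my own, using the commonly-quoted estimate of roughly 10^80 protons in the observable universe):

```python
import math

googol = 10**100
print(math.floor(math.log2(googol)) + 1)      # 333: bits needed to write a googol in binary

candidates = 2**(1024 - 333)                  # "one in a googol" of the 1024-bit possibilities
protons = 10**80                              # commonly-quoted estimate for the observable universe

print(candidates > protons)                                 # True
print(math.floor(math.log2(candidates // protons)))         # still about 425 doublings bigger
```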
Indeed, back in 1977, when RSA was new, the late, great mathematical populariser Martin Gardner wrote in Scientific American:
“Rivest estimates that the running time required [to crack the 125-digit or 126-digit product of two 63-digit primes] would be about 40 quadrillion years!”
That’s about 350 million million million hours.
A few days ago, however, I generated two 63-digit primes of my own, multiplied them to create my own RSA-style semiprime N, and used a popular open-source factoring toolkit called CADO-NFS to attack the number using only my own laptop.
The process took under two hours, thanks to multiple cores, much faster instruction cycle times, more bits-per-cycle of processing, and four decades of research into faster algorithms for integer factoring:
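For the record, the speedup implied by that little experiment is easy to estimate (my own rough arithmetic, based on the figures above):

```python
rivest_estimate_years = 40 * 10**15                      # "about 40 quadrillion years"
hours_per_year = 365.25 * 24
rivest_estimate_hours = rivest_estimate_years * hours_per_year   # about 3.5 x 10^20 hours

laptop_hours = 2                                         # CADO-NFS finished in under two hours
print(f"roughly {rivest_estimate_hours / laptop_hours:.1e} times faster")   # close to 2 x 10^20
```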
Back in 1977, Rivest recommended 200-digit (664-bit) numbers as RSA semiprimes, or 100-digit (332-bit) values for P and Q, to give plenty of ‘just in case’ cybersecurity headroom on top of the 125-digit size he’d already tagged as uncrackable within the lifetime of the universe.
Today, 1024-bit or 1536-bit primes, giving 2048-bit or 3072-bit keys, are recommended by the US National Institute of Standards and Technology (NIST), to ensure that we all stay ahead of the game.
But what if quantum computers, which we discussed in Part 1, provide even more dramatic and sudden increases in the cracking speed that attackers can bring to RSA keys?
As we described last week, a quantum algorithm devised in 1994 by Peter Shor, and named after him, could theoretically make a small group of selected mathematical tasks exponentially faster to solve.
Shor’s algorithm just happens to cover the very problem we touched on above: splitting unimaginably huge semiprime numbers N into their factors P and Q, thereby theoretically undermining RSA encryption.
If we suddenly needed to increase the size of our RSA keys to stay ahead of the attackers, for example by making them 16 times bigger, our encryption code would run very much more slowly and we’d need more network bandwidth when setting up encrypted connections, but we could probably take the change in our stride.
But if we needed to increase the size of our keys in an exponential way, for example by making them 2^16 times bigger (a 65,536-fold increase), we’d be in trouble.
We’d need to send multi-megabyte keys across the network, and connection setup would grind to a halt, because generating new RSA keys for fresh transactions takes longer and longer as the keys get larger, at least in part because prime numbers become sparser as they get bigger, so you need more and more tries to find one at random.
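The prime number theorem puts rough figures on that sparsity: near a number N, about 1 in every ln(N) integers is prime, so the expected number of random tries climbs steadily with key size (a quick sketch of my own):

```python
import math

# Prime density near an n-bit number is roughly 1 / ln(2^n), by the prime number theorem
for bits in (1024, 1536, 16384):              # today's recommended prime sizes, plus a huge one
    expected_tries = round(bits * math.log(2) / 2)   # halved: we only ever test odd candidates
    print(f"{bits}-bit primes: roughly 1 odd candidate in {expected_tries} is prime")
```

And each individual test gets slower as well, because a probabilistic primality check on a b-bit candidate boils down to modular exponentiation, which costs on the order of b^3 bit operations with schoolbook arithmetic.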
If, and it is admittedly still no more than an ‘if’, quantum computers capable of cracking today’s RSA and elliptic curve encryption should appear in the next 47 years, we won’t simply be able to keep switching to bigger keys, in the way that we moved from a recommended 664-bit key size in 1977 to 2048-bit or 3072-bit keys in 2024.
We’ll need new classes of cryptographic algorithm that are based on different types of ‘hard-to-reverse’ mathematics, so that Shor’s algorithm, and any other fast quantum algorithms that might be devised in the interim, can’t be used as leverage against them at all.
And even if quantum computers that work reliably enough to pull off this sort of exponentially faster cracking can’t be built, we may nevertheless find ourselves facing abrupt improvements in cracking speeds anyway.
Without any qubits or quantum algorithms, my mid-range ‘classical computing’ laptop offered me an RSA cracking speed that was 200 million million million times faster than Ron Rivest’s 1977 estimate, just by using the freely available CADO-NFS factoring software.
French quantum researcher Xavier Waintal has published a fascinating paper entitled The Quantum House Of Cards, in which he explains why quantum cracking computers might never actually become fast or reliable enough to live up to the hype that marketing departments and the media have grown accustomed to publishing.
After all, a computer that is trillions of times faster algorithmically, but that takes thousands or millions of times longer for each computational cycle, that needs millions or billions of repeated calculation attempts to make up for its unreliability in the face of even minuscule amounts of electronic and mechanical interference, that takes up thousands of times more space, and that consumes hundreds of times as much energy, is unlikely to get you ahead in practice, even if it is promising in theory.
Despite his scepticism, however, Waintal reminds us that:
“[A] common misconception is that [some of the algorithms possible on] a quantum computer [are] necessarily exponentially difficult to calculate on classical hardware. In reality, [they are] at most exponentially hard. There are almost always hidden structures that scientists learn to exploit to speed up the calculations. The number of problems that were apparently exponentially difficult [on regular computers] and that were solved [anyway] is quickly growing.
It might very well be that some of the promises made for quantum computers will actually be kept […] by classical algorithms running on classical hardware. One could even argue that the real goal of physics is not to calculate this or that number but to unveil those hidden structures.”
Simply put: get ready anyway!
Fortunately, the computer science and cryptographic community has been busily expanding our available range of public-key encryption algorithms for many years already.
The primary motivation for doing so is a double risk posed by quantum computing: firstly, that the technology might suddenly make cryptographic attacks feasible against confidential data right at the time we share it; and secondly, that personal data with a long lifetime that is snooped on today and stored for later cryptanalysis might not stay secure for as long as we hoped.
The secondary motivation, as Waintal suggests above, for inventing a range of new and different encryption systems is that it means we don’t have all our cryptographic eggs in one mathematical basket.
Both of today’s widespread public-key encryption standards, namely RSA and elliptic curve cryptography, are at risk from quantum computers running Shor’s algorithm.
The reason they are simultaneously at risk, however, is not really a matter of quantum computing, but rather that the key to cracking both types of system (if you will pardon the pun) is finding any sort of algorithm that can handle the complexity of calculating what are known as discrete logarithms.
Loosely put, this represents a mathematical vulnerability that is shared by both systems.
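A discrete logarithm is the ‘reverse gear’ of modular exponentiation: computing g^x mod p is quick, but recovering x from the result is not, at least with the best classical algorithms known today. A tiny sketch of my own:

```python
p = 2**255 - 19            # the prime behind Curve25519, borrowed here simply as a large prime modulus
g = 3
x = 0x1F2E3D4C5B6A79881726354453647586   # the 'private' exponent, chosen arbitrarily

y = pow(g, x, p)           # fast: a few hundred modular multiplications
print(hex(y))

# Going backwards, i.e. recovering x from g, p and y, is the discrete logarithm problem.
# At the sizes used in real-world cryptography, no known classical algorithm can do it
# in any feasible amount of time.
```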
In other words, whether it’s a quantum computing breakthrough or a brand-new classical algorithm faster than the best cracking techniques we currently know about, it’s sensible to have a range of alternative encryption systems that we can adopt if needed.
That’s why the US law that we wrote about last week not only explicitly declares that “a strategy for the migration of information technology of the Federal Government to post-quantum cryptography [PQC] is needed,” but also demands that we “prioritize developing applications, hardware intellectual property, and software that can be easily updated to support cryptographic agility.”
We already have an official list of new algorithm types we can use for public-key cryptography, adopted not in a hurry but as the thoughtful outcome of a lengthy, global, public competition run by NIST.
The process that NIST followed was similar to the one used to decide which algorithms to recommend for replacing DES when its cryptographic strength began to expire.
Three algorithmic standards have already been prepared, known as FIPS 203, FIPS 204 and FIPS 205, with the daunting official names Module-Lattice-Based Key-Encapsulation Mechanism, Module-Lattice-Based Digital Signature, and Stateless Hash-Based Digital Signature respectively.
Those names are officially, and mercifully, abbreviated to ML-KEM, ML-DSA, and SLH-DSA, although you will still see and hear them referred to by the delightful and intriguing names they were given by their inventors when they were originally submitted to NIST’s competition back in the late 2010s: CRYSTALS-Kyber, CRYSTALS-Dilithium, and SPHINCS+.
The burning question, of course, is, “Should we all switch our software to these new systems right away, given that they’re already available?”
On the one hand, these ‘new’ algorithms have been under public development and scrutiny by global experts for at least eight years since NIST’s PQC competition started, so they’re not entirely new and untrusted.
On the other hand, one of the algorithms, known as SIKE, seemed strong for many years and made it into the final round of NIST’s competition, but suddenly came undone in 2022 when cryptographers figured out a way to crack the supposedly quantum-safe algorithm in under an hour using a regular laptop.
Also, because these PQC algorithms are mathematically very different from today’s RSA and elliptic curve cryptosystems, they generally require larger public keys, or larger digital signatures, or both.
This adds various complications to setting up secure network connections, whose protocols have evolved around the convenient fact that today’s encryption keys generally fit neatly into a single, standard network packet of about 1500 bytes, which is handy for both protocol design and operational efficiency.
If you’re a busy network provider or online business that suddenly needs to exchange several kilobytes of data with the other end, split across multiple network packets, just to set things up to the point that you can securely send even a single byte of your own data to complete a transaction, the change might not be simple or cost-free.
Nevertheless, it can be done, as big internet bandwidth providers such as Cloudflare have described, and as widely-used security products such as OpenSSH, the most popular remote login tool in the world, have shown.
Today’s early adopters of post-quantum cryptography have generally followed what’s known as a hybrid approach, for example by baking both elliptic curve key agreement and ML-KEM into their code, tolerating the additional complexity until we’re all satisfied that the new algorithms are robust and reliable, in return for a system from which either part can abruptly be dropped if needed.
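Conceptually, the hybrid trick is to run both key agreements and feed both shared secrets into a key derivation function, so that the final session key stays safe as long as either component holds up. Here’s a minimal sketch using the Python cryptography package for the X25519 half; the ML-KEM half is a clearly-labelled placeholder, because the point here is the combination step, not any particular PQC library:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

def mlkem_encapsulate():
    """Placeholder for ML-KEM encapsulation: a real library would return a
    ciphertext for the peer plus the shared secret it encapsulates."""
    return os.urandom(32)

# Classical half: X25519 key agreement (both keys generated locally here, purely for illustration)
ours, theirs = X25519PrivateKey.generate(), X25519PrivateKey.generate()
classical_secret = ours.exchange(theirs.public_key())

# Post-quantum half (stubbed out here)
pq_secret = mlkem_encapsulate()

# Combine: the session key stays secret if EITHER input stays secret
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid key agreement sketch",
).derive(classical_secret + pq_secret)

print(session_key.hex())
```

In real deployments, of course, the two halves are combined according to carefully specified standards, not ad hoc as in this sketch.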
As Cloudflare puts it:
“There is a lot of trust in the non-post-quantum security of [the elliptic curve algorithm] X25519. Although we are comfortable in the security of ML-KEM [in its 512-bit flavour] today, over the coming decades cryptanalysis could improve. Thus, we’d like to keep a margin for now.
The inclusion of X25519 has two reasons. First, there is always a remote chance that a breakthrough renders all variants of ML-KEM insecure. In that case, X25519 still provides non-post-quantum security, and our post-quantum migration didn’t make things worse. More important is that we do not only worry about attacks on the [ML-KEM] algorithm, but also on the implementation, [which is still comparatively new].”
Simply put: the sooner you start, the longer you’ll have to make the switch safely.
Simply put: learn how to be well-placed for cybersecurity changes in general, and take care not to get distracted by the specific issues around the so-called ‘quantum apocalypse’.
Paul Ducklin is a respected expert with more than 30 years of experience as a programmer, reverser, researcher and educator in the cybersecurity industry. Duck, as he is known, is also a globally respected writer, presenter and podcaster with an unmatched knack for explaining even the most complex technical issues in plain English. Read, learn, enjoy!
Image of Deep Crack circuit board licensed under CC-BY-3.0-US thanks to Matt Crypto and Ed g2s via Wikimedia.
Image of old-school cipher machine (the infamous Lorenz Schlüssel-Zusatz 42, known to British codebreakers as Tunny) thanks to Matt Crypto via Wikipedia.