Is hybrid encryption the answer to post-quantum security?

If you wear suspenders, do you need a belt? If you have one parachute, do you need a reserve? Many CISOs, security teams, and cryptographers are asking a similar question about encryption algorithms when they choose the next generation of protocols. Do users need multiple layers of encryption? Do they want the complexity and cost, too?

What is hybrid encryption?

Many discussions of “hybrid encryption” begin with some debate about just what the term means. In general, hybrid encryption refers to the combined use of public-key (asymmetric) cryptography with symmetric encryption. Systems that combine multiple algorithms are common, and mathematicians have been marrying different algorithms to leverage their respective advantages for some time. For instance, many public-key systems use the public-key algorithm only to scramble a small symmetric key that is then used to encrypt the data. Symmetric algorithms like AES are generally dramatically faster, and the hybrid approach captures that speed without giving up the key-distribution advantages of public-key cryptography.
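As a rough illustration of that classic pattern, here is a minimal sketch using Python’s widely available `cryptography` package (the key names and message are invented for the demo): the slow public-key algorithm, RSA-OAEP, touches only a small random key, while AES-GCM does the heavy lifting on the data.

```python
# Minimal sketch of classic hybrid encryption: a public-key algorithm
# (RSA-OAEP) protects a random symmetric key, and a fast symmetric
# cipher (AES-256-GCM) encrypts the actual data.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Recipient's key pair (generated here just for the demo).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
public_key = private_key.public_key()

# Sender: encrypt the bulk data with a fresh AES key...
data_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(data_key).encrypt(nonce, b"the actual message", None)

# ...then wrap only the small AES key with the slow public-key algorithm.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(data_key, oaep)

# Recipient: unwrap the AES key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"the actual message"
```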

The topic is receiving plenty of attention now because of the rollout of the post-quantum algorithms developed through NIST’s post-quantum cryptography (PQC) competition. Some wonder if the new approaches are trustworthy, and they’re hoping that some kind of hybrid approach will bring more assurance during any transition.

Adding layers is not a new solution. When Stuart Haber and Scott Stornetta designed a time-stamping service that became the company Surety, they used two different hash functions in parallel. They also designed the protocols so that newer, better hash functions could be added later. “In the face of unexpected algorithmic advances in cryptanalysis, we wrestled with the problem of how best to swap in a new hash function in an existing widely deployed authentication system,” Haber tells CSO. “We didn’t want to stake everything on just one algorithm.”

Google is already proceeding down the path of using hybrid algorithms. Last year, Chrome and some servers started negotiating session keys using a combination of two algorithms:

  • X25519, a well-established elliptic-curve key agreement algorithm already in wide use in TLS
  • Kyber-768, one of the post-quantum key encapsulation mechanisms to emerge from NIST’s competition

In a recent announcement, Devon O’Brien, technical program manager for Chrome security, said that the extra Kyber-encapsulated key material added about 1,000 bytes to each TLS ClientHello message, an overhead that was not an issue for the “vast majority” of users.
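The general pattern behind such a hybrid handshake can be sketched as follows. This is an illustration, not Chrome’s actual implementation: the X25519 half uses Python’s `cryptography` package, while the Kyber half is stubbed with placeholder bytes because Python ML-KEM APIs vary across libraries. The essential idea is that the session key is derived from both shared secrets, so both algorithms would have to fail before the key leaks.

```python
# Sketch of a hybrid key-agreement combiner: derive the session key from
# BOTH a classical X25519 shared secret and a post-quantum one.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

# Classical part: ordinary X25519 Diffie-Hellman.
client_priv = X25519PrivateKey.generate()
server_priv = X25519PrivateKey.generate()
classical_secret = client_priv.exchange(server_priv.public_key())

# Post-quantum part: placeholder bytes standing in for the shared secret
# a Kyber/ML-KEM encapsulation would produce (an assumption, not a real API).
pq_secret = os.urandom(32)

# Combine: concatenate both secrets and run them through a KDF. Both
# inputs influence the output, so either algorithm alone can fail safely.
session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,
    info=b"hybrid handshake demo",
).derive(classical_secret + pq_secret)
```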

Post-quantum cryptography solutions not foolproof

The post-quantum rollout has been both exciting and frustrating. NIST extended its deadlines multiple times before settling on potential solutions. It has also added new rounds to encourage the development of more approaches in case some of the earlier techniques prove insecure. Anyone who expected the competition to produce one perfect solution to carry us safely into the future was disappointed. The competition easily produced as many questions as answers.

The danger of choosing a new algorithm with hidden flaws isn’t a theoretical threat. Several algorithms that showed plenty of promise and made it into the final rounds turned out to be easily broken. The Rainbow signature scheme and the Supersingular Isogeny Diffie-Hellman (SIDH) protocol, for instance, have both been compromised.

Hybrid encryption a hedge against weaknesses

If hidden weaknesses are always a potential threat, then hybrid solutions seem like an ideal approach. Instead of swapping out a perfectly good but aging algorithm like RSA or AES for a new but barely tested one, why not use both? Or even three or more? Why not compute several signatures or encrypt the data again and again?

Whitfield Diffie, a cryptographer who’s worked on developing some of the widely used public-key encryption algorithms, believes in the hybrid approach. “It’s indispensable for a transition to PQC,” he tells CSO.

The downsides of hybrid encryption

The critics, though, offer these reasons why hybrid solutions may not be ideal:

  • Increased complexity: They require at least twice as much code to write, debug, audit, and maintain.
  • Decreased efficiency: They incur at least twice the computational overhead when encrypting or decrypting any data or session key.
  • Inconsistent structures: The algorithms are not drop-in replacements for each other. Some signature algorithms, for example, have single-use keys while others don’t.

“It’s hard enough to implement one standard correctly. Using two in parallel opens up more risk of implementation errors or creating new types of attacks,” Steve Weis, a principal engineer at Databricks, tells CSO. “Also, performance still matters in many contexts where incurring two times or more the computation costs and payload size is a non-starter.”

One of the most prominent critics of the new approach is the National Security Agency (NSA), the United States government entity with a longstanding interest in developing secure encryption. Over the last few years, the NSA has discouraged the push for hybrid algorithms, citing many of the reasons given above. It has been joined by GCHQ, the British agency that often works with the Americans through the alliance loosely known as the “Five Eyes.”

“Do not use a hybrid or other non-standardized QR solution on NSS [national security system] mission systems,” the NSA wrote in an FAQ on the transition to post-quantum algorithms. “Using non-standard solutions entails a significant risk of establishing incompatible solutions.”

The NSA, though, has a Janus-faced mission. On one side, they’re responsible for ensuring that the country’s communications are secure. On the other side, they also routinely break codes to gather intelligence. Many wonder which mission the NSA may be serving.

Other national cryptographic agencies, like France’s ANSSI and Germany’s BSI (Federal Office for Information Security), are taking a different approach. They encourage the assurance that comes from using multiple layers. “The secure implementation of PQC mechanisms, especially with regard to side-channel security, avoidance of implementation errors and secure implementation in hardware, and also their classical cryptanalysis are significantly less well studied than for RSA- and ECC-based cryptographic mechanisms,” the BSI concluded. “Their use in productive systems is currently only recommended together with a classic ECC- or RSA-based key exchange or key transport.”

For some internal classified work, the NSA also pushes multiple layers of encryption. Their guidelines for using commercially available software in classified environments frequently encourage using multiple “layers” of independent packages.

How much security does hybrid encryption provide?

One of the biggest debates is over how much security hybridization actually offers. Much depends on the details, and algorithm designers can take any number of approaches with different benefits. There are several models for hybridization, and not all the details have been finalized.

Encrypting the data first with one algorithm and then with a second combines the strength of both, essentially putting a digital safe inside a digital safe. Any attacker would need to break both algorithms.
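A minimal sketch of that cascade idea, assuming Python’s `cryptography` package and two independently generated keys (all names here are illustrative):

```python
# Sketch of cascade ("safe inside a safe") encryption: the plaintext is
# encrypted with AES-256-GCM, and the resulting ciphertext is encrypted
# again with ChaCha20-Poly1305 under an independent key. An attacker
# must defeat both ciphers to reach the plaintext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

key_inner = AESGCM.generate_key(bit_length=256)
key_outer = ChaCha20Poly1305.generate_key()
nonce_inner, nonce_outer = os.urandom(12), os.urandom(12)

inner = AESGCM(key_inner).encrypt(nonce_inner, b"secret message", None)
outer = ChaCha20Poly1305(key_outer).encrypt(nonce_outer, inner, None)

# Decryption peels the layers off in reverse order.
middle = ChaCha20Poly1305(key_outer).decrypt(nonce_outer, outer, None)
plain = AESGCM(key_inner).decrypt(nonce_inner, middle, None)
assert plain == b"secret message"
```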

If the output of the first hash function is fed into a second, different hash function (say g(h(x))), finding a collision may not get any harder, at least if the weakness lies in the first function. If two inputs to the first hash function produce the same output, that same output is fed into the second hash function, producing a collision for the hybrid system: g(h(x_1)) = g(h(x_2)) whenever h(x_1) = h(x_2).
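A toy demonstration makes the point concrete. The `weak_hash` below is deliberately broken (one byte of SHA-256) so collisions are easy to find by brute force; composing it with a strong outer hash does nothing to repair them:

```python
# Toy demonstration that composing hash functions does not repair a
# collision in the INNER function.
import hashlib
from itertools import count

def weak_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()[:1]   # stand-in for a broken h

def strong_hash(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()       # stand-in for g

# Brute-force two distinct inputs that collide under the weak inner hash.
seen = {}
for i in count():
    x = str(i).encode()
    h = weak_hash(x)
    if h in seen:
        x1, x2 = seen[h], x
        break
    seen[h] = x

# The collision survives the outer hash: g(h(x1)) == g(h(x2)).
assert x1 != x2
assert weak_hash(x1) == weak_hash(x2)
assert strong_hash(weak_hash(x1)) == strong_hash(weak_hash(x2))
```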

Digital signatures are also combined differently from encrypted data. One of the simplest approaches is to calculate multiple signatures independently of each other; they can then be verified independently afterwards. Even this basic approach raises many practical questions. What if one private key is compromised? What if one algorithm needs to be updated? What if one signature passes but another fails?
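One way to picture such an “AND” combiner, sketched here with Ed25519 and RSA-PSS from Python’s `cryptography` package (the strict reject-on-any-failure policy is just one possible answer to those questions, not a standard):

```python
# Sketch of a dual-signature combiner: sign the same message with two
# unrelated algorithms (Ed25519 and RSA-PSS) and accept it only if
# BOTH signatures verify.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

ed_key = Ed25519PrivateKey.generate()
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
message = b"document to protect"

sig_ed = ed_key.sign(message)
sig_rsa = rsa_key.sign(message, pss, hashes.SHA256())

def verify_both(msg: bytes, s_ed: bytes, s_rsa: bytes) -> bool:
    """Return True only if every signature verifies (reject on any failure)."""
    try:
        ed_key.public_key().verify(s_ed, msg)
        rsa_key.public_key().verify(s_rsa, msg, pss, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

assert verify_both(message, sig_ed, sig_rsa)
```

Requiring every signature to verify means the compromise of one key cannot forge the pair, but it also means that one broken or retired algorithm blocks verification until it is replaced.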

Cryptography is a complex subject where many areas of knowledge are still shrouded in a deep cloud of mystery. Many algorithms rest upon assumptions that some mathematical chores are too onerous to accomplish but there are no rock-solid proofs that the work is impossible.

Many cryptographers who embrace hybrid approaches are hoping that the extra work more than pays off should a weakness appear. If it’s worth putting in the time to get one layer right, it’s often worth doing it again. High-performance applications can turn the extra layer off, while those that need extra assurance can keep it on.

“We’re stuck with an argument from ignorance and an argument from knowledge,” explains Jon Callas, distinguished engineer at Zatik Security. “It’s taken us decades just to get padding right. You can say RSA [cryptography] is broken, but we don’t know anything about the new algorithms.”
