by Nabeel Yoosuf

The many, many ways that cryptographic software can fail

Breaking cryptographic software via methods other than cryptanalysis

When cryptographic software fails, what’s to blame?

Algorithms?

Cryptography libraries?

Apps incorrectly using those libraries?

Or is it something else entirely?

We rely on cryptographic algorithms and protocols every day for secure communication over the Internet. We’re able to access our bank accounts online because cryptography protects us. We’re able to send private messages to our friends because cryptography protects us. We’re able to buy and sell things using credit cards and Bitcoin because cryptography protects us.

Let me give you a concrete example of this. When you check your email through your favorite browser, the connection between your browser and the email server is secured using the TLS (Transport Layer Security) protocol, so that no one can eavesdrop on your emails or modify them in transit without your knowledge.

In short, without cryptography, the Internet we know today would not be possible. Law and order on the internet depends on cryptography.

But this tool that we all rely upon so heavily is also quite brittle. Our cryptographic software often lets us down. Sometimes it really lets us down.

Have you ever wondered why cryptographic software, including implementations of the TLS protocol, fails over and over again?

According to Veracode’s State of Software Security reports, our cryptographic software is just as vulnerable as it was two years ago.

Veracode ranked cryptographic issues as #2 vulnerability found in apps in 2015
Veracode again ranked cryptographic issues as #2 vulnerability found in apps in 2016

Is all this software failing because of weaknesses in the underlying cryptographic algorithms?

Well, several past attacks (Apple iOS TLS, WD self-encrypting drives, Heartbleed, WhatsApp messages, Juniper’s ScreenOS, DROWN, Android N encryption, and so on) show us that cryptographic software is rarely broken through weaknesses in the underlying cryptographic algorithms. In other words, cryptanalysis is one of the least likely threats to our cryptographic software.

A sketch of the AES algorithm (image credit) AKA why you don’t want to roll your own cryptography.

Have you ever heard of an attacker breaking 256-bit AES encryption to recover the secret hidden within it? I know of no such case. A rough back-of-the-envelope calculation shows why: AES-256 has 2^256, or roughly 1.2 × 10^77, possible keys, so even an attacker testing a billion billion (10^18) keys per second would need on the order of 10^51 years to try them all. (Of course, if you use a vulnerable, obsolete cipher like DES or RC4, cryptanalysis might well help break the software.) So if the culprit isn’t cryptanalysis, then what is it?

Your security is only as good as its weakest link.

Well, it’s everything but cryptanalysis. In other words, cryptanalysis is not the weakest link of cryptographic software. Bad actors use numerous other weak links to break cryptographic software.

Cause of failure #1: bugs in crypto libraries

One popular example is the Heartbleed bug.


What’s the matter with Heartbleed? This bug (CVE-2014–0160) was introduced by an incorrect implementation of the TLS heartbeat extension in the widely used OpenSSL library (read: 66% of the internet), which web servers use to support TLS. What does this extension do? As the name suggests, it’s a keep-alive feature: one end of the connection sends a payload of arbitrary data, and the other end is supposed to send back an exact copy of that data to prove that all is well.

The bug turned out to be the age-old mistake of calling memcpy() on unsanitized data without a bounds check. The vulnerable OpenSSL implementation did not validate the claimed payload length against the payload actually received. An attacker could lie about the length and get the victim to send back extra bytes from its memory, as shown in the following diagram.

The attacker sends a one-byte payload but sets the length field to 65535; the victim blindly copies 65535 bytes from its memory and sends them back to the attacker.

This in turn allowed attackers to obtain session keys and other secret information (like your username and password) straight from the memory of vulnerable web servers.

Let me show you the code. The patch is essentially a bounds check, added in the fixed version 1.0.1g, as shown below.

====== Vulnerable code =======

/* Enter response type, length and copy payload */
*bp++ = TLS1_HB_RESPONSE;
s2n(payload, bp);
memcpy(bp, pl, payload);

====== Patched code =========

hbtype = *p++;
n2s(p, payload);
if (1 + 2 + payload + 16 > s->s3->rrec.length)
    return 0; /* silently discard per RFC 6520 sec. 4 */
pl = p;

Lesson learned: Always bounds check a buffer before copying from it, and never trust a length supplied by the other side. Sanitizing inputs is vital for stopping bad data from getting into your system.
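To make that concrete, here is a minimal sketch in C of a heartbeat-style echo handler with the check in place. It is not OpenSSL’s actual code; the function name, the record layout (one type byte followed by a two-byte length), and the error handling are assumptions made purely for illustration.

#include <stdlib.h>
#include <string.h>

/* Hypothetical echo handler: copy back exactly the payload the peer sent,
 * but only after verifying that the claimed length fits inside the bytes
 * we actually received. */
unsigned char *echo_payload(const unsigned char *record, size_t record_len,
                            size_t *reply_len)
{
    if (record_len < 3)                      /* 1 type byte + 2 length bytes */
        return NULL;

    size_t claimed = ((size_t)record[1] << 8) | record[2];

    /* The bounds check Heartbleed was missing: never trust a
     * peer-supplied length field on its own. */
    if (3 + claimed > record_len)
        return NULL;                         /* silently discard bad records */

    unsigned char *reply = malloc(claimed ? claimed : 1);
    if (reply == NULL)
        return NULL;

    memcpy(reply, record + 3, claimed);      /* safe: length was validated */
    *reply_len = claimed;
    return reply;
}

The patched OpenSSL code above does essentially the same thing: it recomputes what a well-formed record must contain and silently discards anything that doesn’t add up.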

Cause of failure #2: operating systems and apps

You probably remember Apple’s “goto” bug (CVE-2014–1266) in its SSL/TLS implementation, disclosed in February 2014.

Apple’s code with the “goto” bug:

1  static OSStatus
2  SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
3                                   uint8_t *signature, UInt16 signatureLen)
4  {
5      OSStatus err;
6      …
7
8      if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
9          goto fail;
10     if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
11         goto fail;
12         goto fail;
13     if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
14         goto fail;
15     …
16
17 fail:
18     SSLFreeBuffer(&signedHashes);
19     SSLFreeBuffer(&hashCtx);
20     return err;
21 }

So, what’s the issue here? The extra goto statement on line 12 is unconditional: it always jumps to fail with err still set to zero, so the function reports success without ever running the final check. This makes lines 13 to 16 effectively dead code and bypasses a crucial verification step for SSL/TLS connections on iOS and Mac devices. This simple implementation mistake accepts connections that should be rejected, making them susceptible to man-in-the-middle attacks.
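To see why that stray statement is so dangerous, here is a small, self-contained C sketch that reproduces the same control-flow mistake. The step_one and step_two functions are hypothetical stand-ins for the hash and signature checks, not Apple’s code.

#include <stdio.h>

/* step_one succeeds; step_two would fail, but it never gets the chance. */
static int step_one(void) { return 0; }
static int step_two(void) { return 1; }

int main(void)
{
    int err = 0;

    if ((err = step_one()) != 0)
        goto fail;
        goto fail;   /* the stray line: always runs, skipping step_two */

    if ((err = step_two()) != 0)
        goto fail;

    printf("all checks passed\n");
    return 0;

fail:
    /* err is still 0 here, so the caller is told everything succeeded. */
    printf("verification aborted, err = %d\n", err);
    return err;
}

Because err is still zero when control reaches fail, the program exits with a success status even though step_two, the stand-in for the final signature check, never ran. That is exactly the shape of Apple’s bug.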

I was curious to find out whether implementation bugs in crypto software stem more from the crypto libraries themselves or from the way apps use them. Researchers from MIT analyzed 269 cryptographic vulnerabilities reported in the Common Vulnerabilities and Exposures (CVE) database between January 2011 and May 2014. They found that only 17% of the bugs were in the crypto libraries themselves; the remaining 83% were caused by misuse of crypto libraries by application developers.

But just because the majority of bugs are due to misuse of crypto libraries in apps doesn’t mean that we can just blame app developers and get on with our day.

There could be many reasons behind these statistics on crypto misuse. The crypto libraries themselves may not provide safe defaults, may lack adequate documentation, or may simply be hard to use correctly. Further, many developers have no formal grounding in applying cryptography, even if they are experts at software development itself. All of this can lead to misuse of crypto libraries.
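To make “misuse” a little more concrete, here is a hedged sketch of what using a crypto library on its own terms can look like with OpenSSL’s EVP interface: an authenticated mode (AES-256-GCM) with a fresh random nonce, rather than an unauthenticated mode like ECB chosen because it appeared first in the documentation. The helper name and buffer sizes are my own assumptions and error handling is trimmed; treat it as an illustration of the principle, not a drop-in recipe.

#include <openssl/evp.h>
#include <openssl/rand.h>

/* Hypothetical helper: authenticated encryption with AES-256-GCM.
 * key:        32 bytes
 * iv:         12-byte output buffer (a fresh nonce is generated here)
 * ciphertext: buffer of at least pt_len bytes
 * tag:        16-byte output buffer for the authentication tag
 * Returns the ciphertext length, or -1 on failure. */
int encrypt_gcm(const unsigned char *key,
                const unsigned char *plaintext, int pt_len,
                unsigned char *iv, unsigned char *ciphertext,
                unsigned char *tag)
{
    int len = 0, ct_len = 0;

    if (RAND_bytes(iv, 12) != 1)          /* never reuse a nonce with GCM */
        return -1;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    if (ctx == NULL)
        return -1;

    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, pt_len);
    ct_len = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + len, &len);
    ct_len += len;
    EVP_CIPHER_CTX_ctrl(ctx, EVP_CTRL_GCM_GET_TAG, 16, tag);
    EVP_CIPHER_CTX_free(ctx);
    return ct_len;
}

A library that steered every developer toward something like this by default, rather than presenting it as one of a dozen equally prominent options, would eliminate a good slice of that 83%.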

Lesson learned: always use tools to analyze your code. A dead code analysis tool should have caught this specific case.

Cause of failure #3: bad design

In 2015, researchers uncovered a series of issues in WD self-encrypting drives. There were serious design flaws in their use of cryptographic algorithms. I wrote about this in a previous post. Let me show a couple of flaws here.

WD’s self-encrypting drive architecture

Following best practices, WD did use two levels of keys to encrypt data stored on the drive: a master KEK (Key Encryption Key) and a per-file DEK (Data Encryption Key). Further, they did use a key derivation function to derive the KEK from the user’s password.

But the way they designed the key derivation function itself was totally insecure. They used a fixed salt and a fixed number of iterations, which made it susceptible to precomputed hash table attacks. Attackers could recover keys much faster than pure brute force would allow.

WD’s vulnerable key derivation algorithm
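For contrast, here is a hedged sketch of a KEK derivation along industry-standard lines, using OpenSSL’s PBKDF2 implementation with a per-device random salt and a large iteration count. The function name, salt handling, and iteration count are illustrative assumptions, not WD’s actual design.

#include <openssl/evp.h>
#include <openssl/rand.h>
#include <string.h>

/* Hypothetical helper: derive a key-encryption key from a password.
 * The salt is freshly generated per device and stored alongside the
 * encrypted data; it is not secret, it just has to be unique. */
int derive_kek(const char *password,
               unsigned char *salt, int salt_len,
               unsigned char *kek, int kek_len)
{
    /* A unique random salt defeats precomputed hash tables outright. */
    if (RAND_bytes(salt, salt_len) != 1)
        return 0;

    /* A large iteration count makes every individual guess expensive. */
    return PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                             salt, salt_len,
                             100000, EVP_sha256(),
                             kek_len, kek);
}

With a fixed salt, an attacker can build one lookup table and reuse it against every drive; with a random one, each device has to be attacked from scratch.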

And if this vulnerability weren’t enough, WD used a dismal random number generator to generate KEKs. It was not only predictable, it also didn’t have enough entropy (only 40 bits).

Cryptographic protocols critically rely on cryptographically secure pseudorandom number generators. If these aren’t secure enough, any cryptographic algorithm or protocol using these random numbers will be quite easy to break.

WD’s weak random number generator
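Again for contrast, here is a hedged sketch of generating key material from a cryptographically secure source, using OpenSSL’s RAND_bytes. The helper name and key size are illustrative assumptions.

#include <openssl/rand.h>

/* Hypothetical helper: fill a buffer with key material from the
 * library's CSPRNG instead of a predictable, low-entropy generator.
 * A 32-byte key provides 256 bits of entropy, versus the 40 bits the
 * WD drives reportedly relied on. */
int generate_key(unsigned char *key, int key_len)
{
    /* RAND_bytes returns 1 on success; treat anything else as fatal. */
    return RAND_bytes(key, key_len) == 1;
}

A caller would invoke something like generate_key(dek, 32) and refuse to continue if it returns 0.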

Lesson learned: Have a good understanding of cryptographic constructs and know their limitations. Follow industry best practices for key derivation.

Cause of failure #4: misconfigurations or insecure default configurations

Exploiting the weaknesses of SSLv2 (source)

The DROWN attack, which breaks TLS connections via SSLv2, is a good example of this. You may be using a reasonably secure TLS connection to communicate with a web server, but if that server still supports the old SSLv2 protocol (which it shouldn’t), an attacker can exploit it to break the security provided by TLS and get at your keys and other sensitive information.

SSLv2 has long been considered broken, and no modern client uses it for secure connections. But researchers found that out of 36 million HTTPS servers they probed, 6 million (about 17%) still supported SSLv2.
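On the server side, ruling this out is mostly a matter of configuration. Here is a hedged sketch of creating an OpenSSL (1.1.0 or later) server context that refuses the legacy protocol versions DROWN depends on; modern OpenSSL builds have removed SSLv2 entirely, but being explicit documents intent, and the helper name is my own.

#include <openssl/ssl.h>

/* Hypothetical helper: build a TLS server context with legacy
 * protocols switched off. */
SSL_CTX *make_server_ctx(void)
{
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());
    if (ctx == NULL)
        return NULL;

    /* Refuse SSLv2 and SSLv3 outright... */
    SSL_CTX_set_options(ctx, SSL_OP_NO_SSLv2 | SSL_OP_NO_SSLv3);

    /* ...and set an explicit floor on the protocol version. */
    SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION);

    return ctx;
}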


The same research also uncovers another common lazy practice: reusing the same key pair across different servers in an organization. Even when one server supports only TLS, if other servers sharing its certificate still support SSLv2, the TLS-only server is vulnerable as well.

Lesson learned: a system is only as secure as its weakest link. Try to protect all of your systems at least reasonably well.

There are lots of other ways cryptographic software can fail

Can you think of some additional ways?

It fails due to users. How? Think about social engineering attacks. The RSA SecurID breach is said to have originated from phishing emails that exploited users along with a zero-day vulnerability.

It fails due to unrealistic threat models (Breaking web applications built on top of encrypted data).

It fails due to hardware (Breaking hardware enforced technologies such as TPM with hypervisors).

It fails due to side channels (Timing attacks on RSA, DH and DSS algorithms).
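Side channels deserve one concrete illustration. Timing attacks work because the time an operation takes can depend on secret data. A much simpler cousin of the RSA/DH/DSS attacks above is comparing a secret value, say a MAC tag, with memcmp(), which typically returns at the first mismatching byte. Below is a minimal sketch of the standard countermeasure, a constant-time comparison; it is a generic illustration of the principle, not a fix for those specific attacks.

#include <stddef.h>

/* Compare two equal-length secret buffers without an early exit, so the
 * running time reveals nothing about where the first difference occurs. */
int constant_time_equals(const unsigned char *a,
                         const unsigned char *b, size_t len)
{
    unsigned char diff = 0;

    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];    /* accumulate differences across all bytes */

    return diff == 0;           /* 1 if identical, 0 otherwise */
}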

As you can see, cryptographic software can fail for many reasons. Are we really doomed to never get cryptographic software right? Or can we at least reduce the number of such failures? Why can’t we learn from the past and stop the same mistakes from happening again and again? What tools will help us spot most of these issues?

Our situation actually isn’t all that bleak. There are ways to prevent most of the failures discussed above. In a follow up post, I’ll explore the topic of how we can make cryptographic software fail less often.

Thanks for reading.

Further Reading