27 January 2016
Bulletproof TLS Newsletter is a free periodic newsletter bringing you commentary and news surrounding SSL/TLS and Internet PKI, designed to keep you informed about the latest developments in this space. Maintained by Hanno Böck.
Researchers from the Prosecco team at INRIA published a number of attacks that exploit the use of weak hash functions in TLS and other protocols. They called their attack SLOTH (Security Losses from Obsolete and Truncated Transcript Hashes). The most severe attack affects systems that use client certificates and continue to support RSA-MD5 signatures.
It has been known since 2005 that MD5 hash collisions are easy to create. Many practitioners have argued in the past that hash collisions don’t matter in certain scenarios and that the security of many protocols relies only on so-called second-preimage resistance. With their new publication, the INRIA researchers debunked many of these claims.
The most notable scenario where an attack is possible involves client certificates. If a user authenticates to a malicious server with a client that supports RSA-MD5 signatures, the server can use the information provided to impersonate the user on some other target server (which must also support RSA-MD5).
A surprising aspect is that TLS 1.2 is vulnerable to this attack while prior versions are not. The reason is that TLS 1.2 allows negotiation of the signature algorithm (prior versions always use a concatenation of MD5 and SHA1) and, crucially, still supports the insecure MD5 as an option. This is remarkable given that TLS 1.2 was published in 2008, several years after practical attacks against MD5 were announced.
The attack is made worse by the fact that various implementations accept RSA-MD5 signatures even when they advertise that they don’t. Several cryptographic libraries have received updates in response to this research, including NSS, GnuTLS, BouncyCastle, and mbedTLS. Old versions of OpenSSL (before 1.0.1f) were also affected.
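For deployments where the TLS stack’s configuration can be controlled, one precaution is to stop advertising MD5-based signature algorithms altogether. As a minimal sketch using GnuTLS priority-string syntax (the exact keyword set depends on the GnuTLS version in use):

```
# GnuTLS priority string: start from the NORMAL set and remove
# RSA-MD5 from the advertised signature algorithms
NORMAL:-SIGN-RSA-MD5
```

This string can be passed to gnutls-cli via --priority or to applications that accept a GnuTLS priority configuration.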
In a talk at the 32nd Chaos Communication Congress (32C3), Nick Sullivan from CloudFlare mentioned a new potential attack against the TLS handshake called CurveSwap. The attack is based on the fact that the negotiation of the elliptic curve used for a TLS connection is unauthenticated. The attack poses only a theoretical risk, but it may be advisable to disable potentially weak elliptic curves.
To exploit this weakness, an attacker would need to calculate a discrete logarithm in some weak curve that both the server and the client support. An attacker who could do that would be able to force the connection onto this weak curve.
At the moment the attack is only theoretical, because TLS doesn’t support elliptic curves weak enough for an attacker to break. However, it seems reasonable to remove support for potentially weak curves from TLS implementations as a precaution. Almost all TLS implementations default to the NIST P-256 curve or stronger, so removing all curves weaker than 256 bits should cause no issues. Preliminary research in recent years has also called the security of binary curves into question, so disabling them seems reasonable as well.
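Server software that exposes curve configuration can apply this precaution directly. A minimal sketch for nginx (prime256v1 is the OpenSSL identifier for NIST P-256; adjust to your deployment):

```nginx
# Offer only the NIST P-256 curve for ECDHE key exchange,
# so no weaker curve can be negotiated
ssl_ecdh_curve prime256v1;
```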
In the December newsletter we briefly mentioned the discovery of backdoors in some Juniper devices. Since then, several researchers have uncovered further interesting information. Questions remain about whether Juniper itself added the backdoors or whether they were planted by a third party.
Ralph-Philipp Weinmann published his analysis of the backdoor related to the Dual EC DRBG random number generator. It has been known for a long time that whoever creates the parameters for Dual EC can also create a secret parameter that allows them to predict the generator’s output. According to Weinmann, Juniper did not use the standard parameters for Dual EC, which are likely backdoored by the NSA. When the backdoor was introduced, these parameters were changed. So it looks like an original backdoor was later turned into a different backdoor by someone else. Dual EC was chained with another random number generator (ANSI X9.31), which would have neutralized the backdoor, but Willem Pinckaers found that a bug in the code left this second random number generator inactive.
Further research on the matter was presented by Hovav Shacham at the Real World Crypto conference. When Dual EC was originally introduced into ScreenOS, another change happened at the same time: the nonce for the IKE part of the IPsec protocol was extended from 20 to 32 bytes. As it turns out, 32 bytes is just enough to enable an attack on Dual EC. It therefore seems possible that Dual EC was originally added in order to create a backdoor, but that someone else (not Juniper) later changed the parameters to take control of it.
Juniper published a blog post explaining the use of Dual EC and announced that it will remove this questionable random number generator in future versions of ScreenOS.
The second backdoor in the Juniper devices was a default SSH password. H. D. Moore from Rapid7 published that password in his blog post.
Juniper isn’t the only company with apparent backdoors in its devices. A mail on the Full Disclosure mailing list exposed another potential SSH authentication backdoor, this time in devices from Fortinet. Unlike the Juniper case, this wasn’t a simple SSH password backdoor; instead it was a custom SSH authentication method. Fortinet published a statement denying that this was a backdoor. The issue affects only older versions of the FortiOS firmware and was fixed in 2014.
A few days later Cisco also published a security advisory announcing that several of its router models shipped with a default SSH password.
In the December newsletter we talked about plans of CloudFlare and Facebook to continue using SHA1-signed certificates. They proposed a new certificate validation method called Legacy Validation (LV) that would allow them to continue to support SHA1 certificates in the future. Recently Twitter joined that call. Some certification authorities appear to be planning to issue such certificates without going through the formal process of changing the rules of the CA/Browser Forum.
Comodo has pulled one of its old root certificates, no longer used for normal issuance, from browsers and now uses it to deliver SHA1-signed certificates. Symantec has recently also pulled an old root certificate from browsers and announced that it no longer intends to comply with the Baseline Requirements for this root. Symantec has not publicly declared what it intends to do with the certificate, nor has it answered the question of whether it plans to use it in a similar way to Comodo. It said only that the certificate “will now be repurposed to provide transition support for some of our enterprise customers’ legacy, non-public applications”.
Chrome developer Ryan Sleevi posted two long texts (Part 1, Part 2) where he explains why he thinks the LV proposal is a bad idea. Nick Sullivan from CloudFlare joined the debate and explained how entropy in the serial number of certificates can mitigate attacks against the weak hash function.
For Mozilla, the removal of SHA1-support caused some trouble. On January 1st Firefox 43 stopped supporting certificates signed with the SHA1 algorithm. It turns out that some software products using man-in-the-middle attacks to intercept TLS traffic support only SHA1. As a result, they stopped working when Firefox disabled SHA1 certificate support. Therefore Mozilla has temporarily re-enabled SHA1 support in Firefox 43.0.4.
The OpenSSH client had a severe vulnerability (CVE-2016-0777) that could leak the user’s private key to a malicious server. The faulty code was part of a roaming feature that the client implements but which is rarely used, because the OpenSSH server code does not support it. In that respect the vulnerability resembles both the Heartbleed and Shellshock bugs: all three affected a rarely used feature that was enabled by default.
The vulnerability was discovered by Qualys and is based on an integer overflow that causes an out-of-bounds memory read, allowing a malicious server to read parts of the client’s memory. Due to the I/O buffering of the libc functions, this memory may contain pieces of the user’s private key.
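Until the fixed release is installed, the leak can be avoided client-side by disabling the roaming feature, which was the mitigation recommended at the time. A sketch of the relevant OpenSSH client configuration:

```
# ~/.ssh/config: disable the undocumented roaming feature
# (mitigates CVE-2016-0777 and CVE-2016-0778 until OpenSSH is updated)
Host *
    UseRoaming no
```

The same option can be passed on the command line as ssh -oUseRoaming=no.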
Filippo Valsorda added a check for the vulnerable roaming code to a test server he runs. (The test was originally designed to show user identification based on the SSH public keys in their GitHub accounts.)
Qualys also found a buffer overflow (CVE-2016-0778), but it is exploitable only in rare circumstances when a user enables several non-standard features. The OpenSSH release 7.1p1 fixes both issues as well as another out-of-bounds read (CVE-2016-1907) found by Ben Hawkes.