29 Dec 2023
Feisty Duck’s Cryptography & Security Newsletter is a periodic dispatch bringing you commentary and news surrounding cryptography, security, privacy, SSL/TLS, and PKI. It's designed to keep you informed about the latest developments in this space. Enjoyed every month by more than 50,000 subscribers. Written by Ivan Ristić.
The SSH protocol was found to be vulnerable to active network (MITM) attacks that enable threat actors to silently drop an arbitrary number of messages from the beginning of the secure channel, a prefix truncation attack the researchers have named Terrapin. Although SSH is supposed to guarantee secure channel integrity, in practice it falls short: several widely used encryption modes, notably ChaCha20-Poly1305 and CBC ciphers combined with Encrypt-then-MAC, are not fully secure.
Most clients and servers are vulnerable to this attack, although some encryption modes are not affected. The researchers estimate that about 57 percent of servers currently prefer one of the vulnerable modes. Terrapin can be defeated if strict key exchange (the kex-strict countermeasure introduced in OpenSSH 9.6) is enabled, but this mode has to be supported by both client and server; be very careful if reconfiguration is needed, or you may lose access to your server. The researchers are keeping track of which SSH implementations have been patched to support and use strict key exchange.
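The conditions above can be sketched in a few lines of logic. This is an illustrative sketch, not a scanner: the negotiation rule (first client preference the server also supports) follows the SSH transport protocol, and the kex-strict marker names follow OpenSSH's published extension, but the function names and the exact set of vulnerable modes are simplifications of the Terrapin paper's findings.

```python
# Sketch: decide whether an SSH connection is potentially vulnerable to
# Terrapin, given the algorithm lists both sides send in SSH_MSG_KEXINIT.
# Illustrative only; a real check must parse the actual handshake.

def negotiate(client_prefs, server_prefs):
    """SSH picks the first client-preferred algorithm the server supports."""
    for alg in client_prefs:
        if alg in server_prefs:
            return alg
    return None

def terrapin_vulnerable(client_kex, server_kex,
                        client_ciphers, server_ciphers,
                        client_macs, server_macs):
    # Strict key exchange defeats the attack, but only if BOTH sides
    # advertise it in their kex algorithm lists.
    if ("kex-strict-c-v00@openssh.com" in client_kex and
            "kex-strict-s-v00@openssh.com" in server_kex):
        return False
    cipher = negotiate(client_ciphers, server_ciphers)
    mac = negotiate(client_macs, server_macs)
    if cipher is None:
        return False  # no shared cipher; the connection fails anyway
    # ChaCha20-Poly1305 is exploitable directly; CBC ciphers are
    # exploitable when combined with Encrypt-then-MAC.
    if cipher == "chacha20-poly1305@openssh.com":
        return True
    if cipher.endswith("-cbc") and mac and mac.endswith("-etm@openssh.com"):
        return True
    return False

# Example: server prefers ChaCha20-Poly1305 but does not signal strict kex.
print(terrapin_vulnerable(
    client_kex=["curve25519-sha256", "kex-strict-c-v00@openssh.com"],
    server_kex=["curve25519-sha256"],  # no kex-strict-s marker
    client_ciphers=["chacha20-poly1305@openssh.com", "aes128-ctr"],
    server_ciphers=["chacha20-poly1305@openssh.com"],
    client_macs=["hmac-sha2-256"],
    server_macs=["hmac-sha2-256"],
))  # → True
```

Note that the check requires the strict-kex marker from both peers: a patched server talking to an unpatched client (or vice versa) gains no protection, which is why coordinated upgrades matter here.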
On a related note, we spent the better part of the last decade developing TLS 1.3, QUIC, and HTTP/3, so perhaps it makes sense to use these as the foundation for other protocols as well? Now someone has done exactly that: SSH3 is a variant of the SSH protocol built on top of HTTP/3. The trend of running everything on port 80 (now 443) continues.
When it comes to domain name validation for certificate issuance, does it count as delegation to a third party if a CA uses an online tool or a caching DNS resolver operated by another company? This is an important question because the Baseline Requirements do not allow any delegation of domain name validation to third parties (section 1.3.2).
Buypass, a CA from Norway, is dealing with an incident at the time of writing that stems from the company using Google’s public DNS server as part of its validation process. Andrew Ayer considers this to be delegation [to a third party] and so does Ryan Sleevi. There is a precedent involving Izenpe (another CA) from three years ago.
Reading Baseline Requirements 2.0.1, the correct answer is not obvious. The document defines a delegated third party, but there is no mention or discussion of what constitutes infrastructure. Although using someone else’s caching DNS resolver is definitely not a good idea in this situation, it is not unreasonable to take the view that a deterministic third-party service is a building block (infrastructure) in a process controlled by the CA. On the other hand, if we take the view that CAs must have 100 percent control of the validation process, wouldn’t that make internet connectivity delegation as well, given that third parties act as intermediaries in the communication, delivering IP packets to and from the required destination?
A good solution in this case would be for the Baseline Requirements to explicitly require validation against authoritative DNS servers and, ideally, multiple vantage points for extra security. Because DNSSEC is a thing and validating it isn’t onerous, it should be included in the requirements as well.
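The multi-vantage idea can be sketched as simple quorum logic: a CA queries the domain’s authoritative nameservers from several network vantage points and accepts the validation record only when enough of them independently agree. The lookup itself is abstracted away here (in practice it would go directly to the authoritative servers, with DNSSEC validation where available), and all names are illustrative rather than taken from any real CA implementation.

```python
# Sketch: quorum-based corroboration of a DNS validation record across
# multiple vantage points. Hypothetical names; the DNS lookup itself is
# out of scope and represented by precollected answers.
from collections import Counter

def corroborate(answers, quorum=3):
    """Accept a validation token only if at least `quorum` vantage
    points independently observed the same answer set."""
    # Each answer is the frozenset of TXT strings seen from one vantage.
    counts = Counter(answers)
    value, seen = counts.most_common(1)[0]
    return value if seen >= quorum else None

# Three of four vantage points agree; one (perhaps behind a hijacked
# network path) reports a different token, and is outvoted.
observed = [
    frozenset({"token-abc123"}),
    frozenset({"token-abc123"}),
    frozenset({"token-evil"}),
    frozenset({"token-abc123"}),
]
print(corroborate(observed))  # → frozenset({'token-abc123'})
```

The design point is that a BGP or DNS hijack has to succeed against a majority of geographically separate vantage points at once, which is considerably harder than fooling a single resolver.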
If we take a wider view, we should also consider building systems and testing procedures that are designed to automatically validate and continuously test CAs’ compliance with those aspects where automation is possible. Adversarial testing, in particular, should be mandatory given the importance of public PKI.
In the EU, the politicians are predictably continuing on their path to change how browsers interpret certificates and identity, with the Industry Committee approving the provisional text. Because of a lack of transparency, it’s difficult to know what exactly was approved. We’re still in the dark as to how browsers will be impacted and how the security of the delicate internet PKI ecosystem will be preserved.
If you recall, hundreds of scientists signed an open letter that warned of the dangers of the legal wording as it currently stands. Since our last newsletter, the Council of European Professional Informatics Societies (CEPIS) agreed as well.
One leaked document, published with commentary by Alec Muffett, gave a rare glimpse into the internal thinking. The signatories of the open letter followed up with their response. It took the world about three decades to get internet PKI to a reasonably secure state. From experience, we know that the EU’s “trust us, we know what we’re doing” approach isn’t going to give us the security we need.
Researchers are starting to look into whether AI can help us write code that’s more secure. According to one paper, the situation is going to get worse in the short term as they discovered that AI-assisted code resulted in more vulnerabilities. Another study looked at the task of translating code from one programming language to another. Here, too, large language models (LLMs) were found to be significantly lacking.
Coding is an unusual activity in which there is no room for error, so perhaps LLMs are not the best tool for the job? On the other hand, there are a variety of tasks in which LLMs could help identify patterns and improve the signal-to-noise ratio. One study looked at the performance of Microsoft Security Copilot and found that it greatly helped novices.
Here are some things that caught our attention since the previous newsletter:
Designed by Ivan Ristić, the author of SSL Labs, Bulletproof TLS and PKI, and Hardenize, our course covers everything you need to know to deploy secure servers and encrypted web applications.
Remote and trainer-led, with small classes and a choice of timezones.
Join over 2,000 students who have benefited from more than a decade of deep TLS and PKI expertise.