The conventional assumption in computer security was that our primary adversaries were criminals, miscreants, and the security services of our “political foes”. Attacks were expected to be active and to involve the exploitation of vulnerabilities in our systems, because such foes were unlikely to have direct access to our infrastructure. On the basis of these assumptions, and of various allegations, we saw the likes of the US government proposing bans on the use of communications equipment from the Chinese companies Huawei and ZTE in the US.
In the light of the PRISM and TEMPORA revelations, the hypocrisy was deafening. In the light of the latest revelations, it’s hard to find any humor at all.
We have met the enemy, and he is us; our governments have invaded our communications infrastructure, have access to the data behind our services, and have installed backdoors in our software.
If we are to preserve privacy on the web, the need for change has never been greater.
I think it can be said that, at an infrastructure level, our immediate priorities should be:
- Deploying TLS v1.2 with Perfect Forward Secrecy
Older versions of TLS support fewer effective PFS cipher-suites, and often force undesirable trade-offs. With TLS 1.0, we are frequently forced to choose between the weaknesses of RC4 (no longer considered secure as it is used in TLS/SSL) and potential vulnerability to the BEAST attack. It’s safe to say that TLS 1.0, as deployed, is fundamentally broken: while the protocol itself is not completely so, the cipher-suites deployed in the wild all have issues. Deploying a patch to TLS 1.0 would be as difficult as updating to TLS 1.2; an update we should be making anyway.
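As a concrete illustration, here is a minimal Python sketch of a client context that insists on TLS 1.2 and forward-secret key exchange. It assumes Python 3.4 or later, where `ssl.PROTOCOL_TLSv1_2` is available:

```python
import ssl

# Insist on TLS 1.2; no fallback to 1.0/1.1 will be negotiated.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)

# Offer only ECDHE/DHE suites, so every handshake uses ephemeral keys and
# gains forward secrecy; RC4 and static-RSA key exchange are excluded.
ctx.set_ciphers("ECDHE+AESGCM:DHE+AESGCM:ECDHE+AES:DHE+AES")
```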
- Deploying DNSSEC, DANE and Certificate Pinning
Given the known extensive partnerships of the secret services, I think it is fair to say that the CA model has outlived its usefulness. Browsers ship with hundreds of trusted root certificates; it is sheer naivety to assume that none of them have been compromised. I don’t see this as the complete end for CAs; they can continue to provide utility in the form of extended validation services, but their numbers will be reduced and, importantly, it will no longer be necessary to trust every CA in order to trust a domain.

DNSSEC poses issues of its own: it has a single root of trust (the IANA), and we are required to trust our domain registrar as part of the chain of trust down to ourselves. However, it vastly reduces the number of moving parts and trusted authorities, and makes validating that trust significantly easier. Attacks against DNSSEC need to be narrowly targeted to be effective; comparing DNSSEC-signed zones across multiple machines provides a simple way of watching for suspicious behavior.
The other result of DNSSEC is that it makes deploying encrypted services easier. Anything we can do to increase the proportion of encrypted traffic on the internet can only be a good thing.
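To make the DANE side concrete, here is a small Python sketch of how the certificate-association data for a TLSA record can be produced. The file path and domain are hypothetical; the record form shown (“3 0 1”, i.e. DANE-EE, full certificate, SHA-256) is one common choice:

```python
import hashlib

# Read the server certificate in DER form ("server.der" is an assumed path).
with open("server.der", "rb") as f:
    cert_der = f.read()

# Matching type 1 means the record carries the SHA-256 digest of the
# certificate, rather than the certificate itself.
digest = hashlib.sha256(cert_der).hexdigest()

# The resulting zone entry, signed under DNSSEC, pins the certificate:
print("_443._tcp.example.com. IN TLSA 3 0 1 " + digest)
```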
- Development of TLS v2?
The existing TLS specification is a gradual evolution of SSL. It is therefore old, battle-tested and, as a protocol, a well-known quantity. As a side effect, it is also complex: it contains many misfeatures, and our evolving understanding of cryptography points out many parts of TLS which are at the very least suboptimal, and often highly problematic. Mitigating its many design flaws has resulted in huge increases in the complexity of the codebases implementing TLS; large portions now need to run in constant time to avoid timing-based side-channel attacks. NSS and OpenSSL are monstrous, often convoluted code bases, and validating them is troublesome: performing constant-time cryptography on modern processors is increasingly difficult, and identifying the ways in which an implementation can become non-constant-time is nigh impossible.

Given all we know today, I think it’s time to take a step back from TLS, take a hard look, and fix every known issue, misfeature and design problem. TLS 2.0 need not have more in common with TLS 1.2 than the initial hello packets. It should be built on modern, known-good primitives: PKCS #1 v1.5 padding wholly unsupported, replaced by OAEP; authenticated encryption used where possible, and encrypt-then-MAC where not (a sketch of that ordering follows below).

The cipher-suite list should be slimmed down; current “good practice” ciphers like RSA and AES and authenticated encryption modes like AES-GCM can stay, but state-of-the-art systems like Salsa20 and Curve25519 should also be included and recommended. While it is likely that the NSA has cryptanalytic knowledge we do not, we know of no near-practical attacks against the current good-practice ciphers, and the nature of cryptanalysis suggests that their capabilities are unlikely to bring them close to breaking either. Even cryptanalytic attacks considered “groundbreaking” rarely result in practically exploitable flaws.
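Here is a minimal Python sketch of the encrypt-then-MAC construction mentioned above. It assumes a recent version of the third-party `cryptography` package for AES-CTR (any cipher would do); the point is the ordering: authenticate the ciphertext, and verify before decrypting anything.

```python
import hashlib
import hmac
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    # Encrypt first...
    iv = os.urandom(16)
    encryptor = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ciphertext = iv + encryptor.update(plaintext) + encryptor.finalize()
    # ...then MAC the ciphertext (including the IV), not the plaintext.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify_then_decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    # Constant-time comparison; reject before the cipher ever sees the data.
    if not hmac.compare_digest(expected, tag):
        raise ValueError("MAC verification failed")
    iv, body = ciphertext[:16], ciphertext[16:]
    decryptor = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return decryptor.update(body) + decryptor.finalize()
```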
The real reason to change is that the NIST-sanctioned cipher-suites are not designed with us in mind. Constant-time implementations of AES in software are notoriously difficult, and this goes double for modes like AES-GCM. SHA-3 is built from primitives which are efficient to implement in hardware but difficult in software; the NIST elliptic curves use constants which make for inefficient software implementations (never mind the lack of any rationale for the choice of those constants).
Cryptographic algorithms designed with software in mind reduce the cost of implementing them correctly, increasing security, while also reducing the number of corner cases and attacks which can leak information on common hardware. They additionally reduce the performance gap against a well-equipped adversary (such as a government security agency).
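A toy Python illustration of the kind of leak in question: the naive comparison below returns as soon as a byte differs, so its running time reveals how long the matching prefix is. This is exactly the class of corner case a software-friendly design tries to eliminate.

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            # Early exit: the time taken leaks the position of the
            # first mismatching byte to anyone who can measure it.
            return False
    return True

# The standard library's answer: compare in time independent of content.
ok = hmac.compare_digest(b"expected-tag", b"received-tag")
```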
The prime objective of a TLS revision should be that a TLS2 implementation be relatively concise and easy to validate, without the inscrutable complexity of TLS1. The TLS2 paths in OpenSSL and NSS should not be as convoluted and twisty as those for TLS1, and the preferred algorithms should be those which do not require large tables or similar constructions liable to suffer from side-channel attacks.
A good rule of thumb is that the maths is hard to subvert; the code is easy. To that end, we should push towards simpler protocols which are easier to analyze. We should also push towards better defaults from the libraries we use; it is ridiculous that OpenSSL, for example, doesn’t come secure by default.
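To see what “not secure by default” means in practice, consider this Python sketch (again assuming Python 3.4+): a bare SSLContext verifies nothing, and every safety property has to be switched on explicitly by the caller.

```python
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)

# None of the following is on by default; forget any one of them and the
# connection silently accepts any certificate for any host.
ctx.verify_mode = ssl.CERT_REQUIRED   # actually check the certificate chain
ctx.check_hostname = True             # actually check it names the right host
ctx.load_default_certs()              # actually load a set of trust anchors

with ctx.wrap_socket(socket.create_connection(("example.com", 443)),
                     server_hostname="example.com") as s:
    print(s.cipher())
```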