Is Convergence really solving the SSL problem?
At DEFCON 2011, Moxie Marlinspike presented a possible solution to the “big SSL problem”: Convergence, a clever way to remove the need for certificate authorities. But is it really going to solve the problem?
Let’s step back a little. We all know that SSL is kind of broken because of the need to rely on certificate authorities. Moxie himself has a great blog post on the subject. In short: we don’t want to pay certificate authorities, we have too many of them (up to 70 in recent browsers/OSes), we can’t really trust all of them, and we don’t have an easy way to revoke trust in the certificates they issue.
Convergence starts from the idea that it would be really great to avoid CAs altogether and use self-signed certificates, but self-signed certificates are vulnerable to man-in-the-middle (MITM) attacks. So the clever idea is noting that MITM is a local attack: it’s either someone next to you drinking a coffee at Starbucks, or someone that hacked your ISP’s DNS, or maybe someone working for a corrupt government that’s hijacking traffic at the BGP level. It’s unrealistic that the same MITM attack can affect you, someone in Iceland, someone in China, someone in West Virginia, and someone in Italy at the same time, right? And that’s what Convergence exploits: it gives you a way to compare the SSL certificate fetched from the website you want to visit by many different servers around the world, called “notaries”. If they all match, a MITM attack is impossible, and you can trust the self-signed certificate and proceed to log in to your bank. Right?
Wrong. Because you know what else self-signed certificates are vulnerable to, in addition to MITM? Lies. And you know who lies? A phisher. So if a phisher registers bankoffamerica.com (pay attention to the typo!) and self-signs the website with an SSL certificate saying that the organization behind the website really is Bank of America Corporation, incorporated in Delaware, all the notaries will report that the certificate fetched from different parts of the world is exactly the same, and you will get absolutely no warning.
And what is worse is that you will lose any EV indication while browsing with Convergence, since Convergence simply does not currently support any way to validate the identity of the website. As Moxie himself says, “Convergence does not enable EV for self-signed certs. It is concerned with authenticity, not identity”.
So to clarify: if you start browsing today with Convergence (and assuming you get a good set of notaries to bootstrap), you get the following effects:
- You will be immune to MITM attacks performed with rogue certificates, as in the current DigiNotar debacle.
- The CA list in your browser will be effectively ignored.
- You will stop seeing any information from EV certificates (the identities of the sites you browse).
I personally don’t consider this a good compromise. It might be because I don’t live in China or Iran, but MITM attacks are an order of magnitude less common than phishing attacks, and people are slowly learning to trust EV certificates when browsing the Internet. It’s true that EV SSL certificates could be forged as well, I don’t dispute that. But right now, with Convergence, I’m trading a simple protection against a common set of attacks for a good protection against a rare set of attacks.
I reached out to Moxie with these concerns, and he clarified that, technically speaking, Convergence could be enhanced to check for existing EV SSL certificates (through a custom notary), but that he doesn’t see EV certificates as solving any real problem nowadays. I beg to disagree: I don’t like EV per se either, but I think the identity problem is still something that must be solved on the Internet, and it’s probably even more important than solving the MITM problem.
Given that the number of websites targeted by phishing attacks is relatively small, because it’s mainly a group of high-profile sites (banks, web mails, social networks, etc.), and given that SSL certificates do not change very often (usually no more than once a year for a high-profile website), there must be a way to conceive a global list of validated certificates for which an identity can be certified, even through a crowd-sourced mechanism. It has to be simpler than what GPG attempts to do with key-signing parties, because we don’t need to certify the identity of Mr John Green that you never met before, plus another billion people; we just need to certify that Google is Google and PayPal is PayPal, for a thousand or so high-profile websites. If you ask 10 people in 10 different countries to give you the SSL fingerprint for “Google, Inc.”, and you get 10 identical fingerprints, you can be 100% sure that the certificate you got really belongs to “Google, Inc.”. And if Google commits to using the same certificate for the next 3 months, you could globally cache this information and distribute it to web users through notaries, so that their address bar says “This is a certified Google Inc. website”. Or, in other words, “This is the same Google Inc. website that 1 million people have visited in the last 2 hours, and 100 million in the last 24 hours”.
If Convergence could be augmented to do something similar, I think we would be getting closer to a real solution to the SSL problem.
Golomb-coded sets: smaller than Bloom filters
While reading the super-interesting Imperial Violet, Adam Langley’s weblog, I stumbled upon a data structure that I had never heard of: the Golomb-coded set (GCS). It is a probabilistic data structure conceptually similar to the famous Bloom filter, but with a more compact in-memory representation and a slower query time.
A Bloom filter with the optimal number of hash functions for the desired false-positive rate occupies about N * log2(e) * log2(1/P) bits of memory, where N is the number of elements you want to store and P is the false-positive probability. To put things into perspective, let’s say you want to store 100K elements with 1 false positive every 8K elements. Given that log2(8192) = 13 and log2(e) ≈ 1.44, you need 100K * 1.44 * 13 ≈ 1,872,000 bits, that is about 1.87 Mbit, or roughly 229 KiB of memory.
The theoretical minimum for a similar probabilistic data structure is N * log2(1/P) bits, so a Bloom filter uses roughly 44% more memory than strictly necessary (log2(e) ≈ 1.44). GCS is a way to get closer to that minimum.
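To make the arithmetic concrete, here is a quick sketch of both formulas in Python (the function names are mine, just for illustration):

import math

def bloom_filter_bits(n, p):
    # Optimal Bloom filter size: n * log2(e) * log2(1/p)
    return n * math.log(math.e, 2) * math.log(1.0 / p, 2)

def theoretical_min_bits(n, p):
    # Information-theoretic lower bound: n * log2(1/p)
    return n * math.log(1.0 / p, 2)

print bloom_filter_bits(100000, 1.0 / 8192)     # ~1,872,000 bits (~229 KiB)
print theoretical_min_bits(100000, 1.0 / 8192)  # ~1,300,000 bits (~159 KiB)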
GCS is well suited to situations where you want to minimize memory usage and can afford a slightly higher computation time, compared to a Bloom filter. Google Chromium, for instance, uses it to keep a local (client-side) copy of the SSL certificate revocation lists (CRLs); the lower memory usage matters in constrained scenarios (e.g. mobile), and they can afford the structure to be a little slower than a Bloom filter, since it is still much faster than an SSL handshake (a double network round trip).
Turning words into values
GCS is actually quite simple, and I will walk you through it step by step. First, let’s agree on a dictionary of words you want to put into the set, for instance, the NATO alphabet:
['alpha', 'bravo', 'charlie', 'delta', 'echo', 'foxtrot',
'golf', 'hotel', 'india', 'juliet', 'kilo', 'lima', 'mike',
'november', 'oscar', 'papa', 'quebec', 'romeo', 'sierra',
'tango', 'uniform', 'victor', 'whiskey', 'xray', 'yankee', 'zulu']
We want to create a data structure for these words with a 1-in-64 false-positive probability. This means that if we check the whole English dictionary against it, about 1 word in every 64 will appear to be present even though it is not.
We compute a single hash key for each element, as an integer in the range [0, N*P), where N is the number of elements and P is the inverse of the false-positive probability. Since N=26 (the length of the NATO alphabet) and P=64, the range is [0, 1664). As in the case of Bloom filters, we want this hash to be uniformly distributed across the domain, so a cryptographic hash like MD5 or SHA1 is a good choice (and no, the known attacks on MD5 do not matter much in this scenario). We also need a way to reduce a 128-bit or 160-bit hash to a number in the range [0, 1664); a simple modulus is good enough not to noticeably affect the distribution. We will then compute the hash as follows:
from hashlib import md5

def gcs_hash(w, (N, P)):
    """Hash value for a GCS with N elements and 1/P probability
    of false positives.

    We just need a hash that generates uniformly-distributed
    values for best results, so any crypto hash is fine. We
    default to MD5."""
    h = md5(w).hexdigest()
    # Take the last 32 bits of the digest and reduce them modulo N*P.
    h = long(h[24:32], 16)
    return h % (N * P)
If we apply this function over the input words, we get these hash values:
[('alpha', 1017L), ('bravo', 591L), ('charlie', 1207L), ('delta', 151L),
('echo', 1393L), ('foxtrot', 1005L), ('golf', 526L), ('hotel', 208L),
('india', 461L), ('juliet', 1378L), ('kilo', 1231L), ('lima', 192L),
('mike', 1630L), ('november', 1327L), ('oscar', 997L), ('papa', 662L),
('quebec', 806L), ('romeo', 1627L), ('sierra', 866L), ('tango', 890L),
('uniform', 1134L), ('victor', 269L), ('whiskey', 512L), ('xray', 831L),
('yankee', 1418L), ('zulu', 1525L)]
Let’s now just get the hash values and sort them. This is the result:
[151L, 192L, 208L, 269L, 461L,
512L, 526L, 591L, 662L, 806L,
831L, 866L, 890L, 997L, 1005L,
1017L, 1134L, 1207L, 1231L,
1327L, 1378L, 1393L, 1418L,
1525L, 1627L, 1630L]
Remember that the range was [0, 1664). Given that we used a cryptographic hash, we expect these values to be uniformly distributed across that range. They look like it at a glance, and it obviously gets much better with real-world data sets, which are much larger. Plotting the values is an easy way to double-check the distribution.
We now want to compress this set of numbers in the most efficient way. General-purpose algorithms like zlib are obviously the wrong choice here, since they work by finding repeated strings, and the 16-bit or 32-bit encodings of the above numbers would look like random data to zlib. Some compression theory comes to the rescue: the best way to compress a sorted set of uniformly distributed values is to compute the array of differences between consecutive values, which follows a geometric distribution, and then use Golomb coding on it. Did I lose you? Let’s see it one step at a time.
If we compute the increments (differences) between consecutive values of a uniformly distributed set, the result follows a geometric distribution. Recall that we originally decided on a range of exactly 26*64, and then we picked 26 uniformly distributed values within it. If you were to bet on the typical distance between a value and the next one, wouldn’t you say “64”? Indeed, most distances are going to be somewhere around 64, while far larger values are extremely unlikely. This intuition matches the geometric distribution (whose counterpart in the continuous domain is the exponential distribution).
In our example, this is the array of differences:
[151L, 41L, 16L, 61L, 192L, 51L, 14L, 65L, 71L, 144L,
25L, 35L, 24L, 107L, 8L, 12L, 117L, 73L, 24L, 96L,
51L, 15L, 25L, 107L, 102L, 3L]
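For reference, here is a minimal sketch of how the sorted values and their differences can be computed, assuming the NATO list above is stored in a variable called words, and reusing the gcs_hash function defined earlier:

N, P = len(words), 64
sorted_hashes = sorted(gcs_hash(w, (N, P)) for w in words)

# Differences between consecutive values; the first one is taken from 0.
diffs = [h - prev for prev, h in zip([0] + sorted_hashes, sorted_hashes)]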
Again, we can check this with a little plot (after sorting the differences).
The parameter p of this geometric distribution should be roughly the false-positive probability we chose above (1/64). To double-check, we can estimate p by dividing the number of values by their sum: 26 / 1630 ≈ 0.01595, which is close enough to 1/64 = 0.015625. Again, with a larger input set the numbers would be even closer.
We now want to compress this set of differences with Golomb coding. As Wikipedia says, “alphabets following a geometric distribution will have a Golomb code as an optimal prefix code, making Golomb coding highly suitable for situations in which the occurrence of small values in the input stream is significantly more likely than large values”. In fact, we are going to use a simplified sub-case of Golomb coding, in which the parameter (64, in our case) is a power of 2. This sub-case is called Rice coding.
Back to the intuition: we are going to compress values which are very likely to be around 64, and very unlikely to be much bigger; 128 is unlikely, 192 is very unlikely, 256 is very very unlikely, and so on. Golomb coding splits each value into two parts: the quotient and the remainder of the division by the parameter. Given what we just said about the likelihood of each value, you should expect the quotient to most likely be 0 or 1, unlikely to be 2, very unlikely to be 3, very very unlikely to be 4, and so on. The remainder, on the other hand, is basically a random number we can’t infer much about: it’s the high-frequency oscillation which is impossible to predict. Golomb (Rice) coding simply encodes the quotient in base 1 (unary encoding) and the remainder in base 2 (binary encoding). Unary encoding might sound weird at first, but it’s really simple:
Number  Unary encoding
0       0
1       10
2       110
3       1110
4       11110
So we emit as many 1s as the number we want to encode (the quotient), followed by a zero. Then we emit the binary encoding of the remainder using exactly 6 bits (since it will be a number between 0 and 63). Thus, a number between 0 and 63 will be exactly 7 bits long: 1 bit for the quotient (0) and 6 bits for the remainder. A number between 64 and 127 will be 8 bits long (quotient 10, plus the remainder); a number between 128 and 191 will be 9 bits long (quotient 110); and so on. Smaller numbers are as compact as possible, while higher and less likely numbers get longer and longer. Let’s see our array of differences properly encoded:
Number Quot Rem Golomb encoding
151 2 23 110 010111
41 0 41 0 101001
16 0 16 0 010000
61 0 61 0 111101
192 3 0 1110 000000
51 0 51 0 110011
14 0 14 0 001110
65 1 1 10 000001
71 1 7 10 000111
144 2 16 110 010000
25 0 25 0 011001
35 0 35 0 100011
24 0 24 0 011000
107 1 43 10 101011
8 0 8 0 001000
12 0 12 0 001100
117 1 53 10 110101
73 1 9 10 001001
24 0 24 0 011000
96 1 32 10 100000
51 0 51 0 110011
15 0 15 0 001111
25 0 25 0 011001
107 1 43 10 101011
102 1 38 10 100110
3 0 3 0 000011
And if we concatenate all the output, we get our final Golomb-coded set of numbers:
11001011 10101001 00100000 11110111
10000000 01100110 00111010 00000110
00011111 00100000 01100101 00011001
10001010 10110001 00000011 00101101
01100010 01001100 01010000 00110011
00011110 01100110 10101110 10011000
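For the curious, here is a minimal sketch of a Rice encoder that produces this kind of bit stream, reusing the diffs list computed earlier. It works on a string of '0'/'1' characters for readability; a real implementation would pack the bits into bytes:

def golomb_encode(diffs, P):
    # Rice coding: P must be a power of 2, so each remainder fits
    # in exactly log2(P) bits.
    rem_bits = P.bit_length() - 1
    out = []
    for d in diffs:
        q, r = divmod(d, P)
        out.append('1' * q + '0')                 # quotient, unary
        out.append(format(r, '0%db' % rem_bits))  # remainder, fixed-size binary
    return ''.join(out)

encoded = golomb_encode(diffs, 64)
print len(encoded)   # 197 bits for the NATO example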
197 bits (25 bytes, after padding) to encode 26 arbitrarily long words with a ~1.5% false-positive rate. That’s about 7.6 bits per word. Not bad! The theoretical minimum was 26 * log2(64) = 156 bits, so we’re still a little off in this example, but still better than an optimal Bloom filter, which would require about 225 bits. The example I chose is obviously too small, and thus heavily affected by the specific words and the output of MD5. I ran the same algorithm over a 640K-word English dictionary with an expected false-positive probability of 1/1024, and I got a 7,405,432-bit GCS, which is about 11.58 bits per word. An optimal Bloom filter for the same set would take 9,227,646 bits, while the theoretical minimum would be 6,396,530 bits. Quite an improvement, in fact.
Decompression and query improvements
So how do we now query for a word to see if it’s in the set or not? We just need to reverse all the steps.
We start going through the bits. We extract the quotient Q by counting the number of consecutive 1s before the terminating 0; then we extract the fixed-size remainder R (exactly log2(P) bits, 6 in our example). We compute the original difference (Q*P+R) and accumulate it into a running total, so that we regenerate the original sorted set of hash values one element at a time. We don’t need to actually expand the whole set in memory: as we go through the bits and compute one hash value at a time, we can compare it with the one being queried for, to see if there’s a match.
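Here is a sketch of this linear query, again operating on a '0'/'1' string for readability (gcs_hash and golomb_encode are the functions sketched above):

def gcs_query(encoded, word, (N, P)):
    # Rebuild one hash value at a time and compare it with the hash of the
    # queried word. Assumes `encoded` holds exactly the encoded stream,
    # with no trailing byte padding.
    target = gcs_hash(word, (N, P))
    rem_bits = P.bit_length() - 1
    acc = 0
    i = 0
    while i < len(encoded):
        q = 0
        while encoded[i] == '1':      # quotient, unary-coded
            q += 1
            i += 1
        i += 1                        # skip the terminating '0'
        r = int(encoded[i:i + rem_bits], 2)
        i += rem_bits
        acc += q * P + r              # next hash value in the sorted set
        if acc >= target:
            return acc == target
    return False

print gcs_query(encoded, 'tango', (26, 64))     # True
print gcs_query(encoded, 'trogdor', (26, 64))   # False, unless it is a false positive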
After reversing the encoding and the difference steps, the underlying set of hash values is sorted, so going through it in linear order sounds slower than it could be. One might think of doing a bisection search, but there is no easy way to jump to an arbitrary index in the encoded set, since the encoded elements have different sizes in bits and can only be decoded one at a time, in linear order.
What we can do to improve query time a bit is to compute an index that allows seeking within the encoded set. For instance, if you want to make queries 32 times faster, you can split the original domain [0, N*P) into 32 subdomains of equal size N*P/32. Then, for each subdomain, you find the smallest hash value that is part of it, and save its bit index within the encoded GCS in the index. Since the hash values are uniformly distributed, each subdomain will contain roughly N/32 values, so by seeking into it at query time you will need to decode only 1/32nd of the whole GCS, thus getting the 32x speedup in query time. In the full English dictionary example cited above, an additional index made of 32 entries of 32 bits each is just 1,024 bits of additional memory; compared to a 7-million-bit GCS, it’s a good deal to obtain a 32x speedup in query time!
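As a sketch, the index could be built like this. Besides the bit offset, I also store the hash value at which decoding resumes, so the query code can restart the running sum from there; that extra field is my own addition, not something described above. The bit_offsets list (the starting bit position of each encoded difference) is assumed to have been recorded during encoding:

def build_gcs_index(sorted_hashes, bit_offsets, N, P, buckets=32):
    index = []
    bucket_size = N * P // buckets
    i = 0
    for b in range(buckets):
        lower = b * bucket_size
        # First hash value at or after the lower bound of this subdomain.
        while i < len(sorted_hashes) and sorted_hashes[i] < lower:
            i += 1
        if i < len(sorted_hashes):
            index.append((bit_offsets[i], sorted_hashes[i]))
        else:
            index.append(None)   # no values left for this subdomain
    return index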
Play with the code
The full code is available on GitHub. I wrote both a Python and a C++ implementation that you can compare. The Python code is meant to be really simple and not optimized at all (e.g. it even streams the set from disk when you do a query). The C++ version is a little more optimized, though still mostly an academic example. Notice that the code does not implement the index to speed up queries: even though it’s a good deal, it’s not mandatory, just an optimization.
The algorithms behind OTP tokens
A friend asked me some time ago how his bank’s OTP token worked. Most tokens used by banks (at least in Italy) are products of the “RSA SecurID” family, which are proprietary and secret (and rumored to have been compromised), but the general cryptography behind them is well known, and there are open standards that can be easily deployed by companies that want to add an extra layer of security. An emerging standard for OTP generation is called OATH, and given the general availability of free implementations, I will detail its algorithms in this post.
Some background on CSPRNG
OTPs (one-time passwords) are based on the concept of a so-called cryptographically-secure pseudo-random number generator (CSPRNG). As many programmers know, pseudo-random number generators (as found in the standard library of most programming languages, such as C’s rand()) are algorithms that generate a repeatable sequence of numbers that are “random looking”; there are several ways to measure how random a sequence is, but the important property that differentiates a PRNG from a CSPRNG is not really concerned with randomness per se, but rather with how easy it is to predict the next number just by looking at the previous ones. This is important in the OTP context because an attacker might well get to know the previous numbers generated by the system (e.g. through a key-logger installed on the user’s computer), so it is paramount to make sure that they cannot exploit this knowledge to generate the next number.
Once we have chosen a suitable CSPRNG, it is sufficient that the server and the client agree on the “seed” to be used for the sequence: both sides will then be able to independently generate the same sequence, and it becomes easy for the server to check whether the numbers generated by the client match the ones it generates by itself. As long as the algorithm is truly secure and the “seed” is not leaked, the ability to generate the next number can be considered a good authentication mechanism, since nobody else should be able to generate the same sequence. By embedding the seed into an offline physical device (the token), leaking the seed becomes almost impossible. If the device is stolen, it is sufficient to revoke it and assign a new device (with a new “seed”) to the user.
I’m putting the word seed between quotes because it does not tell the whole truth. Any PRNG algorithm works by keeping an internal state; when the next number is requested, the algorithm manipulates (changes) the internal state in some way, and then produces a number as the result of this computation. In the simplest of all random algorithms (the one used by many implementations of C’s rand()), the internal state is simply the previous number that was generated, but this is obviously not good enough for a CSPRNG. What we want is a way to generate a number from an internal state without leaking any information (if possible at all) about that state. The “seed” is just a way to initialize the internal state, but if the PRNG is not secure, it will eventually leak enough information to let an attacker reconstruct the internal state, even if the seed itself was never leaked.
Getting to the code
So, how do we make sure that we can generate a number from a state without leaking information? What we are looking for is called a one-way function. Luckily, in cryptography we have plenty of one-way functions available: they are called “secure hash functions”, MD5 and SHA1 being the most popular. If we decide that we trust SHA-1 as a good one-way function (that is, we accept that our CSPRNG is as strong as SHA-1, and broken as soon as SHA-1 is broken), then we don’t need a complex internal state: we can simply use a progressive counter. SHA-1’s properties in fact already guarantee that, given SHA1(N), there is no way to reconstruct N; and for every N’ very “similar” to N (e.g. obtained by bit-flipping, or by a simple increment), there is no correlation at all between SHA1(N’) and SHA1(N).
So the sequence SHA1(0), SHA1(1), SHA1(2), etc. can be considered a CSPRNG. But it is just one sequence; it’s true that it can be arbitrarily long, but how can we obtain different sequences for different users? The first solution that comes to mind is to use the same technique that is used to differentiate hashes of the same password: a salt. If we assign each user a random salt S, we can prefix it to the counter and obtain a unique sequence by computing SHA1(S + N). Let’s see this at work with some basic Python:
import os
from hashlib import sha1

def OTP(salt=None, n=0, digits=6):
    if salt is None:
        salt = os.urandom(8)
        print "Salt:", salt.encode("hex")
    while True:
        hash = sha1(salt + repr(n)).hexdigest()
        num = long(hash, 16) % (10**digits)
        yield "%06d" % num
        n += 1

o = OTP()
Each call to next() on the generator object produces the next six-digit code in the sequence (the randomly generated salt is printed on the first call).
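For instance, a quick way to pull a few codes out of the generator above (the actual digits will vary with the random salt, so I’m not reproducing a sample output here):

for _ in range(3):
    print o.next()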
Up until now I have used the word “salt”, but this is in fact incorrect. The term salt is supposed to indicate a random string of bytes which is generated for each invocation of the hash/cipher. In the CSPRNG case, instead, the random data is generated once per user and reused throughout the life of the sequence. Together with the counter, this “salt” is actually the secret state of the algorithm. The correct term for that string is “secret key”; if it is leaked, the CSPRNG is basically broken (since recovering the current counter is just a matter of generating the whole sequence until a match is found).
The real OATH (RFC 4226)
OATH is basically the above algorithm, with only a few variations to make it more secure in the face of future attacks on the underlying SHA1 hash function:
- “Mutating” a hash through a secret key is a common need in cryptography, and there is a stronger algorithm to achieve it: HMAC. HMAC-SHA1(S,N) is basically the stronger version of SHA1(S+N), with S fixed and N changing. OATH uses HMAC-SHA1.
- A longer (20-byte) secret key is used, as required/suggested by HMAC-SHA1.
- The counter is always converted to an 8-byte string (its big-endian binary representation). The code above was using repr() as a simple way to obtain a string from the counter.
- To extract the final 6 digits that form the OTP, the code above converted the whole SHA1 digest (20 bytes) into a number and then took it modulo 1,000,000. This means that it basically used the last few bytes of the digest and discarded the rest. This is not a problem right now, since SHA1 is still secure, but to minimize risks OATH describes a “dynamic truncation” that takes all 20 bytes into account (a sketch follows below). The exact details can be read in the specification.
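Putting the pieces together, here is a minimal sketch of the resulting HOTP computation described by RFC 4226 (the function name and parameters are my own):

import hmac
import struct
from hashlib import sha1

def hotp(key, counter, digits=6):
    # 8-byte big-endian counter, hashed with HMAC-SHA1 under the secret key.
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, sha1).digest()
    # Dynamic truncation: the low nibble of the last byte selects an offset,
    # and the 31 bits starting there become the code.
    offset = ord(digest[-1]) & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return "%0*d" % (digits, code % (10 ** digits))

print hotp("12345678901234567890", 0)   # RFC 4226 test vector: 755224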
Even with the above changes, OATH is quite easy to implement, and in fact there exist many different free implementations. For instance, you can download a generic OATH client for both iOS and Android, which is a very good substitute for a hardware token (with just the inconvenience that it is much, much easier to extract the secret key if an attacker has physical access to the device for a while).