Encryption & the King of Denmark

An old envelope with a red wax seal

It is not very hard (but it is much harder than it should be) to set yourself up to receive encrypted emails. I am using Mailvelope, which adds encryption to Gmail. The idea is that you publish a public key (mine is here) which other people can use to encrypt a message to you; because you are the only person with the corresponding private key, only you can decode the message.

I think it would be a good idea for most people to use encryption most of the time. This makes it less likely that information is accidentally released (such as your bank account details) and also makes it harder for the state (or other people’s states) to intercept our communications.  And even though a lot of what we send by email could safely be transmitted en clair, if we encrypt everything then that makes it harder for other people to find the stuff we don’t want them to see among all the ephemera. (It would be nice to say at this point that King Christian X of Denmark thwarted the Nazi order that all Jews should wear armbands with yellow stars by wearing one himself, but sadly that story isn’t true.)

But I know very few other people who are set up to receive encrypted emails, and hardly anybody ever sends me anything encrypted.

Why is that? It isn’t that difficult to set up GPG. One reason is that there is apparently no way to go through all your contacts automatically and check whether they have published a public key that you can use to send them messages. So if my friends do have public keys, I would have to find that out manually. Ideally, Mailvelope would automatically identify which people have a public key and encrypt messages to them, without me needing to intervene.
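As a sketch of what that automatic sweep could look like: public keyservers expose the HKP lookup protocol, so a client could ask, for each address in your contact list, whether a key has been published. The contact addresses below are placeholders, and keyserver.ubuntu.com is just one example of a public keyserver.

```python
# Sketch: checking a contact list against an HKP keyserver.
# The addresses are placeholders; the keyserver is one example.
from urllib.parse import urlencode

KEYSERVER = "https://keyserver.ubuntu.com"

def hkp_lookup_url(email, op="index"):
    """Build an HKP lookup URL asking the keyserver whether a
    public key is published for this address (the 'index' op)."""
    query = urlencode({"op": op, "search": email, "options": "mr"})
    return f"{KEYSERVER}/pks/lookup?{query}"

contacts = ["alice@example.org", "bob@example.net"]
for address in contacts:
    # Fetching each URL (e.g. with urllib.request) and checking for
    # a 200 response would tell us who has a published key.
    print(hkp_lookup_url(address))
```

A mail client could run this sweep in the background and quietly turn on encryption for every contact it finds.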

Even better, though, would be to build encryption into the email infrastructure of the internet. This would have three big advantages: it would mean all email is encrypted by default, so massively increasing the size of the encrypted haystack within which people would have to search for valuable information; it would make encryption frictionless for users; and it would mean that the email metadata (who the message is from, who it is to, the subject line) would be encrypted as well, making it impossible for authorities to collect all our email metadata as they do now.

The way it would work is this. Each domain would publish a public key for its email server, as part of its MX record (that’s the information each domain already publishes so that we know where to deliver the mail). The message transfer agent on the sending domain (e.g. Sendmail) would encrypt the entire message, including the metadata, using the public key of the recipient domain. The recipient email server would decrypt the message on arrival (using its private key), so finding out which of its users the message is for and where it came from, conduct the usual spam checks, and then deliver it to the relevant user. If we did this, then all the email traffic on the internet would be encrypted, with only the destination domain visible to anyone watching the traffic go by. Individuals who were not confident about the security of their email services could of course add their own layer of encryption on top as well.
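A toy sketch of that flow, using textbook RSA with tiny primes (illustrative only, in no way secure: no padding, tiny modulus), with an in-code dictionary standing in for the extended MX record:

```python
# Toy sketch of the proposal: each domain publishes a public key
# alongside its mail-routing record, and the sending MTA encrypts
# to the recipient domain's key before the message leaves.

def make_keypair(p=61, q=53, e=17):
    """Generate a toy RSA keypair from two small primes."""
    n = p * q
    phi = (p - 1) * (q - 1)
    d = pow(e, -1, phi)          # modular inverse (Python 3.8+)
    return (n, e), (n, d)        # public key, private key

public_key, private_key = make_keypair()

# Stand-in for DNS: MX data extended with the domain's public key.
dns = {"example.org": {"mx": "mail.example.org", "pubkey": public_key}}

def mta_send(recipient_domain, message_int):
    """Sending MTA: look up the recipient domain's published key
    and encrypt the whole message, metadata included."""
    n, e = dns[recipient_domain]["pubkey"]
    return pow(message_int, e, n)    # ciphertext

def mta_receive(ciphertext):
    """Receiving MTA: decrypt with the domain's private key, then
    do the usual spam checks and deliver to the right local user."""
    n, d = private_key
    return pow(ciphertext, d, n)

c = mta_send("example.org", 42)      # 42 stands in for message bytes
assert mta_receive(c) == 42          # eavesdroppers only ever see c
```

In a real deployment the MTA would use hybrid encryption (a fresh symmetric key wrapped with the domain key) rather than raw RSA, but the routing logic is the same: the only thing left in the clear is the destination domain.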

Until recently this level of encryption would have been computationally too expensive for senders and receivers; I don’t think it is now (perhaps someone can work out if this is right?). This approach might also have the side benefit of adding a computation tax to computers sending large quantities of spam.

This seems a pretty straightforward way for the geeks to subvert the surveillance state. Can anyone tell me the flaw in this idea, please?

9 thoughts on “Encryption & the King of Denmark”

  1. I see no flaw, and am – in general – very sympathetic to the growing need to encrypt e-mail if we want any semblance of privacy. I think many people don’t realize how insecure e-mail is. (Or they fall to the next-best excuse, “it’s too much trouble + I have nothing to hide”…)
    I don’t follow your technical argument, and so apologies if my comment is off-point: but, if encrypted e-mail was so easy (i.e. just a technical solution away), why did we see encrypted e-mail providers like Lavabit fold under government pressure? I mean this as a genuine – rather than rhetorical – question. My understanding of the Lavabit closure was that it embodied Snowden et al.’s assertion that end-point security is weak, and the entry point of snooping. And that e-mail encryption attracts attention (I think Tor once warned about this?), which can then manifest itself in pressure on those end-points.
    Again, my technical understanding of digital security is VERY fuzzy, and so apologies if this doesn’t make any sense.

    1. Thanks Angela. Your question makes perfect sense.

      You are right that email encryption attracts attention: that is because it is so rarely used. If some version of my idea here were adopted, this problem would disappear because all email would be encrypted, and so encrypted mail would not stand out within it. (That is my point about the King of Denmark.)

      Lavabit’s problem was that the US government demanded that Lavabit should provide the key needed to decrypt information stored on its servers; and (to his credit) Ladar Levison said that he would prefer to end the business than to comply with this request. Clearly, what I have described would fall foul of the same problem: if the Government insists that service providers hand over the decryption keys, then no encryption is secure. But at the moment, the government does not usually need to force service providers to hand over encryption keys, since most mail and all the metadata are not encrypted at all. My proposal here would mean that in future the government could only access our emails by forcing service providers to hand over the keys.


      1. Owen, I was about to make the same point you did in your first comment – your proposal still relies on hosting providers “averting their eyes” and being unwilling to share their private keys, which, given the current legal mechanisms in the USA (as I understand them), is often unlikely.
        The overall effect could still be positive, though.
        This kind of encryption would not be computationally expensive on modern hardware (taking SSL / TLS as an example), but there is an additional “handshaking” overhead with key exchange / session setup; that matters for websites, but for asynchronous traffic like email it is not an issue.
        I do see the appeal of “fixing the system” rather than “fixing user behavior”, but I don’t think there is a good option for a privacy-guaranteeing “system fix”. I’d still advocate that people use mail clients that support end-to-end encryption and, often even easier, use mobile / desktop instant-messaging services with OTR support and no logging (I do most quick friends & family communication like this).

  2. First, Angela: the technique Owen is describing is different from what Lavabit was using. With PGP (or its free implementation, GPG), there is no one master key to decrypt all emails.
    Owen: you’re missing the key part about encryption: validating the public key. For every person you want to encrypt communications with, you should first meet in person to share public keys. (This is what the “fingerprint” is for: human verification.) If you don’t meet in person, then the NSA can be a man in the middle: you encrypt to a planted key, and then the NSA decrypts the message and re-encrypts it using the target’s real key.
    The part where people share public keys is the dangerous part of PGP. All of PGP’s security hinges on it.
    The same applies to DNS. DNS is an extremely insecure protocol; it would be easy for the NSA to spoof an MX record like the one you propose.
    I think your general point is right: we need to make this easy. But security wonks aren’t keen to promote insecure practices — and for good reason. If we can find a secure way to share PGP keys (Google Hangouts!), things might get a lot better.

    1. Adam – thanks. That is an excellent point about “man in the middle” attacks.

      Is it possible that if each mail domain has a public key, it is easier for service providers to validate each other’s public keys than it is to rely on completely decentralised end-to-end encryption? Larry Page and Marissa Mayer could exchange their public keys in person and that would cover all Google and Yahoo users – wouldn’t that be more practical than all their users having to exchange keys with each other individually? (Of course, individual users can do that too – these layers of encryption are additive.)
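Adam's in-person verification step can be sketched like this: derive a short digest of the key bytes that two people can compare aloud. (Real OpenPGP v4 fingerprints are computed over a specific key-packet serialization, per RFC 4880; this SHA-256 version just shows the idea, and the key below is a placeholder.)

```python
# Sketch of fingerprint verification: a short human-comparable
# digest of a public key, grouped so two people can read it aloud.
import hashlib

def fingerprint(key_bytes):
    """Short digest of a public key for manual comparison."""
    digest = hashlib.sha256(key_bytes).hexdigest().upper()
    # First 40 hex chars in 4-char blocks, like a PGP fingerprint.
    return " ".join(digest[i:i + 4] for i in range(0, 40, 4))

def keys_match(received_key, spoken_fingerprint):
    """The fingerprint computed from the key we received must match
    the one its owner reads out in person. If an attacker planted a
    substitute key, the fingerprints will not match."""
    return fingerprint(received_key) == spoken_fingerprint

alices_key = b"-----BEGIN PGP PUBLIC KEY BLOCK----- ..."  # placeholder
print(fingerprint(alices_key))
```

The same mechanism could back the domain-level version: two providers exchange their servers' key fingerprints out of band, and every message between their users inherits that one verification.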


  3. Owen, 
    Good news! Server-to-server encryption already exists, using SSL instead of PGP, but fundamentally the idea is very close to yours. Although, if you are worried about government intrusion, you also need to question the security of SSL certificates issued by third parties (currently a cornerstone of SSL).
    http://en.wikipedia.org/wiki/STARTTLS   (the email command that enables TLS during the server to server communication phase)
    HOWEVER, the fundamental problem in this process is where you add and remove SSL. The further away from the user that you apply encryption the less the user can trust that it hasn’t been compromised. The Snowden leaks have shown us that the NSA is willing to target the entire infrastructure of a company in order to reach their target. The Google smiley face slide shows them attacking Google’s front end server in order to access messages before encryption is applied. There is little reason to believe other governments wouldn’t be willing to do the same.
    In the past I’ve worked with a few companies that required their major vendors to enable TLS encryption in email or the server would reject the message – mostly banks, but also one American tobacco company.
    And yes, Cgdev.org does support TLS communications with other mail servers.

      1. Owen: again, you’re the voice of common sense and I’ll stand in as the voice of pragmatism 😉 :
        As Michael said, SSL is a more efficient way to encrypt communications between servers. To my knowledge those communications already are encrypted: it’s certainly doable, and Google has shown every sign of wanting to.
        But most email servers are run by small companies, and small companies’ IT people are bad at security. It needs to become easier, faster and cheaper to configure STARTTLS and sign a key. Until then, SSL won’t be for everybody.
        And of course, it has weaknesses. In effect, SSL centralizes public keys: when server A wants to encrypt emails to send to server B, the servers validate one another’s identity through an SSL certificate authority (like Verisign). But there’s an obvious flaw: if the certificate authority is hacked (like DigiNotar was), then the NSA can masquerade as the recipient (or sender) by signing a phony SSL key. A breach like this is hard to detect; for all we know, all the major CAs are already compromised.
        So SSL and PGP have the same problem: how can we validate identities? If we do it on a small scale (I and my friends build a circle of trust), it’s too much work. If we do it on a large scale (Google and Yahoo encrypt everything with SSL), it creates opportunities for massive security breaches.
        So it’s hard. And then there’s the legal angle. For any company along the chain, the government can (try to) subpoena anything it can read. So if Google encrypts all its emails, then the NSA just has to, erm, ask them to decrypt them. In Lavabit’s case, the government even subpoenaed the encryption key itself.
        The only blanket solution is education. Every email user needs to conduct a threat assessment: find out who might want what information, and how bad the damage would be if they got it. Then they need to learn techniques to plug up the attack vectors.
        The Snowden leaks have led to progress on two fronts. First, education: people have learned that their governments seem to be bent upon acquiring all their personal data, and many are starting to question why; hopefully many will educate themselves. Second, the leaks have clarified that courts are much too eager to issue overly broad and under-ly public subpoenas; hopefully legislators will react.
        But we have a long way to go between these advances and everybody installing a metaphorical lock on his or her data’s metaphorical home.
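Michael's STARTTLS point and Adam's worry about certificate authorities can be combined in one sketch: a strict TLS context for the STARTTLS upgrade, plus pinning the peer certificate's hash (obtained out of band), so a rogue CA signature alone is not enough. The host name and fingerprint are placeholders, and the network call is shown only in comments.

```python
# Sketch: hardened STARTTLS for SMTP with certificate pinning.
import hashlib
import ssl

def strict_smtp_context():
    """A TLS context for STARTTLS that refuses unverified peers."""
    ctx = ssl.create_default_context()   # CERT_REQUIRED by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    ctx.check_hostname = True
    return ctx

def pin_matches(der_cert_bytes, pinned_sha256_hex):
    """CA-independent check: compare the peer certificate's SHA-256
    against a fingerprint we obtained out of band. A compromised CA
    can sign a phony certificate, but it cannot forge this hash."""
    return hashlib.sha256(der_cert_bytes).hexdigest() == pinned_sha256_hex

# Usage (network call, shown for context only; host is a placeholder):
#   import smtplib
#   with smtplib.SMTP("mail.example.org", 587) as smtp:
#       smtp.starttls(context=strict_smtp_context())
#       der = smtp.sock.getpeercert(binary_form=True)
#       assert pin_matches(der, KNOWN_FINGERPRINT)
```

Pinning trades convenience for trust: it sidesteps the CA problem Adam describes, but someone still has to distribute and rotate the pinned fingerprints, which is the same key-validation problem in another guise.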

  4. Dear Owen
    Through my smartphone news feed I found a link to this blog today. Wonderful! It seems others have similar ideas. I read your suggestions in the first post as to how privacy could be guaranteed in email, and I agree. In fact, half a year ago (or whenever it was the Snowden revelations came out), I threw aside another project and started working on just that.
    There are a couple of things to get right. End to end encryption of email requires both the transport and the endpoints to be secure.
    The endpoints, usually a user’s computer or phone, are a little tricky to guarantee privacy on, since, ultimately, a malicious screen grabber could send information on to anyone snooping on the conversation. And it takes some user discipline with regard to security – even if the content is encrypted on the device it resides on.
    The transport, however, is easier, related to standards and protocols, and the idea you describe is part of what I do. SSL might be compromised by NSA, and is therefore to my best guess insufficient. So I do a little more, adding several layers of encryption on the individual entities involved in the transport (usually just three, the sender, the recipient, and the recipient’s ISP where the mail will await final download in a mail server – as did LavaBit, Gmail etc.). And I also use extensions to DNS for distributing public keys.
    In LavaBit’s case, which I see being discussed above, the issue is really (or was, rather) that the email residing on his server, awaiting download, is encrypted with a static local key that the good man Levison is (was) carrying in his pocket, so to speak. This makes the ISP (LavaBit) vulnerable to demands from the NSA and the like, who can force cooperation (make a copy of the key, to stay with the metaphor).
    My way is different. The mail will be encrypted in several layers, and any intermediary storage (such as an ISP) will not have full access to content. It will, at least at some point, have to have access to (some of) the metadata in order to route and handle the mail correctly. This could for example be handled only in memory, using public keys in a smart way, so that nothing stored on the ISP’s server is really useful to snoopers et al., thus to quite an extent taking the ISPs (the email service providers) out of the privacy-breaking equation. This I am doing.
    I agree wholeheartedly with your point, Owen, that if enough of the electronic correspondence on the Internet is encrypted, even only weakly encrypted, the time it takes for our so-called “good guys” to decrypt messages and home in on the “bad guys” is considerably longer; they will therefore have to focus on the real bad guys and hopefully leave benign but possibly interesting political or trade-related “targets” among their own citizens and allies (presidents included) out of it. I would very much like to contribute to that, and I am working on it.
    Happy to see that others feel as I, and exhilarated that good ideas pop up all over the world in the wake of this, well, scandal. LavaBit and Silent Circle are probably working on similar approaches, judging from the new webpage for Dark Mail Alliance (http://darkmail.info).
    What is really needed, in my opinion, is a complete re-think of the IP protocol, replacing it with a secure version where encryption and key exchange happen at the packet transport level, perhaps with chaotic packet distribution. People like Bruce Schneier and many other prominent people have been saying things along those lines lately, and the W3C and IETF are working on it, I know. It will be a couple of interesting Internet tech years ahead, at least for a geek like me 🙂
    Cheers from snow-white Switzerland
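The layered scheme this commenter describes can be sketched as onion-style encryption: wrap the message once per hop, and let each hop peel only its own layer. The XOR "cipher" and the hop keys below are toys for illustration, not a secure construction.

```python
# Toy sketch of layered ("onion") encryption: each hop peels one
# layer and sees only what it needs to route the message. XOR with
# an HMAC-derived keystream is NOT a secure cipher -- toy only.
import hashlib
import hmac

def keystream(key, length):
    """Derive a deterministic keystream from a hop's key."""
    out, counter = b"", 0
    while len(out) < length:
        out += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor_layer(key, data):
    """XOR cipher: applying it twice with the same key removes it."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Hypothetical hop keys: sender -> recipient's ISP -> recipient.
isp_key, recipient_key = b"isp-secret", b"recipient-secret"

message = b"meet at noon"
# Innermost layer for the recipient, outer layer for the ISP:
onion = xor_layer(isp_key, xor_layer(recipient_key, message))

# The ISP peels its layer (enough to route) but still cannot read:
at_isp = xor_layer(isp_key, onion)
assert at_isp != message
# Only the recipient can peel the final layer:
assert xor_layer(recipient_key, at_isp) == message
```

A real design would use authenticated public-key encryption per layer, but the structure is the point: nothing stored at the intermediary is readable on its own, which is what takes the ISP out of the privacy-breaking equation.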
