A piece of open source software running on Alice's computer exchanges keys with a piece of open source software running on Bob's computer. Later Alice and Bob exchange messages encrypted with those keys through Charlie's server.
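The key exchange above can be sketched with a toy Diffie-Hellman over a small prime group. This is illustrative only, not the actual protocol any messenger uses; real systems use authenticated curve-based exchanges like X25519, and the prime here is far too small to be secure.

```python
import secrets

# Toy Diffie-Hellman parameters -- deliberately tiny, insecure, and
# chosen only to show the shape of the exchange.
P = 4294967291  # a small prime (2**32 - 5); real groups are vastly larger
G = 5           # generator

# Each side picks a private exponent that never leaves its own machine.
alice_secret = secrets.randbelow(P - 2) + 1
bob_secret = secrets.randbelow(P - 2) + 1

# Only these public values transit Charlie's server.
alice_public = pow(G, alice_secret, P)
bob_public = pow(G, bob_secret, P)

# Each side combines its own secret with the other's public value
# and arrives at the same shared key; Charlie saw neither secret.
alice_key = pow(bob_public, alice_secret, P)
bob_key = pow(alice_public, bob_secret, P)

assert alice_key == bob_key
```

Charlie relays `alice_public` and `bob_public` but cannot derive the shared key from them, which is exactly why the warrant in the next paragraph fails.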
Eve, a police officer, has evidence that Alice and Bob are messaging each other about crimes and obtains a warrant requiring Charlie to intercept their communications. Charlie has no ability to do so because the messages are encrypted with keys known only to Alice and Bob.
If you want a different result, someone has to proactively change part of this process. Which part should change?
One option is to mandate that any encrypted messaging software also give a key to the government or the government's designee, but someone using open source software can modify it so that it doesn't do that, which would be hard or impossible to detect without a forensic search of their device.
Another option is to mandate that a service provider like Charlie only deliver messages after verifying that it can decrypt them. This, too, is hard to enforce because users can layer additional encryption on top of the existing protocol. Signal's predecessor TextSecure did exactly that over SMS.
Both of those options would introduce a serious security vulnerability if the mechanism for accessing the escrowed keys were ever compromised. Would you like to suggest another mechanism?
About the only thing I can think of is to mandate the use of (flawed) AI to identify messages that seem nonsensical and refuse to pass them, and then to play a game of Chinese-style DPI whack-a-mole in an attempt to suppress open alternatives.
If you have the ability to run custom software—even if it’s a bash script—you can develop secure alternatives. And even if you somehow restrict open source messaging, I can just use a good old pen-and-paper one-time pad (OTP) to encrypt the plaintext before typing it in, or copy/paste text pre-encrypted in another program. But even then, all this will do is kick off a steganographic arms race. AI-generated text where the first letter of each word is the ciphertext may be nearly impossible to identify, especially at scale.
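The first-letter trick is easy to sketch. Here's a toy acrostic encoder: the wordlist is a hypothetical stand-in for an LLM generating fluent cover text, and the scheme hides only lowercase letters, but it shows why a content filter would struggle.

```python
import secrets

# Hypothetical per-initial wordlist -- a toy stand-in for AI-generated
# prose. A real system would prompt a language model for natural text
# constrained to these initials.
WORDS = {
    'h': ['having', 'hardly'], 'e': ['every', 'even'],
    'l': ['lately', 'little'], 'o': ['often', 'only'],
}

def hide(ciphertext: str) -> str:
    """Emit cover text whose word initials spell out the ciphertext."""
    return ' '.join(secrets.choice(WORDS[c]) for c in ciphertext)

def reveal(cover: str) -> str:
    """Recover the ciphertext by reading the first letter of each word."""
    return ''.join(word[0] for word in cover.split())

cover = hide('hello')
assert reveal(cover) == 'hello'
```

The cover text itself is innocuous; a filter would have to reject ordinary-looking prose to block it, which is why this ends in an arms race rather than a clean mandate.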
If anything like this were to pass, my first task would be making a gamified, user-friendly frontend for this kind of thing.