Of course a chat scanner is useless! Anyone who keeps tabs on encryption or privacy on the web knows that. So that’s settled.
Uh, sorry? The European Union wants to force the scanning of every private message, picture or video we exchange on messaging applications? That will protect our children against abusers, they say? Aaaaaawwwwww maaaaaaan, here we go again.
Dear EU, listen to me very carefully. It. Will. Not. Work.
TL;DR: if someone needs to hide something, they can encrypt it outside the messaging app. And you won’t be any wiser. On the other hand, the chat scanner will trump legitimate Human Rights and create endless annoyances for users.
That should be clear enough. But if you still don’t get it, read what follows.
The right to a private conversation
Once upon a time, chats essentially behaved like a forum with some authorization on top. They didn’t feature any encryption. They were nice, but people with the right access could read your messages. People like system administrators, or hackers.
Then someone thought privacy was a Human Right. Hey, that’s you, European Commission! Thank God, encryption allows us to enforce that privacy on messaging systems. Why? Because with end-to-end encrypted chats, only the endpoints (the people chatting) can decrypt the exchanged information. When done well, even system administrators cannot read your secret stuff. Now that’s what I call privacy!
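If you want to see what “only the endpoints can decrypt” means in practice, here is a minimal sketch using the PyNaCl library. The names and the message are made up, and real messengers layer more elaborate protocols on top of the same idea.

```python
from nacl.public import PrivateKey, Box  # pip install pynacl

# Each endpoint generates its own key pair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet me at 8")

# The server only ever relays this ciphertext; it cannot read it.
# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet me at 8"
```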
Of course, some service providers cheat so that they can secretly access your stuff. Bad providers, bad! But overall, end-to-end encryption protects our Human Rights. So cool!
So… why does the European Commission want to break that?
The Chat Scanner Is Useless
Guess what: Google has already tried chat scanners. They don’t work. Apple tried them too, and they caused quite a stir among… child protection experts!
Truth is, not only will this scanning system bring a heap of false positives, it will also undermine the whole concept of end-to-end encryption. To understand that, you must understand how the chat scanner works. When you send a picture, your messaging app first hashes the picture. In short, it generates a fingerprint of that picture. Then it asks the server: “does this fingerprint match anything in a database of child exploitation imagery fingerprints?” If it matches that list of “forbidden fingerprints”, the system can block the image.
Yay?
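To make that flow concrete, here is a minimal sketch of what the client-side check boils down to. The names are mine, and a plain SHA-256 digest stands in for the fingerprint; real deployments use perceptual hashes such as PhotoDNA or Apple’s NeuralHash, which also match slightly altered copies.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash: a plain SHA-256 digest of the picture.
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical blocklist curated by... someone.
forbidden_fingerprints = {fingerprint(b"<bytes of a known bad picture>")}

def may_send(image_bytes: bytes) -> bool:
    """Return False (block the upload) if the picture matches the blocklist."""
    return fingerprint(image_bytes) not in forbidden_fingerprints

print(may_send(b"<your holiday photo>"))            # True: goes through
print(may_send(b"<bytes of a known bad picture>"))  # False: blocked
```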
It Breaks Encryption
Hold on, how do you build that database of forbidden fingerprints? You take a bad picture, you hash it and you add it to the database. Right, but what if someone adds a picture of, say, the Statue of Liberty in there? Well, I’ll fix the database: I open the list of fingerprints and… ah… uuuh. There it is: I can add pictures to build a list of fingerprints, BUT I cannot see which picture a fingerprint corresponds to! So how can I fix the database?
Fact is, you cannot distinguish the hash of a despicable picture from the hash of a legitimate one. What if a government orders service providers to add a series of hashes, without the providers knowing which pictures the fingerprints refer to? We get censorship! Che Guevara? Nein! L’Origine du monde? Nein, nein. A picture of Obama? What if those pictures DO end up in the hash/fingerprint database?
That’s scary stuff!
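To see why nobody can audit such a list, consider this sketch (again with SHA-256 standing in for the real perceptual hash, and made-up byte strings standing in for pictures): the curated database is nothing but opaque digests.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Two very different pictures...
despicable_picture = b"<bytes of a despicable picture>"
statue_of_liberty = b"<bytes of the Statue of Liberty>"

# ...produce equally opaque, equally irreversible fingerprints.
print(fingerprint(despicable_picture))
print(fingerprint(statue_of_liberty))

# If a government hands the provider extra digests to add, nothing in the
# digests themselves tells the provider (or any auditor) what they depict.
mandated_additions = {fingerprint(statue_of_liberty)}
forbidden_fingerprints = set() | mandated_additions
```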
It gets worse: what if the server could actually know which image generated each hash in that database? Then the server (and the people managing it) could know which image you exchanged! Maybe not a photo you took (then again), but it could identify a well-known picture of Sacha Baron Cohen. It knows you exchanged that fingerprint, so it knows you’re discussing Sacha Baron Cohen. It has effectively broken encryption.
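Here is that reverse scenario as a sketch: if the operator keeps the mapping from fingerprints to known pictures (all pictures and labels below are made up), every reported match identifies what you sent, even though the message body stayed end-to-end encrypted.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical server-side index: the operator remembers which picture
# produced each fingerprint in its database.
known_images = {
    fingerprint(b"<well-known photo of Sacha Baron Cohen>"): "Sacha Baron Cohen photo",
    fingerprint(b"<well-known protest photo>"): "protest photo",
}

def what_did_they_send(reported_fingerprint: str) -> str:
    # The ciphertext stays unreadable, but the reported fingerprint alone
    # is enough to identify a well-known picture.
    return known_images.get(reported_fingerprint, "unknown image")

print(what_did_they_send(fingerprint(b"<well-known photo of Sacha Baron Cohen>")))
```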
You could apply the same principle to text messages. A scanner could check hashes of words against a database, to see whether the user is typing red-flagged words. That means Google, Apple and Facebook will not only know which pictures you exchange, they will also know what you’re discussing. End-to-end encryption is broken again. Will they be clever enough to distinguish between “That tune is da bomb!” and “I’m building the bomb”? I have my doubts.
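A toy version of such a word scanner makes the false-positive problem obvious (the red-flag list and the hashing of lowercased words are my own assumptions):

```python
import hashlib

def fingerprint(word: str) -> str:
    return hashlib.sha256(word.lower().encode()).hexdigest()

# Hypothetical red-flag list, stored as hashes of the flagged words.
red_flags = {fingerprint("bomb")}

def looks_suspicious(message: str) -> bool:
    # Hash every word and compare it against the red-flag list.
    return any(fingerprint(word.strip("!.,?")) in red_flags
               for word in message.split())

print(looks_suspicious("That tune is da bomb!"))   # True: a false positive
print(looks_suspicious("I'm building the bomb"))   # True
```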
Really, It Is Useless
But say that you, dear European Commission, find the ultimate way to keep these hashes unidentifiable… You want to stop child abusers, right? Well, you won’t be able to.
Because anyone who wants to hide something just needs to encrypt it… before sending it with a messaging system! Any child abuser can go here or here, encrypt a whole load of despicable pictures, and send them as encrypted characters over WhatsApp. In fact, let’s just use any open forum and put the stuff there! Encrypt your files, copy-paste the gibberish in there and voilà: anyone you’ve shared the decryption key with will be able to get the content.
“That’s horrible, let’s take down those websites that allow you to encrypt files.” Ok then. I’ll use one of these offline tools to encrypt and decrypt files right from my desk. And no one’s the wiser.
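To drive the point home: a few lines of Python with the off-the-shelf cryptography library do the whole job offline (the file content below is just a made-up placeholder).

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key and share it out of band (in person, over another channel...).
key = Fernet.generate_key()

# Encrypt the content *before* it ever touches a messaging app.
ciphertext = Fernet(key).encrypt(b"<bytes of any picture or file>")

# The result is printable base64 gibberish: paste it into WhatsApp,
# an open forum, anywhere. A chat scanner only ever sees this.
print(ciphertext.decode())

# Anyone holding the key recovers the original content.
assert Fernet(key).decrypt(ciphertext) == b"<bytes of any picture or file>"
```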
So you see, dear European Commission, the chat scanner is useless indeed. It will mainly bother honest users who want to have a legitimate, private discussion. As for child abusers, they will happily keep exchanging their content without being bothered. And yes, I do agree it’s a horrible state of affairs. But chat scanners won’t solve it.
Featured Image by Maurisa Mayerle from Pixabay