"AWJTSSK JQS GBQKWSK LYMSE EJBCG SPEC QPFYEYQD
MYHGC PRPYC JWKSWE CWI PQTGJW EPFS VBSM
AWAJASTCE HJJS.
BLACKCAM."
Flush with oil, the Texas Ranger sailed slowly up the great Hudson River, headed for the port of Albany. Except, the letters plucked from the ether revealed, it wasn’t the Texas Ranger but the Holmswood, and there was no oil in its tanks, only 20,000 cases of bootleg liquor. The codes slowly revealed an incredible story of two great bootlegging empires: one run by Chester Hobbs and his sidekick Tony ‘The Hat’ Cornero; its rival, Consolidated Exporters, owned by the elegant businessman Henry Reifel and financed by a colourful Boston magnate, Joseph Kennedy, the father of a future President of the United States of America.
“Even though I have never seen your codebook,” the great cryptanalyst Elizebeth Friedman quietly noted in her diary in 1929, as she pored over the reams of coded radio intercepts that Coast Guard officials brought to her desk, “I may read your thoughts.”
The brewing showdown between the Indian government and WhatsApp this week over end-to-end encryption is, in a key sense, the child of the war-without-end fought by Friedman and her colleagues. For millennia, codebreakers, or cryptanalysts, have battled codemakers, or cryptographers: the one seeking to rip the curtain apart; the other to draw a veil over the inner thoughts of criminals and governments alike. Now, though, that struggle is at a decisive historical crossroads.
Little noticed, the battle between New Delhi and WhatsApp is mirrored by similar struggles in the United States and Europe, where legislatures and governments are also seeking to impose limits on end-to-end encryption—which gives ordinary citizens privacy, but also grants unprecedented secrecy to criminals like those Friedman fought.
Even with a supercomputer, by one estimate, it would take 1.02 × 10¹⁸ years, a billion billion years, to crack a single Advanced Encryption Standard digital key. In the not-too-distant future, quantum computing threatens to make even such encryption far easier to break, but will also make theoretically unbreakable encryption possible. This is a new world, and there is no road map to navigate it.
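To see where that number comes from, a back-of-the-envelope sketch in Python, using the assumptions behind one widely circulated estimate (a 10.51-petaflop supercomputer spending roughly a thousand operations to test each 128-bit key; neither figure appears in this article's sources), reproduces it:

```python
# Back-of-the-envelope reproduction of the "1.02 x 10^18 years" figure.
# The inputs below are assumptions from one commonly cited estimate for
# brute-forcing a 128-bit AES key, not figures from this article.

KEYSPACE = 2 ** 128                 # possible 128-bit AES keys
FLOPS = 10.51e15                    # assumed supercomputer speed
FLOPS_PER_KEY = 1_000               # assumed cost of testing one key
SECONDS_PER_YEAR = 365 * 24 * 3600

keys_per_second = FLOPS / FLOPS_PER_KEY
years = KEYSPACE / keys_per_second / SECONDS_PER_YEAR
print(f"{years:.3g} years")         # ~1.03e+18 years
```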
The choice seems, at first, like a simple one: who wouldn’t, after all, want law enforcement to be able to combat child pornographers or terrorists? Who wants inflammatory rumours and misinformation to circulate freely? The global struggles over the issue, though, make clear just how complex it is, and how dangerous missteps might be.
Fears over the misuse of encryption, long voiced in the law-enforcement community, crystallised in 2015, when Syed Rizwan Farook and his wife Tashfeen Malik shot dead 14 people in San Bernardino, California, in an attack believed to have been inspired by the Islamic State. The Federal Bureau of Investigation (FBI) wanted Apple to give it access to Farook’s iPhone 5C. Apple argued it had no means to do so: there was no “backdoor” built in to access its heavily encrypted data.
Tim Cook, Apple’s CEO, is reported to have ferociously resisted pressure from the FBI to introduce a digital backdoor that would give law enforcement a way into the advanced encryption used by the company’s phones to protect their content. His logic was simple. A backdoor built to facilitate law-enforcement operations against narcotics traffickers would, inexorably, also be exploited by hackers perpetrating banking or credit card fraud.
As the Electronic Privacy Information Center’s Alan Butler has observed: “You cannot build a backdoor that only law enforcement can access. That’s not how encryption works.”
Last year, three influential United States senators—Lindsey Graham, Tom Cotton and Marsha Blackburn—introduced legislation seeking to compel companies to ensure law enforcement has access to encrypted information. The legislation is unlikely to pass in its present form; indeed, a rival legislative proposal seeks to grant companies immunity from liability for the misuse of end-to-end encryption.
For its part, the FBI appears to have found technical means to break strong encryption—at least on devices already in its possession—but the debate continues to rage.
Europe has seen similar debates, again driven by law enforcement. In 2015, following a terrorist attack, police in Austria demanded lawful means to monitor encrypted communication; those calls have since grown. Last year, a European Union draft council resolution asserted that means were needed “to ensure the ability of competent authorities in the area of security and criminal justice, e.g. law enforcement and judicial authorities, to exercise their lawful powers, both online and offline.”
Four technical means were proposed in a European Commission discussion paper on preventing the use of encrypted chat to distribute child pornography—but the principles are similar irrespective of the kind of content that governments seek to restrict.
The first involves installing artificial intelligence tools on a user’s client—for example, a smartphone or computer—to scan for illegal content and bar it from being encrypted. In theory, such a tool might detect prohibited categories of material, like pornography. The technology, though, has proven prone to false positives, and incapable of judging context: in 2016, for example, Facebook’s nudity-detecting AI was infamously implicated in blocking the iconic Vietnam War ‘Napalm Girl’ photograph.
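In outline, such a gate might look like the sketch below, where classify stands in for an on-device model; every name and threshold here is a hypothetical, not any vendor’s real API:

```python
# Hypothetical client-side gate for the first proposal. `classify` stands in
# for an on-device AI model scoring outgoing media; the function name and
# threshold are illustrative assumptions, not a real vendor API.

THRESHOLD = 0.9  # assumed confidence cut-off for "prohibited"

def classify(media: bytes) -> float:
    """Placeholder for an on-device model returning P(media is prohibited)."""
    return 0.0  # stub: a real model would score the content here

def may_transmit_encrypted(media: bytes) -> bool:
    """Permit encryption and transmission only if the model clears the content."""
    return classify(media) < THRESHOLD
```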
A second possibility involves storing a list of prohibited hash-values, periodically updated, on the client. Every body of data, whether a single file or an entire disk, generates an effectively unique digital fingerprint, known as a hash. If governments compile a list of prohibited files, which companies like WhatsApp must enforce, a user’s client could refuse to transmit matching content, or transmit it unencrypted, rendering it usable as evidence.
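A minimal sketch of that client-side check, with hypothetical names throughout, might look like this:

```python
import hashlib

# Sketch of the second proposal; all names are hypothetical. The client holds
# a periodically updated set of prohibited SHA-256 fingerprints and checks
# each outgoing file against it before encryption.
BLOCKED_HASHES: set[str] = set()    # in practice, pushed out by the operator

def fingerprint(payload: bytes) -> str:
    """A file's digital fingerprint: its SHA-256 hash, as a hex string."""
    return hashlib.sha256(payload).hexdigest()

def may_encrypt(payload: bytes) -> bool:
    """True if the file is absent from the blocklist and may be sent encrypted."""
    return fingerprint(payload) not in BLOCKED_HASHES
```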
Even though this method would preserve end-to-end encryption for legal content, it isn’t without problems. For one, users would have to trust manufacturers that only illegal content was being searched for on their clients. More important, it would only block content that had already been identified as illegal, making it of limited value in detecting or interdicting new material.
Perhaps most important, hash-based filtering is easy to defeat: a pornographic clip with a second of content deleted, for example, will generate an entirely different fingerprint from the original.
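The weakness is easy to demonstrate: delete even a single byte from a file and its fingerprint changes beyond recognition, as this short illustration shows:

```python
import hashlib

clip = b"stand-in for the bytes of a video clip"
trimmed = clip[:-1]                          # delete a single byte

print(hashlib.sha256(clip).hexdigest())      # original fingerprint
print(hashlib.sha256(trimmed).hexdigest())   # an entirely different one
```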
A third proposed method consists of matching hash-values on a server controlled by the company that runs the encrypted network. This allows for more thoroughgoing protection against illegal material: where the space of possible messages is small, for example, an operator could generate hash-values for every possible input to an illegal form, and pre-empt its encrypted transmission.
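A toy version of that enumeration, with invented cargo-manifest values standing in for the form, shows how an operator could read an enumerable message from its hash alone:

```python
import hashlib
from itertools import product

# Toy enumeration for the third proposal; the cargo items and quantities are
# invented. When a message can only take a few shapes, the operator can
# precompute a hash for every shape and match traffic against the table --
# in effect reading the message without ever decrypting it.
ITEMS = ["whisky", "rum", "gin"]
CASES = ["100", "500", "1000"]

HASH_TABLE = {
    hashlib.sha256(f"{item}:{cases}".encode()).hexdigest(): (item, cases)
    for item, cases in product(ITEMS, CASES)
}

def identify(message_hash: str):
    """Recover an enumerable message's plaintext from its hash alone."""
    return HASH_TABLE.get(message_hash)      # None if not in the enumerated space
```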
The problem, though, is that the same operator could just as easily scan traffic for perfectly legal messages, and pass some on in plain text without users ever becoming aware that this had been done. For all practical purposes, this would mean the whole point of end-to-end encryption had been defeated.
The fourth proposal, a variant of the third, leaves the task of matching hash-values not to the operator, but to an independent, trusted third party. This, however, simply shifts the burden of trust; it does not solve the problem itself.
In a 2019 paper, the scholars Nirvan Tyagi, Ian Miers and Thomas Ristenpart proposed a system that would allow traceback of malicious content to its originator, without compromising encryption. “Like most technologies for content moderation,” they concede, however, “the same tools are at risk of use for silencing whistleblowers, activists, or others producing socially valuable content. Once the source account of a message is identified, they may face reprisals. We therefore believe care should be taken when deploying tracing schemes lest they themselves become abused by authoritarian regimes or others.”
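Their actual construction is cryptographically involved; the toy sketch below, in which every name is hypothetical, captures only the general shape of the idea: the platform stores, for each relayed ciphertext, an opaque tag pointing at the message it was forwarded from, and walks that chain when a recipient reports a message. (In the real scheme, unlike this toy, the stored pointers are encrypted, so the platform learns nothing until a report is made.)

```python
import hashlib, hmac, os, secrets

# Toy sketch of message traceback -- NOT the Tyagi-Miers-Ristenpart
# construction. All names are hypothetical, and the real scheme encrypts
# the stored pointers so the platform cannot trace without a report.

PLATFORM_KEY = secrets.token_bytes(32)

# Platform-side table: tag -> (sender_id, tag of the message forwarded from)
TRACE_DB: dict[bytes, tuple[str, bytes | None]] = {}

def register(sender: str, ciphertext: bytes, prev_tag: bytes | None) -> bytes:
    """Record a relayed message; the platform never sees the plaintext."""
    tag = hmac.new(PLATFORM_KEY, ciphertext + os.urandom(16),
                   hashlib.sha256).digest()
    TRACE_DB[tag] = (sender, prev_tag)
    return tag

def trace(reported_tag: bytes) -> list[str]:
    """Walk a reported message's forwarding chain back to its originator."""
    chain, tag = [], reported_tag
    while tag is not None:
        sender, tag = TRACE_DB[tag]
        chain.append(sender)
    return chain    # the last element is the source account
```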
These proposals do not, moreover, address the most obvious problem: criminals and terrorists with even modest technological competence can easily create end-to-end encrypted communication channels of their own. The proposed legal reforms might thus strip ordinary citizens of their protections, while doing nothing to hurt criminals. Indeed, there are already multiple cases of criminal cartels deploying such tools.
Like so much else in history, technology will likely prove resistant to the will of nation-states: legislation might well compel companies like WhatsApp to weaken their encryption, or engineer traceback into it, but these efforts may prove Sisyphean. Technology will continue to provide easy means for those who seek them, whether criminals, dissidents or just citizens who want to share porn, to evade state surveillance and censorship.
The debate over end-to-end encryption, at its core, isn’t just about crime: it rests on the belief that there is a tech-fix for social behaviours like scurrilous gossip, inflammatory rumour or pornography. The advocates of these ideas should think again. In 1929, Friedman defeated every cryptographic tool the crime cartels could invent—but lost the war, because colossal numbers of Americans just wouldn’t give up drinking, no matter what their government wanted.