Protecting children by scanning encrypted messages is ‘magical thinking’, says Cambridge professor


Governments are in danger of turning to “magical software solutions” to fight child abuse and terrorism, rather than investing in police, social workers and teachers who can deal with the underlying causes, a Cambridge academic claims.

Professor of security engineering Ross Anderson has argued in a paper that governments should view the child safety debate from the point of view of children at risk of harm, rather than from the perspective of the intelligence agencies and firms that want to sell computer software.

The paper is a response to arguments made by two senior directors of GCHQ that client-side scanning software placed on mobile phones and other electronic devices could be deployed to detect illegal activities, such as grooming, without disrupting individuals’ privacy and security.

In their discussion paper Thoughts on child safety on commodity platforms, Ian Levy, technical director of the UK National Cyber Security Centre (NCSC), and Crispin Robinson, technical director for cryptanalysis at GCHQ, said it was not necessary for society to choose between making communications “insecure by default” or creating “safe spaces for child abusers”.

In a 19-page rebuttal, Anderson, a professor at the universities of Cambridge and Edinburgh, claimed the idea of using software to detect child abuse, terrorism and other illegal activities would prove unworkable in practice and would fail to address the underlying problems.

Anderson was one of 14 leading computer and cryptography experts to author a previous paper, Bugs in our pockets: the risks of client-side scanning, which argued that client-side scanning technology “creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic”.        

Cambridge and Edinburgh professor Ross Anderson claims using software to detect child abuse, terrorism and other illegal activities would prove unworkable in practice and would fail to address the underlying problems

“The idea of using artificial intelligence to replace police officers, social workers and teachers is just the sort of magical thinking that leads to bad policy,” he argued in a new discussion paper Chat control or client protection?

Governments have historically tried to limit the use of cryptography to protect their surveillance capabilities. This began with export controls during the Cold War, followed in the 1990s by proposals to give law enforcement exceptional access to encryption keys, attempts to break encrypted services in the 2000s, and demands that Apple break iPhone cryptography in 2015. The justification swung from child protection in the 1990s, to terrorism after 9/11, and back to child protection.

Anderson pointed out that current proposals by the European Commission to introduce measures known as chat control to scan the contents of messages sent through encrypted messaging services, such as WhatsApp, have generated significant pushback from industry and civil society groups.

There is widespread concern that, with the move to edge computing and end-to-end encryption, such scanning would amount to “a government search engine in all our devices”, says Anderson.

Language modelling is error-prone

In their paper, Levy and Robinson proposed “language models running entirely locally” on a mobile phone or computer to detect “language associated with grooming”, an idea that has since appeared in draft UK and European Union (EU) laws.

But the use of natural language processing (NLP) models to detect illegal speech – whether sexual grooming, terrorist recruitment or hate speech – is highly error-prone, Anderson argued. Analysing text content alone, it is difficult to achieve error rates significantly below 5-10%. In an internal document presented to the European Council in June, the European Commission assumed it might see 10% false positives, which would mean a billion false alarms having to be processed manually each day.
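As a rough illustration of that arithmetic, the sketch below (in Python) applies the Commission’s assumed false-positive rate to a daily message volume; the 10 billion figure is an assumption chosen for the example, not a number from Anderson’s paper or the Commission’s document.

# Back-of-the-envelope sketch of the false-alarm volume implied by a 10% false-positive rate.
# The daily message volume is an illustrative assumption, not a figure from the paper.
MESSAGES_SCANNED_PER_DAY = 10_000_000_000   # assumed volume of messages scanned per day
FALSE_POSITIVE_RATE = 0.10                  # false-positive rate assumed by the Commission

# Almost every message is innocent, so false alarms scale with the total volume scanned
false_alarms_per_day = MESSAGES_SCANNED_PER_DAY * FALSE_POSITIVE_RATE
print(f"False alarms per day: {false_alarms_per_day:,.0f}")   # 1,000,000,000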

There is only one trial scanning tool available, created through Microsoft’s Project Artemis and supplied through Thorn, a private US company. There is no independent assessment of the technology’s effectiveness, and Anderson described Microsoft’s claims about it as “notably weak”. He argued that it was difficult to see how anyone could trust a natural language text scanning tool trained on data to which the public has no access.

There are too many ways that machine learning can be subverted – for example, by manipulating the inference engine, the training data or its labelling. It is entirely unclear, he said, how mandatory remote scanning of messages on user devices could ever be “squared with the human right to privacy and the established ban on mass surveillance without warrant or suspicion”.

Manual moderation of false positives not feasible

Tech companies need to deal properly with complaints from users but frequently fail to do so because employing human moderators is expensive, said Anderson. 

They are willing to act on takedown requests from police and from copyright lawyers because ignoring them can be expensive. But they do ignore requests from ordinary users, including women and children, who may wish to report abuse. “That needs to be fixed, whether by criminal sanctions or by significant financial penalties,” the paper stated.

If natural language processing technology suggests there is a 60% probability that a child is being groomed, that passes the civil standard of proof, so a technology platform that fails to act could face liability. If cost were no object, a human moderator could call the potential victim and, if the victim agreed that grooming had taken place, an investigator could work with them to preserve forensic evidence.

But human intervention costs money, and given the current state of the law and industry practice, it is more likely that the suspect, and potentially their family and friends, would have their accounts blocked.
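A minimal sketch of the moderation workload, assuming illustrative figures for triage time and working hours (neither comes from Anderson’s paper), shows why reviewing alerts at this scale by hand is prohibitively expensive.

# Sketch of the human-moderation workload implied by a large alert volume.
# All figures are illustrative assumptions, not numbers from Anderson's paper.
alerts_per_day = 1_000_000_000          # alert volume of the order discussed above
minutes_per_review = 2                  # assumed triage time per flagged message
working_minutes_per_day = 8 * 60        # assumed working day per moderator

reviews_per_moderator_per_day = working_minutes_per_day / minutes_per_review
moderators_needed = alerts_per_day / reviews_per_moderator_per_day
print(f"Moderators needed: {moderators_needed:,.0f}")   # 4,166,667, i.e. roughly 4.2 million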

Levy and Robinson’s claim that false positives from machine learning algorithms can be dealt with by human moderation is “simply wrong”, Anderson argued. This is because law enforcement agencies and technology companies have little or no liability for the human cost of false positives.

That was the case for a father who took a photograph of his young son at the request of a nurse. The image was uploaded automatically from his phone to Google, whose scanning software flagged it. He was visited by the police and, despite being cleared, has been unable to regain access to his Google accounts.

If large numbers of accounts are referred for moderation, either because of false positives or children sending “sexting” messages to other children, then a significant number of young people may lose access to their digital lives.

Evidence for benefits of electronic surveillance is weak 

Government ministers in the UK have criticised Facebook’s plans to move Facebook Messenger to end-to-end encryption.

Anderson said it was possible that arrest numbers might fall if Facebook Messenger introduced end-to-end encryption, but the evidence to support this is weak: the number of people arrested or interviewed for image offences has remained constant despite the proliferation of end-to-end encrypted services.

Evidence from Germany suggests that electronic surveillance has not helped law enforcement in practice. The country collected communications data between 2008 and 2010, stopping when the practice was found to be unlawful. But statistics show that police access to the data in Germany did not have a deterrent effect, did not assist in clearing up crimes and did not increase convictions.

“When governments argue that privacy must be impaired once more in the hope of arresting more abusers, some scepticism is called for,” Anderson argues.

Better methods of crime prevention

Levy and Robinson’s focus is on child abuse, but Britain’s Online Safety Bill would also empower authorities to impose server-side scanning of text for terrorist radicalisation and recruitment.

The history of function creep in surveillance suggests that client-side scanning for terrorist content would be the next logical step if it is permitted for child protection. Research, however, shows there are strong links between violent political extremism and violence or misogyny against women, which suggests other warning signs for police to focus on.

“Where ministers buy the idea of a magical software ‘solution’, as the industry likes to call its products, the outcomes are often disappointing and sometimes disastrous”
Ross Anderson, Cambridge and Edinburgh Universities

Rather than using bulk interception to look for young men who “express an opinion or belief supportive” of a proscribed group or who download a copy of The Anarchist Cookbook, the police need to pay more attention to violence against women and children, says Anderson.

“If there are 200 youths hanging out in a mosque with a fiery Salafist preacher, the man to watch is not the one who downloaded the [Anarchist] cookbook, but the one who beat up his sister,” the paper argued.

Anderson claimed client-side scanning was not the best solution if the policy aim is to prevent abuse of children.

Violent crime against children is associated with deprivation, unstable families living in slums, structural unemployment, despair leading to alcohol and drug abuse, and gangs providing role models, he said.

Better ways to tackle the problem would be to increase child benefit and recruit more child social workers.

Police need to be able to focus on the primary problem, which is preventing and prosecuting crimes against children and political violence.

Good policy requires government to coordinate multiple bodies in the state and private sectors by working to get the incentives right. For police and social workers, protection work is a hard and thankless task, and it is easy to be distracted by technical solutions, the paper argued.

“The idea that complex social problems are amenable to cheap technical solutions is the siren song of the software salesman and has lured many a gullible government department on to the rocks,” wrote Anderson.

“Where ministers buy the idea of a magical software ‘solution’, as the industry likes to call its products, the outcomes are often disappointing and sometimes disastrous,” he said.

Source: www.computerweekly.com
