The "right to be forgotten" online is in danger of being transformed into a tool of global censorship through a test case at the European court of justice (ECJ) this week, free speech organisations are warning. An application by the French data regulator for greater powers to remove out-of-date or embarrassing content from internet domains around the world will enable authoritarian regimes to exert control over publicly available information, according to a British-led alliance of NGOs. The right to be forgotten was originally established by an ECJ ruling in 2014 after a Spaniard sought to delete an auction notice of his repossessed home dating from 1998 on the website of a mass-circulation newspaper in Catalonia. He had resolved his social security debts, he said, and his past troubles should no longer be automatically linked to him whenever anyone searched for his name on Google. The power to de-list from online searches was limited to national internet domains.
An unprecedented wave of rank-and-file rebellion is sweeping Big Tech. At one company after another, employees are refusing to help the US government commit human rights abuses at home and abroad. At Google, workers organized to shut down Project Maven, a Pentagon project that uses machine learning to improve targeting for drone strikes – and won. At Amazon, workers are pushing Jeff Bezos to stop selling facial recognition to police departments and government agencies, and to cut ties with Immigration and Customs Enforcement (Ice). At Microsoft, workers are demanding the termination of a $19.4m cloud deal with Ice.
Amazon's facial recognition technology falsely identified 28 members of Congress as people who have been arrested for crimes, according to the American Civil Liberties Union (ACLU). The ACLU of Northern California's test of Amazon's controversial Rekognition software also found that people of color were disproportionately misidentified in a mugshot database, raising new concerns about racial bias and the potential for abuse by law enforcement. The report followed revelations in May that Amazon has been marketing and selling the Rekognition technology to police agencies, leading privacy advocates to urge CEO Jeff Bezos to stop providing the product to the government. "Our test reinforces that face surveillance is not safe for government use," Jacob Snow, a technology and civil liberties attorney at the ACLU Foundation of Northern California, said in a statement. "Face surveillance will be used to power discriminatory surveillance and policing that targets communities of color, immigrants, and activists."
Microsoft has called for facial recognition technology to be regulated by government, with calls for laws governing its acceptable uses. In a blog post on the company's website on Friday, Microsoft president Brad Smith called for a congressional bipartisan "expert commission" to look into regulating the technology in the US. "It seems especially important to pursue thoughtful government regulation of facial recognition technology, given its broad societal ramifications and potential for abuse," he wrote. "Without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime." Microsoft is the first big tech company to raise serious alarms about an increasingly sought-after technology for recognising a person's face from a photo or through a camera.
Two legal challenges have been launched against police forces in south Wales and London over their use of automated facial recognition (AFR) technology on the grounds the surveillance is unregulated and violates privacy. The claims are backed by the human rights organisations Liberty and Big Brother Watch following complaints about biometric checks at the Notting Hill carnival, on Remembrance Sunday, at demonstrations and in high streets. Liberty is supporting Ed Bridges, a Cardiff resident, who has written to the chief constable of South Wales police alleging he was tracked at a peaceful anti-arms protest and while out shopping. Big Brother Watch is working with the Green party peer Jenny Jones who has written to the home secretary, Sajid Javid, and the Metropolitan police commissioner, Cressida Dick, urging them to halt deployment of the "dangerously authoritarian" technology. If the forces do not stop using AFR systems then legal action will follow in the high court, the letters said.
Facial matching technology proposed by the government is racist and would have a chilling effect on the right to freedom of assembly without further safeguards, the Human Rights Law Centre has said. The warning is contained in a submission to a parliamentary committee inquiry examining the Coalition's proposal for the home affairs department to collect, use and disclose facial identification information. The facial matching system was agreed to in principle by states in October, but has since led to overreach warnings from Victoria and the Law Council of Australia. Concerned parties have warned the Coalition's identity matching services bill allows access to facial verification data by the private sector and local governments, and that it could be used to prosecute low-level crimes. In a submission to the parliamentary joint committee on intelligence and security, lodged on Tuesday, the Human Rights Law Centre warned the bill was "manifestly and dangerously insufficient" and the system was "high risk" because the bill failed to adequately identify or regulate the uses of facial matching technology.
Popular AI-powered selfie program FaceApp was forced to pull new filters that allowed users to modify their pictures to look like different races, just hours after their launch. The app, which initially became famous for its features that let users edit images to look older or younger, or add a smile, launched the new filters around midday on Wednesday. They allowed a user to edit their image to fit one of four categories: Caucasian, Asian, Indian or Black. Users rapidly pointed out that the feature wasn't particularly sensitively handled: technology site The Verge described it as "tantamount to a sort of digital blackface, 'dressing up' as different ethnicities", while TechCrunch said the app "seems to be getting a little too focused on races rather than faces". The company initially released a statement arguing that the "ethnicity change filters" were "designed to be equal in all aspects".
Joy Buolamwini is a graduate researcher at the MIT Media Lab and founder of the Algorithmic Justice League – an organisation that aims to challenge the biases in decision-making software. She grew up in Mississippi, gained a Rhodes scholarship, and she is also a Fulbright fellow, an Astronaut scholar and a Google Anita Borg scholar. Earlier this year she won a $50,000 scholarship funded by the makers of the film Hidden Figures for her work fighting coded discrimination. How did you become interested in that area? When I was a computer science undergraduate I was working on social robotics – the robots use computer vision to detect the humans they socialise with.