It is a technology that has been frowned upon by ethicists: now researchers are hoping to unmask the reality of emotion recognition systems in an effort to boost public debate. Technology designed to identify human emotions using machine learning algorithms is a huge industry, with claims it could prove valuable in myriad situations, from road safety to market research. But critics say the technology not only raises privacy concerns, but is inaccurate and racially biased. A team of researchers has created a website, emojify.info, where visitors can try such systems for themselves. One game focuses on pulling faces to trick the technology, while another explores how such systems can struggle to read facial expressions in context. Their hope, the researchers say, is to raise awareness of the technology and promote conversations about its use.
Technology that measures emotions based on biometric indicators such as facial movements, tone of voice or body movements is increasingly being marketed in China, researchers say, despite concerns about its accuracy and wider human rights implications. Drawing upon artificial intelligence, the tools range from cameras that help police monitor a suspect's face during an interrogation to eye-tracking devices in schools that identify students who are not paying attention. A report released this week by U.K.-based human rights group Article 19 identified dozens of companies offering such tools in the education, public security and transportation sectors in China. "We believe that their design, development, deployment, sale and transfers should be banned due to the racist foundations and fundamental incompatibility with human rights," said Vidushi Marda, a senior program officer at Article 19. Human emotions cannot be reliably measured and quantified by technology tools, said Shazeda Ahmed, a doctoral candidate studying cybersecurity at the University of California, Berkeley, and the report's co-author. Such systems can perpetuate bias, especially those sold to police that purport to identify criminality based on biometric indicators, she added.
Artificial intelligence technology is advancing and bringing opportunities for society but also profound challenges for individual freedom. AI is a powerful enabler of surveillance technology, such as facial recognition, and many countries are grappling with appropriate rules for its use, weighing the security benefits against privacy risks. Authoritarian regimes, however, lack strong institutional mechanisms to protect individual privacy (a free and independent press, civil society, an independent judiciary), and the result is the widespread use of AI for surveillance and repression. This dynamic is most acute in China, where the Chinese government is pioneering new uses of AI to monitor and control its population. China has already begun to export this technology, along with laws and norms for illiberal uses, to other nations.
The fight over the future of facial recognition is heating up. But it is just the beginning, as even more intrusive methods of surveillance are being developed in research labs around the world. In the US, San Francisco, Somerville and Oakland recently banned the use of facial recognition by law enforcement and government agencies, while Portland is considering forbidding the use of facial recognition entirely, including by private businesses. A coalition of 30 civil society organisations, representing over 15 million members combined, is calling for a federal ban on the use of facial recognition by US law enforcement. Meanwhile in the UK, revelations that London's Metropolitan Police secretly provided facial recognition data to the developers of the King's Cross estate for a covert facial recognition system have sparked outrage and calls for an inquiry.
As lawmakers, citizens, and companies debate the use of facial recognition software in the U.S., tech giants in America and China have been busy hawking products to eager surveillance states abroad. Among the burgeoning markets, according to a report by BuzzFeed News, are monarchies in the United Arab Emirates (UAE), particularly in Dubai, where political leaders have often jailed citizens and journalists they deem to be political dissidents. Critics of the UAE include Human Rights Watch (HRW), which has frequently derided the country for its authoritarian tendencies. Private companies like IBM are looking to governments accused of violating human rights as a market for facial recognition software. "UAE authorities have launched a sustained assault on freedom of expression and association since 2011," says HRW in its analysis.