Civil Rights & Constitutional Law


The Rap of China Uses AI to Select First Female Judge

#artificialintelligence

A new series of last year's TV hit The Rap of China has kicked off, and with it comes the show's first female judge, joining the likes of Kris Wu and MC Hotdog. The selection of Hong Kong singer G.E.M., real name Gloria Tang Tsz-Kei, has raised some eyebrows among critics, both for the use of artificial intelligence in her selection and for her hip-hop credentials. "G.E.M. joining The Rap of China is a bit of an embarrassment, as her previous image falls short in terms of rap elements," one music critic said after learning of the singer's inclusion on the talent show's judging panel. Although the news has drawn widespread criticism from a host of music critics, the move was not unexpected: with G.E.M.'s involvement, the show hit a new peak in viewing figures in its first week.


AI can be sexist and racist -- it's time to make it fair

#artificialintelligence

When Google Translate converts news articles written in Spanish into English, phrases referring to women often become 'he said' or 'he wrote'. Software designed to warn people using Nikon cameras when the person they are photographing seems to be blinking tends to interpret Asian faces as always blinking. Word embedding, a popular algorithm used to process and analyse large amounts of natural-language data, characterizes European American names as pleasant and African American ones as unpleasant. These are just a few of the many examples uncovered so far of artificial intelligence (AI) applications systematically discriminating against specific populations. Biased decision-making is hardly unique to AI, but as many researchers have noted, the growing scope of AI makes it particularly important to address.
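
To make the word-embedding finding concrete, the sketch below illustrates the underlying association test: it measures whether a target word sits closer in cosine similarity to "pleasant" or "unpleasant" attribute words, in the spirit of the Word Embedding Association Test (WEAT) used in this line of research. The tiny random-vector vocabulary here is a hypothetical stand-in; real measurements use pretrained embeddings such as GloVe loaded from disk.

```python
# Minimal sketch of the embedding-association idea: does a target word
# (e.g. a name) lean toward "pleasant" or "unpleasant" attribute words?
# The embeddings dict below is a toy stand-in for real pretrained vectors.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, pleasant, unpleasant, emb):
    """Mean similarity to pleasant words minus mean similarity to unpleasant ones."""
    p = np.mean([cosine(emb[word], emb[a]) for a in pleasant])
    n = np.mean([cosine(emb[word], emb[a]) for a in unpleasant])
    return p - n  # > 0: leans pleasant; < 0: leans unpleasant

# Toy random vectors; with real pretrained embeddings, systematic score
# gaps between name groups are what the research above reports.
rng = np.random.default_rng(0)
vocab = ["Emily", "Lakisha", "joy", "love", "agony", "failure"]
emb = {w: rng.normal(size=50) for w in vocab}

pleasant, unpleasant = ["joy", "love"], ["agony", "failure"]
for name in ["Emily", "Lakisha"]:
    print(name, round(association(name, pleasant, unpleasant, emb), 3))
```

With actual pretrained vectors, this kind of test is how researchers quantify the pleasant/unpleasant gap between European American and African American names described above.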


The cameras that know if you're happy - or a threat

BBC News

Facial recognition tech is becoming more sophisticated, with some firms claiming it can even read our emotions and detect suspicious behaviour. But what implications does this have for privacy and civil liberties? Facial recognition tech has been around for decades, but it has been progressing in leaps and bounds in recent years due to advances in computer vision and artificial intelligence (AI), tech experts say. It is now being used to identify people at borders, unlock smartphones, spot criminals, and authenticate banking transactions. But some tech firms are claiming it can also assess our emotional state.


How to stop artificial intelligence being so racist and sexist

New Scientist

Something is rotten at the heart of artificial intelligence. Machine learning algorithms that spot patterns in huge datasets hold promise for everything from recommending whether someone should be released on bail to estimating the likelihood of a driver having a car crash, and thus the cost of their insurance. But these algorithms also risk being discriminatory by basing their recommendations on categories like someone's sex, sexuality, or race. So far, all attempts to de-bias these algorithms have failed.
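
One reason de-biasing is hard, and a common failure mode behind the claim above, is that simply omitting the sensitive attribute rarely helps: correlated proxy features let a model reconstruct it. The sketch below is a minimal illustration on synthetic data (all variables and numbers are invented for the example), measuring the standard demographic-parity gap:

```python
# Sketch: drop the sensitive column, yet a correlated proxy feature
# (e.g. neighbourhood) still lets the model discriminate by group.
# Entirely synthetic data; the metric is the demographic-parity gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, n)              # sensitive attribute (NOT a feature)
proxy = group + rng.normal(0, 0.3, n)      # strongly group-correlated proxy
skill = rng.normal(0, 1, n)                # a legitimate predictor
# Historical labels already encode bias against group 1:
y = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

X = np.column_stack([proxy, skill])        # `group` itself is omitted
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

rate0 = pred[group == 0].mean()
rate1 = pred[group == 1].mean()
print(f"positive rate, group 0: {rate0:.2f}")
print(f"positive rate, group 1: {rate1:.2f}")
print(f"demographic parity gap: {rate0 - rate1:.2f}")  # large despite omitting `group`
```

The gap stays large even though the group label never appears as a feature, which is why fairness interventions have to do more than delete a column.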


Safeguarding human rights in the era of artificial intelligence

#artificialintelligence

The use of artificial intelligence in our everyday lives is on the increase, and it now covers many fields of activity. Something as seemingly banal as avoiding a traffic jam through a smart navigation system, or receiving targeted offers from a trusted retailer, is the result of big-data analysis performed by AI systems. While these particular examples have obvious benefits, the ethical and legal implications of the data science behind them often go unnoticed by the public at large. Artificial intelligence, and in particular its subfields of machine learning and deep learning, is neutral only in appearance, if at all. Underneath the surface, it can become extremely personal.


Facial Recognition And Future Scenarios

Forbes Technology

This photo, taken on February 5, 2018, shows a police officer wearing a pair of smart glasses with a facial recognition system at Zhengzhou East Railway Station in Zhengzhou, in China's central Henan province. Chinese police are sporting high-tech sunglasses that can spot suspects in a crowded train station, the newest use of facial recognition, and one that has drawn concern among human rights groups. We seem to be heading into a future where facial recognition technologies are part of everyday life. Cities all over the world now bristle with cameras; in China it is impossible to avoid being monitored, whether by CCTV or by police wearing special glasses, and logged into a database that tracks your habits, your social credit, and even who your friends are. At the same time, cameras and facial recognition are increasingly being used in public and private buildings.


Google's new principles on AI need to be better at protecting human rights

#artificialintelligence

There are growing concerns about the potential risks of AI – and mounting criticism of technology giants. In the wake of what has been called an AI backlash or "techlash", states and businesses are waking up to the fact that the design and development of AI have to be ethical, benefit society and protect human rights. In the last few months, Google has faced protests from its own staff against the company's AI work with the US military. The US Department of Defense contracted Google to develop AI for analysing drone footage in what is known as "Project Maven". A Google spokesperson was reported to have said: "the backlash has been terrible for the company" and "it is incumbent on us to show leadership".


AI claims to be able to thwart facial recognition software, making you "invisible"

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, and the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview.
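
As a rough illustration of the adversarial setup described above (not the Toronto team's actual code, and with both networks reduced to toy stand-ins), the sketch below pits a face-scoring network against a perturbation network that learns a small, bounded image change to suppress the detection score:

```python
# Generic adversarial-disruption sketch in PyTorch: one network scores
# "face present"; a second learns a bounded perturbation that drives
# that score down. Toy architectures, for illustration only.
import torch
import torch.nn as nn

class Detector(nn.Module):                 # stand-in face detector
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))
    def forward(self, x):
        return self.net(x)                 # logit: > 0 means "face detected"

class Disruptor(nn.Module):                # learns an image-conditioned perturbation
    def __init__(self, eps=0.03):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 3, 3, padding=1), nn.Tanh())
    def forward(self, x):
        # small, bounded change so the image still looks unaltered
        return (x + self.eps * self.net(x)).clamp(0, 1)

detector, disruptor = Detector(), Disruptor()
opt = torch.optim.Adam(disruptor.parameters(), lr=1e-3)
faces = torch.rand(16, 3, 64, 64)          # stand-in for real face crops

for step in range(100):                    # disruptor learns to suppress detections
    opt.zero_grad()
    logits = detector(disruptor(faces))
    loss = logits.mean()                   # push the face logit down
    loss.backward()
    opt.step()
```

In the full adversarial game, the detector would also be retrained on the disrupted images in alternation, which is the "constantly battle and learn from each other" dynamic the article describes.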


Researchers develop AI to fool facial recognition tech

#artificialintelligence

A team of engineering researchers from the University of Toronto has created an algorithm to dynamically disrupt facial recognition systems. Led by professor Parham Aarabi and graduate student Avishek Bose, the team used a deep learning technique called "adversarial training", which pits two artificial intelligence algorithms against each other. Aarabi and Bose designed a pair of neural networks: the first identifies faces, and the second works to disrupt the first's facial recognition task. The two constantly battle and learn from each other, setting up an ongoing AI arms race. "The disruptive AI can 'attack' what the neural net for the face detection is looking for," Bose said in an interview with EurekAlert!.