Civil Rights & Constitutional Law


Dragonfly Eye: Artificial Intelligence Machine Can Identify 2 Billion People in Seconds

#artificialintelligence

Yitu Technology, based in Shanghai, China, has developed and deployed an artificial intelligence (A.I.) algorithm called Dragonfly Eye, a facial recognition system capable of identifying 2 billion people in seconds. Zhu Long, CEO of Yitu Technology, told the South China Morning Post, "Our machines can very easily recognise you among at least 2 billion people in a matter of seconds, which would have been unbelievable just three years ago." Dragonfly Eye is presently used by 150 municipal public security systems and 20 provincial public security departments across China. It was first deployed on the Shanghai Metro in January of this year, and local police credit it with aiding in the arrest of 576 suspects on the Metro in the first three months of use.


Shanghai Subway Surveillance AI Has Database of 2 Billion Faces

#artificialintelligence

The AI algorithm, the name of which can be translated as either Dragon Eye or Dragonfly Eye, was developed by Shanghai-based tech firm Yitu. It works off of China's national database, which consists of all 1.3 billion residents of the Asian nation as well as 500 million more people who have entered the country at some point. Dragon Eye interfaces with the database to detect the faces of individuals. Yitu chief executive and co-founder Zhu Long told the South China Morning Post (SCMP) that the purpose of the algorithm is to fight crime and make the world a safer place. "Let's say that we live in Shanghai, a city of 24 million people.
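The database lookup described above can be pictured as nearest-neighbour search over face embeddings. What follows is a minimal illustrative sketch of that general technique, not Yitu's actual pipeline; the `identify` function, the 128-dimensional random vectors standing in for real embeddings, and the 0.6 similarity threshold are all assumptions for demonstration.

```python
import numpy as np

def identify(query_embedding, gallery, threshold=0.6):
    """Return (index, score) of the gallery identity most similar to the
    query face embedding, or None if no match clears the threshold.
    `gallery` is an (N, d) array of unit-normalised embeddings."""
    q = query_embedding / np.linalg.norm(query_embedding)
    scores = gallery @ q  # cosine similarity against all N identities at once
    best = int(np.argmax(scores))
    return (best, float(scores[best])) if scores[best] >= threshold else None

# Toy gallery of 3 "identities" (seeded random unit vectors stand in for
# embeddings that a face-recognition network would normally produce).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(3, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# A slightly noisy view of identity 1 should still match identity 1.
match = identify(gallery[1] + 0.01 * rng.normal(size=128), gallery)
```

At national scale the same idea requires an approximate-nearest-neighbour index rather than a brute-force matrix product, but the matching logic is unchanged.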


China's artificial intelligence is catching criminals and advancing health care - Socializing AI

#artificialintelligence

Zhu Long, co-founder and CEO of Yitu Technology, has his identity checked at the company's headquarters in the Hongqiao business district in Shanghai. "Our machines can very easily recognise you among at least 2 billion people in a matter of seconds," says chief executive and Yitu co-founder Zhu Long, "which would have been unbelievable just three years ago." Its platform is in service with more than 20 provincial public security departments and is used as part of more than 150 municipal public security systems across the country, and Dragonfly Eye has already proved its worth: on its very first day of operation on the Shanghai Metro, in January, the system identified a wanted man when he entered a station. After matching his face against the database, Dragonfly Eye sent his photo to a policeman, who made an arrest.


Dragon Eye Can Recognize Face Among Billions: Crime Fighter Or Big Brother?

International Business Times

A Shanghai company has claimed to have developed an AI that can recognize a face among at least two billion people in a matter of seconds. Yitu's AI algorithm Dragon Eye not only recognizes faces but with a network of connected cameras can plot the movement of their owners. "Our machines can very easily recognize you among at least two billion people in a matter of seconds," says chief executive and Yitu co-founder Zhu Long, "which would have been unbelievable just three years ago." As of now, the Dragon Eye platform has around 1.8 billion photographs to work with: those logged in China's national database and those who have ever entered through its borders. Talking to the South China Morning Post, Zhu said the objective of the algorithm is to make the world a much safer place by curbing crime.


doctor-border-guard-policeman-artificial

#artificialintelligence

The lifts rising to Yitu Technology's headquarters have no buttons. The pass cards of the staff and visitors stepping into the elevators that service floors 23 and 25 of a newly built skyscraper in Shanghai's Hongqiao business district are read automatically – no swipe required – and each passenger is deposited at their specified floor. The only way to beat the system and alight at a different floor is to wait for someone who does have access and jump out alongside them. Or, if this were a sci-fi thriller, you'd set off the fire alarms and take the stairs while everyone else was evacuating. But even in that scenario you'd be caught: Yitu's cameras record everyone entering the building and track them inside.


Artificial Intelligence Has a Racism Issue

#artificialintelligence

It's long been thought that robots equipped with artificial intelligence would be the cold, purely objective counterpart to humans' emotional subjectivity. Unfortunately, it would seem that many of our imperfections have found their way into the machines. It turns out that these A.I. and machine-learning tools can have blind spots when it comes to women and minorities. This is especially concerning, considering that many companies, governmental organizations, and even hospitals are using machine learning and other A.I. tools to help with everything from preventing and treating injuries and diseases to predicting creditworthiness for loan applicants. These racial and gender biases have manifested in a variety of ways.


Artificial Intelligence: ARTICLE 19 calls for protection of freedom… · Article 19

#artificialintelligence

The submission stresses the need to critically evaluate the impact of Artificial Intelligence (AI) and automated decision-making systems (AS) on human rights. Machine learning – the most successful subset of AI techniques – enables an algorithm to learn from a dataset using statistical methods. As such, AI has a direct impact on individuals' ability to exercise their right to freedom of expression in the digital age. The development of AI is not new, but advances in the digital environment – greater volumes of data, computational power, and statistical methods – will make it more capable in the future.


When will AI stop being so racist?

#artificialintelligence

Learned bias can occur as the result of incomplete data or researcher bias in generating training data. Because sentencing systems are trained on historical data, and black people have historically been arrested and convicted of more crimes, such systems can inherit that disparity – though an algorithm could also be designed to correct for bias that already exists in the system. When humans make mistakes, we tend to rationalize their shortcomings and forgive them – they're only human! – even if the bias displayed by human judgment is worse than the bias displayed by an algorithm. In a follow-up study, Dietvorst shows that algorithm aversion can be reduced by giving people control over an algorithm's forecast.
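How historical data carries bias into a learned model can be seen with a toy example. This is a hypothetical sketch with invented numbers, not any real sentencing system: a "model" that memorises per-group positive rates will faithfully reproduce whatever enforcement disparity the records contain.

```python
from collections import Counter

# Hypothetical records of (group, was_rearrested). Group "B" has been policed
# more heavily in the past, so its positive rate in the data is higher --
# the records, not the individuals, carry the disparity.
records = [("A", 0)] * 90 + [("A", 1)] * 10 + [("B", 0)] * 60 + [("B", 1)] * 40

def fit_base_rates(data):
    """A naive 'risk model' that just memorises each group's historical rate."""
    positives = Counter(group for group, label in data if label == 1)
    totals = Counter(group for group, label in data)
    return {group: positives[group] / totals[group] for group in totals}

model = fit_base_rates(records)
# The learned scores simply echo the biased history:
# model == {"A": 0.1, "B": 0.4}
```

Correcting for this typically means reweighting or resampling the training data so that group base rates no longer encode enforcement intensity, which is the kind of correction the excerpt alludes to.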


AI Research Is in Desperate Need of an Ethical Watchdog

#artificialintelligence

Stanford's review board approved Kosinski and Wang's study. "The vast, vast, vast majority of what we call 'big data' research does not fall under the purview of federal regulations," says Metcalf. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name to about 80 percent accuracy. The group also went through an ethics review at the company that provided the training list of names, although Metcalf says that an evaluation at a private company is the "weakest level of review that they could do."

