A decade ago, Fei-Fei Li, a professor of computer science at Stanford University, helped demonstrate the power of a new generation of artificial intelligence algorithms. She created ImageNet, a vast collection of labeled images that could be fed to machine learning programs. Over time, that process helped machines master certain human skills remarkably well, provided they had enough data to learn from. Since then, AI programs have taught themselves to do more and more useful tasks, from voice recognition and language translation to operating warehouse robots and guiding self-driving cars. But AI algorithms have also demonstrated darker potential, for example as a means of automated facial recognition that can perpetuate racial and gender bias.
Some of the biggest companies in the world are pulling their facial recognition technologies from law enforcement agencies across the country. Amazon (AMZN), IBM (IBM), and Microsoft (MSFT) have said that they will either place a moratorium on the use of their technology by police or exit the field entirely, citing human rights concerns. The technology, which can be used to identify suspects in sources such as surveillance footage, has faced widespread criticism after studies found it can be biased against women and people of color. And according to at least one expert, some form of regulation must be put in place if these technologies are to be used by law enforcement agencies. "If these technologies were to be deployed, I think you cannot do it in the absence of legislation," Siddharth Garg, assistant professor of computer science and engineering at NYU Tandon School of Engineering, told Yahoo Finance.
Artificial intelligence technologies offer a lot of potential to improve the world. Simulations could speed up disease and drug research, autonomous vehicles could cut energy use and its impact on the environment, and facial recognition could help quickly identify missing children. But there's a flip side to the good, and some major technology companies acknowledged the potential issues with facial recognition software last week: IBM halted development, while Amazon and Microsoft pledged not to sell the technology to police for a set period of time. The moves come in the wake of incidents of police violence at widespread protests across the country in response to the death of George Floyd at the hands of police in Minneapolis in May. Privacy advocates have opposed the use of facial recognition software for years, saying it could be abused by the government to surveil and harass citizens.
That's why the announcements by IBM, Amazon, and Microsoft were a success for activists: a rare retreat by some of Silicon Valley's biggest names over a key new technology. The retreat followed years of work by researchers, including Joy Buolamwini, to make the case that facial recognition software is biased. A test commissioned by the ACLU of Northern California found that Amazon's software, called Rekognition, misidentified 28 lawmakers as people who had been arrested for a crime. Such errors happen in part because the systems are trained on data sets that are themselves skewed.
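One way researchers have surfaced that kind of skew is simply to tally the demographic composition of a training set's labels. The sketch below is a hypothetical illustration: the group names and counts are invented for demonstration and are not drawn from any real dataset.

```python
# Hypothetical sketch: quantifying dataset skew by tallying the
# demographic composition of training labels. All counts here are
# invented for illustration, not figures from any real benchmark.
from collections import Counter

labels = (
    ["lighter-skinned male"] * 700
    + ["lighter-skinned female"] * 200
    + ["darker-skinned male"] * 60
    + ["darker-skinned female"] * 40
)

composition = Counter(labels)
for group, count in composition.most_common():
    print(f"{group}: {count / len(labels):.0%}")
```

A model trained on a set like this sees far fewer examples of some groups, which is one mechanism behind the disparate error rates audits have reported.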
IBM has quit the facial recognition technology business, citing concerns that it can be used for mass surveillance and racial profiling. The move comes amid ongoing protests following the death of George Floyd in police custody in Minneapolis on May 25, which have thrust racial injustice and police monitoring technology into the spotlight. The tech giant's CEO Arvind Krishna explained IBM's decision in a letter sent to U.S. lawmakers Monday. "IBM no longer offers general purpose IBM facial recognition or analysis software," he wrote. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency."
IBM announced this week that it would stop selling its facial recognition technology to customers including police departments. The move prompted calls for other tech firms, like Amazon and Microsoft, to do the same. IBM will no longer provide facial recognition technology to police departments for mass surveillance and racial profiling, Arvind Krishna, IBM's chief executive, wrote in a letter to Congress.
Facial recognition software is nothing if not fallible. In 2019, the National Institute of Standards and Technology demonstrated this with a study of AI systems used by police departments to identify alleged criminals. The study found that these algorithms falsely identified Asian and Black faces 10 to 100 times more often than Caucasian faces. Findings like these have led activists to call for bans on facial recognition technology and for technology companies not to develop such products. That movement scored a win on Monday, when IBM CEO Arvind Krishna announced in a letter to Congress that the company will no longer develop, research, or sell facial recognition technology.
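Disparities like the one NIST measured are typically expressed as a false match rate (FMR) per demographic group: the fraction of comparisons between two different people that the system wrongly declares a match. The sketch below shows the basic calculation with invented outcomes; the numbers are purely illustrative and are not NIST's data.

```python
# Hypothetical sketch: computing a per-group false match rate, the kind
# of metric behind findings like NIST's. All outcomes below are invented.

def false_match_rate(results):
    """results: list of (is_same_person, system_said_match) pairs.
    FMR = fraction of impostor comparisons the system wrongly matched."""
    impostor_said = [said for same, said in results if not same]
    if not impostor_said:
        return 0.0
    return sum(impostor_said) / len(impostor_said)

# 1,000 impostor comparisons per group, with invented error counts.
group_a = [(False, True)] * 1 + [(False, False)] * 999
group_b = [(False, True)] * 20 + [(False, False)] * 980

fmr_a = false_match_rate(group_a)   # 1 false match in 1,000
fmr_b = false_match_rate(group_b)   # 20 false matches in 1,000
print(f"disparity ratio: {fmr_b / fmr_a:.0f}x")
```

In this toy example the second group's false match rate is 20 times the first's; NIST's study reported gaps of this order and larger across real demographic groups.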
London (CNN Business) IBM is canceling its facial recognition programs and calling for an urgent public debate on whether the technology should be used in law enforcement. In a letter to Congress on Monday, IBM (IBM) CEO Arvind Krishna said the company wants to work with lawmakers to advance justice and racial equity through police reform, educational opportunities and the responsible use of technology. "We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," he said, noting that the company no longer offers general purpose facial recognition or analysis software. "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values," he added. Krishna is of Indian origin and IBM's first CEO of color.
IBM is pulling out of the facial recognition market and is calling for "a national dialogue" on the technology's use in law enforcement. The abrupt about-face comes as technology companies face increased scrutiny over their contracts with police amid violent crackdowns on peaceful protests across America. In a public letter to Congress, IBM chief executive Arvind Krishna explained the company's decision to back out of the business and declared an intention "to work with Congress in pursuit of justice and racial equity, focused initially in three key policy areas: police reform, responsible use of technology, and broadening skills and educational opportunities." The company, Krishna said, "no longer offers general purpose IBM facial recognition or analysis software." He added: "IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and ..."