If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
WASHINGTON, November 12, 2019 - The advent of artificial intelligence raises the question of whether online algorithms amplify or counteract user bias, experts said at a Tuesday Brookings panel. There is a remarkable lack of transparency in how companies' algorithms operate, said Solon Barocas, information science professor at Cornell University. Before technology became ubiquitous, it was easier for people to recognize blatant discrimination by companies. Now, he said, it's more difficult to detect these signs on an online platform. The reasons creditors provide to customers for adverse decisions, Barocas said, are not entirely useful.
The child labor activist, who works for Indian NGO Bachpan Bachao Andolan, had launched a pilot program 15 months prior to match a police database containing photos of all of India's missing children with another one comprising shots of all the minors living in the country's child care institutions. He had just found out the results. "We were able to match 10,561 missing children with those living in institutions," he told CNN. "They are currently in the process of being reunited with their families." Most of them were victims of trafficking, forced to work in the fields, in garment factories or in brothels, according to Ribhu. This momentous undertaking was made possible by facial recognition technology provided by New Delhi's police.
So, I just got the new iPhone 11 Pro. I have to say, I pretty much love the facial recognition unlock feature. And no, Apple is not paying me to say that. Before that, I was a facial recognition skeptic.
The use of artificial intelligence in facial recognition (FR) appears to be one of its fastest-growing applications so far. As ZDNet notes, companies like Microsoft have already developed facial recognition technology that can read facial expressions with the use of emotion tools. But the limiting factor so far has been that these tools were restricted to eight so-called core states – anger, contempt, fear, disgust, happiness, sadness, surprise, or neutral. Now Japanese tech developer Fujitsu steps in with AI-based technology that takes facial recognition one step further in tracking expressed emotions. The existing FR technology is based, as ZDNet explains, on "identifying various action units (AUs) – that is, certain facial muscle movements we make and which can be linked to specific emotions."
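To make the action-unit idea concrete, here is a minimal illustrative sketch of how detected AUs might be mapped onto the eight core states mentioned above. The AU numbers come from the Facial Action Coding System, but the simple lookup-and-overlap logic is a toy assumption of mine, not the actual method used by Microsoft or Fujitsu (which rely on learned models):

```python
# Toy example: map detected facial Action Units (AUs) to one of the eight
# "core states". The AU combinations below are simplified textbook pairings;
# real FR emotion systems use trained classifiers, not a fixed table.

# Simplified AU combinations commonly associated with basic emotions.
EMOTION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "fear":      {1, 2, 4, 5, 20},
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15},        # nose wrinkler + lip corner depressor
    "contempt":  {12, 14},       # lip corner pull + dimpler
}

def classify(detected_aus: set) -> str:
    """Return the core state whose AU template best overlaps the detected AUs."""
    best, best_score = "neutral", 0.0
    for emotion, aus in EMOTION_AUS.items():
        # Jaccard overlap between detected AUs and the emotion's template.
        score = len(detected_aus & aus) / len(detected_aus | aus)
        if score > best_score:
            best, best_score = emotion, score
    return best

print(classify({6, 12}))  # happiness
print(classify(set()))    # neutral (no AUs detected)
```

The "neutral" fallback covers the case where no template matches, mirroring the eighth core state in the list above.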
An Israeli startup heavily invested in by American companies, including Microsoft, produces facial recognition software used to conduct biometric surveillance on Palestinians, investigations by NBC and Haaretz revealed. In June, Microsoft -- which has touted its framework for ethical use of facial recognition -- joined a group investment of $78 million in AnyVision, an international tech company based in Israel. One of AnyVision's flagship products is Better Tomorrow, a program that allows the tracking of objects and people on live video feeds, even across independent camera feeds. AnyVision's facial recognition software is at the heart of a military mass surveillance project in the West Bank, according to the NBC and Haaretz reporting. An Israeli Defense Forces statement in February acknowledged the addition of facial recognition verification technology to at least 27 checkpoints between Israel and the West Bank to "upgrade the crossings," and, in an effort to "deter terror attacks," the military rapidly installed a network of over 1,700 cameras across the occupied territories.
NEW DELHI – As India prepares to install a nationwide facial recognition system in an effort to catch criminals and find missing children, human rights and technology experts on Thursday warned of the risks to privacy from increased surveillance. Use of the camera technology is an effort in "modernizing the police force, information gathering, criminal identification, verification," according to India's national crime bureau. Likely to be among the world's biggest facial recognition systems, the government contract is due to be awarded Friday. But there is little information on where it will be deployed, what the data will be used for and how data storage will be regulated, said Apar Gupta, executive director of non-profit Internet Freedom Foundation. "It is a mass surveillance system that gathers data in public places without there being an underlying cause to do so," he said.
IBM weighed in Nov 5 on the policy debate over facial recognition technology, arguing against an outright ban but calling for "precision regulation" to protect privacy and civil liberties. In a white paper posted on its website, the US computing giant said policymakers should understand that "not all technology lumped under the umbrella of 'facial recognition' is the same". IBM said uneasiness about artificial intelligence technology which can use face scans for identification was reasonable. "However, blanket bans on technology are not the answer to concerns around specific use cases," said the paper by IBM chief privacy officer Christina Montgomery and Ryan Hagemann, co-director of the IBM Policy Lab. "Casting such a wide regulatory net runs the very real risk of cutting us off from the many – and potentially life-saving – benefits these technologies offer."
Artificial intelligence has already started to shape our lives in ubiquitous and occasionally invisible ways. In its new documentary, In The Age of AI, FRONTLINE examines the promise and peril of this technology. AI systems are being deployed by hiring managers, courts, law enforcement, and hospitals -- sometimes without the knowledge of the people being screened. And while these systems were initially lauded for being more objective than humans, it's fast becoming clear that the algorithms harbor bias, too. It's an issue Joy Buolamwini, a graduate researcher at the Massachusetts Institute of Technology, knows about firsthand. She founded the Algorithmic Justice League to draw attention to the issue, and earlier this year she testified at a congressional hearing on the impact of facial recognition technology on civil rights. "One of the major issues with algorithmic bias is you may not know it's happening," Buolamwini told FRONTLINE.