Racist artificial intelligence? Maybe not, if computers explain their 'thinking'

#artificialintelligence

Growing concerns about how artificial intelligence (AI) makes decisions have inspired U.S. researchers to make computers explain their "thinking." "Computers are going to become increasingly important parts of our lives, if they aren't already, and the automation is just going to improve over time, so it's increasingly important to know why these complicated systems are making the decisions that they are," Sameer Singh, assistant professor of computer science at the University of California, Irvine, told CTV's Your Morning on Tuesday. Singh explained that in almost every application of machine learning and AI, there are cases where the computers do something completely unexpected. "Sometimes it's a good thing, it's doing something much smarter than we realize," he said. Sometimes, though, the surprise is unwelcome, as with Microsoft's AI chatbot Tay, which became racist in less than a day.


Machine Learning, OSS & Ethical Conduct

#artificialintelligence

Machine learning is the subfield of computer science that "gives computers the ability to learn without being explicitly programmed" (Arthur Samuel, 1959).[1] Evolved from the study of pattern recognition and computational learning theory in artificial intelligence,[2] machine learning explores the study and construction of algorithms that can learn from, and make predictions on, input data.[3] Such algorithms are not bound by static program instructions; instead, they make their predictions or decisions according to models they themselves build from sample inputs. Machine learning is employed in a range of computing tasks where designing and programming explicit algorithms is unfeasible; example applications include spam filtering, optical character recognition (OCR),[5] search engines and computer vision. Machine learning is also employed experimentally to detect patterns and linkages in seemingly random or unrelated data. (Asimov's Second Law of Robotics: a robot must obey orders given it by human beings except where such orders would conflict with the First Law.)
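The distinction drawn above, between static program instructions and a model built from sample inputs, can be illustrated with a toy spam filter. This is a deliberately minimal sketch, not any production system: the word-count "model" and the example messages are invented for illustration, and a real filter would use a proper statistical classifier.

```python
from collections import Counter

def train(examples):
    """Build per-label word counts from (text, label) pairs.

    The 'model' is nothing but statistics gathered from the sample
    inputs -- no hand-written spam rule appears anywhere in the code.
    """
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(model, text):
    """Label a message by how often each label has seen its words."""
    def score(label):
        return sum(model[label][w] for w in text.lower().split())
    return "spam" if score("spam") > score("ham") else "ham"

# Invented sample inputs; the behavior comes entirely from these.
examples = [
    ("win a free prize now", "spam"),
    ("free money click now", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
model = train(examples)
print(predict(model, "claim your free prize"))   # prints "spam"
print(predict(model, "notes from the meeting"))  # prints "ham"
```

Changing the training examples changes the classifier's decisions without touching a line of code, which is the sense in which such a system learns "without being explicitly programmed."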


Killer robot used by Dallas police opens ethical debate

Associated Press

Dallas police respond after shots were fired during a protest over recent fatal shootings by police in Louisiana and Minnesota, Thursday, July 7, 2016, in Dallas. Snipers opened fire on police officers during protests; several officers were killed, police said.


The killer robot used by Dallas police appears to be a first

Associated Press

Dallas police respond after shots were fired during a protest over recent fatal shootings by police in Louisiana and Minnesota, Thursday, July 7, 2016, in Dallas. Snipers opened fire on police officers during protests; several officers were killed, police said.


Rights group files federal complaint against AI-hiring firm HireVue, citing 'unfair and deceptive' practices

#artificialintelligence

A prominent rights group is urging the Federal Trade Commission to take on the recruiting-technology company HireVue, arguing the firm has turned to unfair and deceptive trade practices in its use of face-scanning technology to assess job candidates' "employability." The Electronic Privacy Information Center, known as EPIC, on Wednesday filed an official complaint calling on the FTC to investigate HireVue's business practices, saying the company's use of unproven artificial-intelligence systems that scan people's faces and voices constituted a wide-scale threat to American workers. HireVue's "AI-driven assessments," which more than 100 employers have used on more than a million job candidates, use video interviews to analyze hundreds of thousands of data points related to a person's speaking voice, word selection and facial movements. The system then creates a computer-generated estimate of the candidates' skills and behaviors, including their "willingness to learn" and "personal stability." Candidates aren't told their scores, but employers can use those reports to decide whom to hire or disregard.