Social & Ethical Issues


Microsoft's president met with the pope to talk about ethical AI

#artificialintelligence

The duo announced that they would jointly sponsor a prize for AI researchers working to develop ethical, responsible AI tech. The winner will be someone who has presented a doctoral thesis on artificial intelligence tech that stands to benefit the common good, The Seattle Times reports. The $6,900 prize and an invitation to Microsoft's office in Seattle are paltry compared to what the two could have offered up, but the contest still represents an increasingly public drive to make sure that AI is developed with the needs of humanity in mind.


Frameworks Seek to Control AI

#artificialintelligence

AI governance frameworks are emerging as guardrails for algorithms that play a growing role in human decision-making; among the goals is managing the consequences of those decisions. Business consultants and professional services firms in particular have focused on new ways to assess and control AI algorithms as a means of building trust. Among them is KPMG, which this week launched a framework called AI in Control, designed to assess the algorithms underlying business applications, spot bias, and enforce governance rules in support of ethical AI. The goal of KPMG's framework is to foster AI algorithms that are accurate, closing what the company warns is a current "trust gap" among business executives clamoring for "explainable AI."
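Frameworks of this kind typically reduce "spotting bias" to concrete, auditable checks on a model's outputs. As a minimal illustrative sketch (the metric, the data, and the 0.10 threshold below are assumptions chosen for illustration, not details of KPMG's AI in Control product), a simple demographic-parity check might look like this in Python:

    def demographic_parity_gap(preds, groups, group_a, group_b):
        """Absolute difference in positive-prediction rates between two groups."""
        def rate(g):
            members = [p for p, grp in zip(preds, groups) if grp == g]
            return sum(members) / max(1, len(members))  # guard against empty groups
        return abs(rate(group_a) - rate(group_b))

    # Hypothetical binary loan-approval predictions for applicants in groups "A" and "B".
    preds  = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    gap = demographic_parity_gap(preds, groups, "A", "B")
    print(f"Demographic parity gap: {gap:.2f}")
    if gap > 0.10:  # governance rule: flag gaps above an agreed threshold
        print("Flag for review: approval rates differ materially across groups.")

A governance framework would run many such checks, across many fairness metrics, on an ongoing basis; the point of the sketch is only that "enforcing governance rules" becomes testable code once a metric and a threshold are agreed.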


Scientists call for a ban on AI-controlled killer robots

#artificialintelligence

"We are not talking about walking, talking terminator robots that are about to take over the world; what we are concerned about is much more imminent: conventional weapons systems with autonomy," Human Right's Watch advocacy director Mary Wareham told the BBC. Another big question that arises: who is responsible when a machine does decide to take a human life? Is it the person who made the machine? "The delegation of authority to kill to a machine is not justified and a violation of human rights because machines are not moral agents and so cannot be responsible for making decisions of life and death," associate professor from the New School in New York Peter Asaro told the BBC. But not everybody is on board to fully denounce the use of AI-controlled weapon systems.


Should I Open-Source My Model? – Towards Data Science

#artificialintelligence

I have worked for a long time on the tension between open-sourcing Machine Learning models and data sensitivity, especially in disaster-response contexts: when is it right or wrong to release data or a model publicly? This article is a list of frequently asked questions, the answers that reflect best practice today, and some examples of where I have encountered them. The criticism of OpenAI's decision not to release its full GPT-2 model included how it limits the research community's ability to replicate the results, and how the action itself feeds the media fear of AI that is hyperbolic right now. It was this tweet that first caught my eye. Anima Anandkumar has a lot of experience bridging the gap between research and practical applications of Machine Learning.


Call to ban killer robots in wars

BBC News

A group of scientists has called for a ban on the development of weapons controlled by artificial intelligence (AI). It says that autonomous weapons may malfunction in unpredictable ways and kill innocent people. Ethics experts also argue that it is a moral step too far for AI systems to kill without any human intervention. The comments were made at the American Association for the Advancement of Science meeting in Washington DC. Human Rights Watch (HRW) is one of the 89 non-governmental organisations from 50 countries that have formed the Campaign to Stop Killer Robots, to press for an international treaty.


Killer robots should be banned to prevent them wiping out humanity, scientists warn

Daily Mail

Killer robots should be banned to prevent them wiping out humanity, the world's largest gathering of scientists was told yesterday. While full-blown android soldiers remain the stuff of science fiction, advances in artificial intelligence mean machines with the power to select and attack targets without human input could soon be developed. Such robots represent the 'third revolution' in warfare after gunpowder and nuclear weapons, scientists and campaigners told the American Association for the Advancement of Science's annual meeting in Washington DC. Mary Wareham, from the Campaign to Stop Killer Robots, said: 'Bold leadership is needed for a treaty.' Backers include UN Secretary-General Antonio Guterres, who has called autonomous weapons 'politically unacceptable and morally repugnant'.


This is how AI in video games will change the future of work

#artificialintelligence

While DeepMind innovates in many fields, games like StarCraft, and Go before it, demonstrate a computer's ability to be intuitive. In this case, intuition means that the computer is able to act unconsciously, non-rationally, and quickly, going beyond ordinary processing to deeply understand the information and the situation at hand. Given that these games have an astronomically large number of possible moves, DeepMind's successes suggest that the AI is aware of its environment and of other players.


Deadly US Applications of Artificial Intelligence

#artificialintelligence

In the United States and around the world, public concern is rising at the prospect of weapons systems that would select and attack targets without human intervention. As the United States Department of Defense releases a strategy on artificial intelligence (AI), questions loom about whether the US government intends to accelerate its investments in weapons systems that would select and engage targets without meaningful human control. The strategy considers a range of potential, mostly benign uses of AI and makes the bold claim that AI can help "reduce the risk of civilian casualties" by enabling greater accuracy and precision. The strategy commits to consider how to handle hacking, bias, and "unexpected behavior," among other concerns. Scientists have long warned about the potentially disastrous consequences that could arise when complex algorithms, embedded in fully autonomous weapons systems built and deployed by opposing forces, meet in warfare.


Vatican, Microsoft team up on artificial intelligence ethics

#artificialintelligence

The Vatican says it is teaming up with Microsoft on an academic prize to promote ethics in artificial intelligence. Pope Francis met privately on Wednesday with Microsoft President Brad Smith and the head of a Vatican scientific office that promotes Catholic Church positions on human life. The Vatican said Smith and Archbishop Vincenzo Paglia of the Pontifical Academy for Life told Francis about the international prize for an individual who has successfully defended a dissertation on ethical issues involving artificial intelligence. The winner will receive 6,000 euros ($6,900) and an invitation to Microsoft's Seattle headquarters. The Vatican says Smith discussed artificial intelligence "at the service of the common good" during the papal meeting.


What are the biggest threats to humanity?

BBC News

Human extinction may be the stuff of nightmares but there are many ways in which it could happen. Popular culture tends to focus on only the most spectacular possibilities: think of the hurtling asteroid of the film Armageddon or the alien invasion of Independence Day. While a dramatic end to humanity is possible, focusing on such scenarios may mean ignoring the most serious threats we face in today's world. And these may be threats we can actually do something about. In 1815 an eruption of Mount Tambora, in Indonesia, killed more than 70,000 people and hurled volcanic ash into the upper atmosphere.