Bias detectives: the researchers striving to make algorithms fair

#artificialintelligence

In 2015, a worried father asked Rhema Vaithianathan a question that still weighs on her mind. A small crowd had gathered in a basement room in Pittsburgh, Pennsylvania, to hear her explain how software might tackle child abuse. Each day, the area's hotline receives dozens of calls from people who suspect that a child is in danger; some of these are then flagged by call-centre staff for investigation. But the system does not catch all cases of abuse. Vaithianathan and her colleagues had just won a half-million-dollar contract to build an algorithm to help. Vaithianathan, a health economist who co-directs the Centre for Social Data Analytics at the Auckland University of Technology in New Zealand, told the crowd how the algorithm might work. For example, a tool trained on reams of data -- including family backgrounds and criminal records -- could generate risk scores when calls come in.
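
The excerpt only sketches the approach, but the underlying idea is a standard supervised classifier trained on historical referrals. Below is a minimal sketch, assuming a logistic-regression model and entirely made-up features and labels; none of it reflects the actual tool Vaithianathan's team built for the Pittsburgh-area agency.

```python
# Illustrative only: hypothetical features and labels, not the real screening tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is a past referral; columns are assumed features such as
# number of prior referrals, a parental criminal-record flag, and household size.
X_train = np.array([
    [0, 0, 3],
    [2, 1, 5],
    [1, 0, 2],
    [4, 1, 6],
])
# 1 = the referral was later substantiated, 0 = it was not (made-up labels).
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

# Score an incoming call: the predicted probability acts as a risk score
# that call-centre staff could weigh alongside their own judgement.
incoming_call = np.array([[3, 1, 4]])
risk_score = model.predict_proba(incoming_call)[0, 1]
print(f"Risk score: {risk_score:.2f}")
```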


Elon Musk and 115 other experts ask the UN to ban killer robots in open letter

Mashable

Elon Musk, Google DeepMind co-founder Mustafa Suleyman, and 114 other leading AI and robotics experts have joined together to ask the UN to ban the use of so-called killer robots in an open letter published today. Concerned about how lethal autonomous weapons might be used in the future, the group penned a short letter released by the Future of Life Institute. The text was made public to coincide with the opening of the International Joint Conference on Artificial Intelligence (IJCAI 2017) in Melbourne, Australia, according to a press release. "Lethal autonomous weapons" refers to drones, autonomous machine guns, tanks, and other forms of weaponry controlled by AI on next-generation battlefields. Musk, for one, is famously wary of AI's potential to go bad, recently calling it "the greatest threat we face as a civilization," above even nuclear weapons -- but the open letter is the first time a group of AI and robotics companies has joined forces to petition the UN specifically about autonomous weapons, according to the release.


Elon Musk And Over 100 AI Experts Are Urging The UN to Ban Killer Robots

#artificialintelligence

Elon Musk and more than 100 leaders and experts in artificial intelligence (AI) have come together to urge the UN to commit to an outright ban on killer robot technology. An open letter signed by Musk, Google DeepMind's Mustafa Suleyman, and 114 other AI and robotics specialists urges the UN to prevent "the third revolution in warfare" by banning the development of all lethal autonomous weapon systems. The open letter, released to coincide with the world's largest conference on AI – IJCAI 2017, which is taking place in Melbourne, Australia this week – warns of a near future in which independent machines will be able to choose and engage their own targets, including innocent humans as well as enemy combatants. "Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the consortium writes. "These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."


Animals evolved 'extreme weapons' through duels, scientists say after forcing artificial intelligence agents to fight each other

The Independent - Tech

Simulated warfare between artificial intelligence participants has revealed that "extraordinary forms" of extreme weaponry evolve when combatants fight each other in one-on-one duels. Researchers at the University of Auckland in New Zealand pitted AI players against each other in a war game to better understand how animals evolve weapons. They found that combatants with improved weapons had a large advantage when fighting in duels, but that this advantage deteriorated when there were more rivals to fight against. The findings suggest that arms races between animals, and in other types of conflict, are more likely to be accelerated when there are only two opponents. The study was based on a current evolutionary hypothesis that predicts the evolution of elaborate weaponry in duel-based systems, such as the exaggerated horns wielded by male dung beetles and the antlers wielded by stags when fighting over females.
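
The effect the researchers describe can be illustrated with a toy contest model. The sketch below is not the Auckland team's war game; it simply assumes that winning odds scale with weapon strength, and it shows how the boost in win rate from a better weapon shrinks as the number of rivals grows.

```python
# Toy illustration of duel vs. free-for-all contests (an assumption made for
# illustration, not the Auckland researchers' simulation).
import random

def win_probability(focal_strength, n_rivals, rival_strength=1.0, trials=100_000):
    """Estimate how often a combatant with the given strength beats n_rivals."""
    wins = 0
    for _ in range(trials):
        total = focal_strength + n_rivals * rival_strength
        # Winning odds are proportional to weapon strength in this toy model.
        if random.random() < focal_strength / total:
            wins += 1
    return wins / trials

# A combatant whose weapon is twice as strong as its rivals'.
for n in (1, 3, 9):
    p = win_probability(focal_strength=2.0, n_rivals=n)
    baseline = 1 / (n + 1)  # expected win rate with an ordinary weapon
    print(f"{n} rival(s): win rate {p:.2f}, gain from the better weapon {p - baseline:+.2f}")
```

With a single rival the better weapon lifts the win rate substantially; with nine rivals the gain is much smaller, echoing the reported deterioration of the advantage when more combatants join the fight.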


Rise of the machines?

FOX News

The rise of autonomous weapons could be a real threat, warned researchers at the recent World Economic Forum. Unlike today's drones, which are still controlled by human operators, autonomous weapons could potentially be programmed to select and engage targets on their own. "It was one of the concerns that we itemized last year," Toby Walsh, professor of artificial intelligence (AI) at the school of computer science and engineering at the University of New South Wales, told FoxNews.com. "Most of us believe that we don't have the ability to build ethical robots," he added. "What is especially worrying is that the various militaries around the world will be fielding robots in just a few years, and we don't think anyone will be building ethical robots."