A new generation of autonomous weapons, or "killer robots", could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned. Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned. Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons. She said that, unlike drones, which are controlled by military teams often thousands of miles from where the flying weapon is deployed, killer robots have the potential to do "calamitous things that they were not originally programmed for". Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva on the dangers posed by autonomous weapons, said: "The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed. There could be large-scale accidents because these things will start to behave in unexpected ways."
Artificial intelligence (AI) is set to play a key role in the future of financial services and, more broadly, in what UBS and the World Economic Forum refer to as the "Fourth Industrial Revolution." The global economy is on the cusp of profound changes driven by "extreme automation" and "extreme connectivity." In this changing economic landscape, AI is expected to be a pervasive feature, making it possible to automate some of the skills that formerly only humans possessed. In the financial services industry in particular, there has been a lot of noise around the potential of AI, and the data suggest that investors are excited about the impact the technology could have across the industry. VC-backed fintech AI companies raised approximately US$2.22 billion in funding in 2018, nearly twice 2017's record total.
In 2016, the Johannesburg team at IBM Research discovered that the process of reporting cancer data to the government, which used it to inform national health policies, took four years after diagnosis in hospitals. In the US, the equivalent data collection and analysis takes only two years. The additional lag turned out to be due in part to the unstructured nature of the hospitals' pathology reports. Human experts were reading each case and classifying it into one of 42 different cancer types, but the free-form text on the reports made this very time-consuming. So the researchers went to work on a machine-learning model that could label the reports automatically.
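The task described above, automatically assigning a category label to free-form report text, is a standard text-classification problem. The sketch below is purely illustrative and is not IBM's model: it implements a tiny multinomial Naive Bayes classifier from scratch, and all the training snippets and the two labels are invented stand-ins for real labelled pathology reports.

```python
# Illustrative sketch of text classification, the general technique behind
# auto-labelling free-text reports. NOT IBM's model; all data here is invented.
import math
from collections import Counter, defaultdict


def tokenize(text):
    """Lowercase, whitespace-split tokenizer (deliberately minimal)."""
    return text.lower().split()


class NaiveBayesClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> document count
        self.vocab = set()

    def fit(self, samples):
        """samples: iterable of (text, label) pairs."""
        for text, label in samples:
            self.label_counts[label] += 1
            for word in tokenize(text):
                self.word_counts[label][word] += 1
                self.vocab.add(word)

    def predict(self, text):
        """Return the label with the highest posterior log-probability."""
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # log prior, then add log likelihoods with add-one smoothing
            score = math.log(self.label_counts[label] / total_docs)
            n_words = sum(self.word_counts[label].values())
            for word in tokenize(text):
                count = self.word_counts[label][word]
                score += math.log((count + 1) / (n_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Hypothetical training data, standing in for labelled pathology reports.
train = [
    ("irregular pigmented lesion on skin biopsy", "melanoma"),
    ("pigmented skin lesion with atypical melanocytes", "melanoma"),
    ("mass in breast tissue with ductal cells", "breast carcinoma"),
    ("invasive ductal carcinoma found in breast biopsy", "breast carcinoma"),
]
clf = NaiveBayesClassifier()
clf.fit(train)
print(clf.predict("atypical pigmented lesion in skin sample"))  # -> melanoma
```

A production system of the kind the article describes would train on many thousands of expert-labelled reports across all 42 cancer types and use far richer features, but the core idea, learning word statistics per category and scoring new documents against them, is the same.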
After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC, a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing. How did things go so wrong? And can Google put them right?
Google's attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company's cloud computing conference, building on his predecessor's strategy of emphasizing Google's strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its own and its customers' AI projects to ethical reviews. These reviews have caused Google to turn away some business.
The UK government is among a group of countries attempting to thwart plans to formulate and impose a pre-emptive ban on killer robots. Delegates have been meeting at the UN in Geneva all week to discuss potential restrictions under international law on so-called lethal autonomous weapons systems, which use artificial intelligence to help decide when, and whom, to kill. Most states taking part – and particularly those from the global south – support either a total ban or strict legal regulation governing their development and deployment, a position backed by the UN secretary general, António Guterres, who has described machines empowered to kill as "morally repugnant". But the UK is among a group of states – including Australia, Israel, Russia and the US – speaking forcefully against legal regulation. As discussions operate on a consensus basis, their objections are preventing any progress on regulation.
A scientific coalition is urging a ban on the development of weapons governed by artificial intelligence (AI), warning they may malfunction unpredictably and kill innocent people. The coalition has established the Campaign to Stop Killer Robots to lobby for an international accord. Mary Wareham of Human Rights Watch said autonomous weapons "are beginning to creep in. Drones are the obvious example, but there are also military aircraft that take off, fly, and land on their own; robotic sentries that can identify movement."
There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found that more than 60 per cent of thousands of respondents opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them. The figures showed public support was growing for a treaty to regulate these controversial new technologies - a treaty already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in a stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.
AI promises to be a boon to medical practice, improving diagnoses, personalizing treatment, and spotting future public-health threats. By 2024, experts predict, healthcare AI will be a nearly $20 billion market, with tools that transcribe medical records, assist surgery, and investigate insurance claims for fraud. Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision, and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI "black box"?