After little more than a week, Google backtracked on creating its Advanced Technology External Advisory Council, or ATEAC--a committee meant to give the company guidance on how to ethically develop new technologies such as AI. The inclusion of the Heritage Foundation's president, Kay Coles James, on the council caused an outcry over her anti-environmentalist, anti-LGBTQ, and anti-immigrant views, and led nearly 2,500 Google employees to sign a petition for her removal. Instead, the internet giant simply decided to shut down the whole thing. How did things go so wrong? And can Google put them right?
Google's attempt to wrest more cloud computing dollars from market leaders Amazon and Microsoft got a new boss late last year. Next week, Thomas Kurian is expected to lay out his vision for the business at the company's cloud computing conference, building on his predecessor's strategy of emphasizing Google's strength in artificial intelligence. That strategy is complicated by controversies over how Google and its clients use the powerful technology. After employee protests over a Pentagon contract in which Google trained algorithms to interpret drone imagery, the cloud unit now subjects its--and its customers'--AI projects to ethical reviews. Those reviews have caused Google to turn away some business.
The UK government is among a group of countries attempting to thwart plans to formulate and impose a pre-emptive ban on killer robots. Delegates have been meeting at the UN in Geneva all week to discuss potential restrictions under international law on so-called lethal autonomous weapons systems, which use artificial intelligence to help decide when and whom to kill. Most states taking part – and particularly those from the global south – support either a total ban or strict legal regulation governing their development and deployment, a position backed by the UN secretary general, António Guterres, who has described machines empowered to kill as "morally repugnant". But the UK is among a group of states – including Australia, Israel, Russia and the US – speaking forcefully against legal regulation. Because discussions operate on a consensus basis, their objections are preventing any progress on regulation.
A scientific coalition is urging a ban on the development of weapons governed by artificial intelligence (AI), warning they may malfunction unpredictably and kill innocent people. The coalition has established the Campaign to Stop Killer Robots to lobby for an international accord. Autonomous weapons "are beginning to creep in," said Human Rights Watch's Mary Wareham. "Drones are the obvious example, but there are also military aircraft that take off, fly, and land on their own; robotic sentries that can identify movement."
There is widespread public support for a ban on so-called "killer robots", which campaigners say would "cross a moral line" after which it would be difficult to return. Polling across 26 countries found over 60 per cent of the thousands asked opposed lethal autonomous weapons that can kill with no human input, and only around a fifth backed them. The figures showed public support was growing for a treaty to regulate these controversial new technologies--a treaty already being pushed by campaigners, scientists and many world leaders. However, a meeting in Geneva at the close of last year ended in a stalemate after nations including the US and Russia indicated they would not support the creation of such a global agreement. Mary Wareham of Human Rights Watch, who coordinates the Campaign to Stop Killer Robots, compared the movement to successful efforts to eradicate landmines from battlefields.
AI promises to be a boon to medical practice, improving diagnoses, personalizing treatment, and spotting future public-health threats. By 2024, experts predict, healthcare AI will be a nearly $20 billion market, with tools that transcribe medical records, assist surgery, and investigate insurance claims for fraud. Even so, the technology raises some knotty ethical questions. What happens when an AI system makes the wrong decision--and who is responsible if it does? How can clinicians verify, or even understand, what comes out of an AI "black box"?
A report from the recent conference on Computers, Privacy and Data Protection suggested that the European Commission is "considering the possibility of legislating for Artificial Intelligence." Karolina Mojzesowicz, Deputy Head, Data Protection Unit at the European Commission, said that the Commission is "assessing whether national and EU frameworks are fit for purpose for the new challenges." The Commission is exploring, for instance, whether to specify "how big a margin of error is acceptable in automated decisions and machine learning." The vehicle for this regulatory effort seems to be the draft Ethics Guidelines developed by a high-level expert group. The comment period on this draft closed on February 1, and a final report is due in March.
A hitchhiking robot was beheaded in Philadelphia. A security robot was punched to the ground in Silicon Valley. Another security bot, in San Francisco, was covered in a tarp and smeared with barbecue sauce. Why do people lash out at robots, particularly those built to resemble humans? It is a global phenomenon. In a mall in Osaka, Japan, three boys beat a humanoid robot with all their strength. In Moscow, a man attacked a teaching robot named Alantim with a baseball bat, kicking it to the ground, while the robot pleaded for help.
When the Montour School District launched America's first Artificial Intelligence Middle School program in the fall of 2018, many questions arose--"How?" being just one of them. But as a student-centered and future-focused district, the thought process was not whether we should teach AI, but what happens if we don't teach AI--and why isn't everyone teaching it? Through a series of courses developed and implemented by Montour team members and partners, the AI program officially launched in October 2018. To date, hundreds of classes have already been taught in the areas of AI Ethics, AI Autonomous Robotics, AI Computer Science, and AI Music. The goal is an all-inclusive AI program for all middle school students--one that is relevant and meaningful to the world children live in and prepares them for a future in which they will thrive.