"Killer Robots" worry the international community. From 13 to 17 November 2017, the Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS), familiarly known as "killer robots", met for the first time in Geneva (UN Office at Geneva). LAWS are, broadly speaking, autonomous systems (robots) animated by artificial intelligence, which can kill without a human decision. As stated in a preliminary paper, the creation of the Group reflects an international concern "with the implications for warfare of a new suite of technologies including artificial intelligence and deep machine learning" (UNODA Occasional Papers No. 30, "Perspectives on Lethal Autonomous Weapon Systems", November 2017: 1).
The U.S. and China are locked in an increasingly heated struggle for superpower status. Many initially perceived this confrontation only through the lens of a trade war. However, the ZTE "saga" already indicated that the issue was broader, involving a battle for supremacy over 21st-century technologies and, relatedly, for international power (see When AI Started Creating AI – Artificial Intelligence and Computing Power, 7 May 2018). The technological battle increasingly looks like a fight to the death, with the offensive against Huawei aiming notably to protect future 5G networks (Cassell Bryan-Low, Colin Packham, David Lague, Steve Stecklow and Jack Stubbs, "The China Challenge: the 5G Fight", Reuters Investigates, 21 May 2019). For Huawei and China, as well as for the world, the consequences are far-reaching: after Google "stopping Huawei's Android license" and an Intel and Qualcomm ban, the British chip designer ARM, held notably by Japan's SoftBank, has now halted relations with Huawei (Paul Sandle, "ARM supply halt deals fresh blow to Chinese tech giant Huawei", Reuters, 22 May 2019; "DealBook Briefing: The Huawei Backlash Goes Global", The New York Times, 23 May 2019; Tom Warren, "Huawei's Android And Windows Alternatives Are Destined For Failure", The Verge, 23 May 2019). A highly possible forthcoming American move against China's Hikvision, one of the world's largest producers of video surveillance systems involving notably "artificial intelligence, speech monitoring and genetic testing", would only further confirm the American offensive (Doina Chiacu, Stella Qi, "Trump says 'dangerous' Huawei could be included in U.S.-China trade deal", Reuters, 23 May 2019; Ana Swanson and Edward Wong, "Trump Administration Could Blacklist China's Hikvision, a Surveillance Firm", The New York Times, 21 May 2019).
China, for its part, responds to both the trade war and the technological fight with an ideologically martial mobilisation of its population along the lines of the "People's War" and "The Long March", and by changing TV scheduling to broadcast war films (Iris Zhao and Alan Weedon, "Chinese television suddenly switches scheduling to anti-American films amid US-China trade war", ABC News, 20 May 2019; Michael Martina, David Lawder, "Prepare for difficult times, China's Xi urges as trade war simmers", Reuters, 22 May 2019). This highlights how much is at stake for the Middle Kingdom, as we explained previously (Sensor and Actuator (4): Artificial Intelligence, the Long March towards Advanced Robots and Geopolitics).
The explanatory dimension of Artificial Intelligence (AI)-based systems has been a hot topic in recent years. Different communities have raised concerns about the increasing presence of AI in people's everyday tasks and how it can affect their lives. Much research addresses the interpretability and transparency concepts of explainable AI (XAI), which are usually related to algorithms and Machine Learning (ML) models. But in decision-making scenarios, people need greater awareness of how AI works and of its outcomes in order to build a relationship with the system. Decision-makers in various domains usually need to justify their decisions to others. If a decision is somehow based on or influenced by an AI system's outcome, an explanation of how the AI reached that result is key to building trust between AI and humans in decision-making scenarios. In this position paper, we discuss the role of XAI in decision-making scenarios, present our vision of decision-making with an AI system in the loop, and explore one case from the literature showing how XAI can affect the way people justify their decisions, considering the importance of building the human-AI relationship in those scenarios.
Artificial intelligence (AI) systems operate in increasingly diverse areas, from healthcare to facial recognition, the stock market, autonomous vehicles, and beyond. While the underlying digital infrastructure of AI systems is developing rapidly, each area of implementation is subject to different degrees and processes of legitimization. By combining elements from institutional theory and information systems theory, this paper presents a conceptual framework to analyze and understand AI-induced field change. The introduction of novel AI agents into new or existing fields creates a dynamic in which algorithms (re)shape organizations and institutions, while existing institutional infrastructures determine the scope and speed at which organizational change is allowed to occur. Where institutional infrastructure and governance arrangements, such as standards, rules, and regulations, are still unelaborated, the field can move fast but is also more likely to be contested. The institutional infrastructure surrounding AI-induced fields is generally underdeveloped, which could be an obstacle to the broader institutionalization of AI systems going forward.
Results released June 16, 2021 – Pew Research Center and Elon University's Imagining the Internet Center asked experts where they thought efforts aimed at ethical artificial intelligence design would stand in the year 2030. Some 602 technology innovators, developers, business and policy leaders, researchers and activists responded to this specific question. The Question – Regarding the application of AI ethics by 2030: In recent years, there have been scores of convenings, and even more papers generated, proposing ethical frameworks for the application of artificial intelligence (AI). They cover a host of issues including transparency, justice and fairness, privacy, freedom and human autonomy, beneficence and non-maleficence, trust, sustainability and dignity. Our questions here seek your predictions about the possibilities for such efforts. By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public ...