Both the ethics of autonomous systems and the problems of their technical implementation have by now been studied in some detail. Less attention has been given to the areas in which these two separate concerns meet. This paper, written by both philosophers and engineers of autonomous systems, addresses a number of issues in machine ethics that are located at precisely the intersection between ethics and engineering. We first discuss the main challenges which, in our view, machine ethics poses to moral philosophy. We then consider different approaches to the conceptual design of autonomous systems and their implications for the implementation of ethics in such systems. Next we examine problematic areas regarding the specification and verification of ethical behavior in autonomous systems, particularly with a view towards the requirements of future legislation. We discuss transparency and accountability issues that will be crucial for any future wide deployment of autonomous systems in society. Finally, we consider the often overlooked possibility of intentional misuse of AI systems and the possible dangers arising from the deliberately unethical design, implementation, and use of autonomous robots.
The challenge of establishing assurance in autonomy is rapidly attracting increasing interest in industry, government, and academia. Autonomy is a broad and expansive capability that enables systems to behave without direct control by a human operator. As such, it is expected to be present in a wide variety of systems and applications. A vast range of industrial sectors, including (but by no means limited to) defense, mobility, health care, manufacturing, and civilian infrastructure, are embracing the opportunities of autonomy, yet sooner or later face similar barriers to establishing the necessary level of assurance. Numerous government agencies are poised to tackle the challenges of assured autonomy. Given the already immense interest and investment in autonomy, a series of workshops on Assured Autonomy was convened to facilitate dialogue and increase awareness among stakeholders in academia, industry, and government. This series of three workshops aimed to help create a unified understanding of the goals for assured autonomy, the research trends and needs, and a strategy that will facilitate sustained progress in autonomy. The first workshop, held in October 2019, focused on current and anticipated challenges and problems in assuring autonomous systems within and across applications and sectors. The second workshop, held in February 2020, focused on existing capabilities, current research, and research trends that could address the challenges and problems identified in the first workshop. The third event was dedicated to a discussion of a draft of the major findings from the previous two workshops and the resulting recommendations.
After more than a decade of intense focus on automated vehicles, huge challenges remain before the vision of fully autonomous driving becomes a reality. The same "disillusionment" holds in many other domains in which autonomous Cyber-Physical Systems (CPS) could considerably help to overcome societal challenges and be highly beneficial to society and individuals. Taking the automotive domain, i.e. highly automated vehicles (HAV), as an example, this paper sets out to summarize the major challenges that still have to be overcome to achieve safe, secure, reliable, and trustworthy highly automated or autonomous CPS. We confine ourselves to technical challenges, acknowledging the importance of (legal) regulation, certification, standardization, ethics, and societal acceptance, to name but a few, without delving deeper into them, as this is beyond the scope of this paper. Four challenges have been identified as the main obstacles to realizing HAV: realization of continuous, post-deployment system improvement; handling of uncertainties and incomplete information; verification of HAV with machine learning components; and prediction. Each of these challenges is described in detail, including sub-challenges and, where appropriate, possible approaches to overcome them. By working together in a common effort between industry and academia and focusing on these challenges, the authors hope to contribute to overcoming the "disillusionment" and realizing HAV.
Verified artificial intelligence (AI) is the goal of designing AI-based systems that are provably correct with respect to mathematically specified requirements. This paper considers Verified AI from a formal methods perspective. We describe five challenges for achieving Verified AI and five corresponding principles for addressing these challenges.
Belloni, Aline (Ardans SA) | Berger, Alain (Ardans SA) | Boissier, Olivier (ENS Mines Saint-Etienne) | Bonnet, Grégory (Normandie Université) | Bourgne, Gauvain (Pierre and Marie Curie University) | Chardel, Pierre-Antoine (Telecom Management School) | Cotton, Jean-Pierre (Ardans SA) | Evreux, Nicolas (Ardans SA) | Ganascia, Jean-Gabriel (Pierre and Marie Curie University) | Jaillon, Philippe (ENS Mines Saint-Etienne) | Mermet, Bruno (Normandie University) | Picard, Gauthier (ENS Mines Saint-Etienne) | Rever, Bernard (Paris Descartes University) | Simon, Gaële (Normandie University) | Swarte, Thibault de (Telecom Management School) | Tessier, Catherine (Onera) | Vexler, François (Ardans SA) | Voyer, Robert (Telecom Management School) | Zimmermann, Antoine (ENS Mines Saint-Etienne)
Autonomy and agency are central properties in robotic systems, human-machine interfaces, e-business, ambient intelligence, and assisted-living applications. As the complexity of the situations that autonomous agents may encounter in such contexts increases, the decisions those agents make must integrate new issues, e.g. decisions involving contextual ethical considerations. Consequently, contributions have proposed recommendations, advice, or hard-wired ethical principles for systems of autonomous agents. However, socio-technical systems are increasingly open and decentralized, and involve autonomous artificial agents interacting with other agents, human operators, or users. For such systems, novel and original methods are needed to address contextual ethical decision-making, as decisions are likely to interfere with one another. This paper presents the ETHICAA project (Ethics and Autonomous Agents), whose objective is to define what an autonomous entity capable of managing ethical conflicts should be. As a first proposal, we present several practical case studies of ethical conflicts and highlight their main system and decision features.