The Liability Problem for Autonomous Artificial Agents

AAAI Conferences

This paper describes and frames a central ethical issue, the liability problem, facing the regulation of artificial computational agents, including artificial intelligence (AI) and robotic systems, as they become increasingly autonomous and supersede current capabilities. While it frames the issue in the legal terms of liability and culpability, these terms are deeply interconnected with their ethical and moral correlate, responsibility. For society to benefit from advances in AI technology, it will be necessary to develop regulatory policies that manage the risk and liability of deploying systems with increasingly autonomous capabilities. However, current approaches to liability have difficulty dealing with autonomous artificial agents because their behavior may be unpredictable to those who create and deploy them, and because they will not be proper legal or moral agents. This problem motivates a research project that will explore the fundamental concepts of autonomy, agency, and liability; clarify the different varieties of agency that artificial systems might realize, including causal, legal, and moral agency; and illuminate the relationships between these. The paper frames the problem of liability in autonomous agents, sketches its relation to fundamental concepts in human legal and moral agency, including autonomy, agency, causation, intention, responsibility, and culpability, and considers their applicability or inapplicability to autonomous artificial agents.


Translation: Excerpts from China's 'White Paper on Artificial Intelligence Standardization'

#artificialintelligence

This translation by Jeffrey Ding, edited by Paul Triolo, covers some of the most interesting parts of the Standards Administration of China's 2018 White Paper on Artificial Intelligence Standardization, a joint effort by more than 30 academic and industry organizations overseen by the Chinese Electronics Standards Institute. Ding, Triolo, and Samm Sacks describe the importance of this white paper and other Chinese government efforts to influence global AI development and policy formulation in their companion piece, "Chinese Interests Take a Big Seat at the AI Governance Table." Historical experience demonstrates that new technologies can often improve productivity and promote societal progress. At the same time, because artificial intelligence (AI) is still in the early phase of development, the policies, laws, and standards for safety, ethics, and privacy in this area deserve attention. In the case of AI technology, issues of safety, ethics, and privacy have a direct impact on people's trust in AI as they interact with AI tools.


How AI can be a force for good

Science

Artificial intelligence (AI) is not just a new technology that requires regulation. It is a powerful force that is reshaping daily practices, personal and professional interactions, and environments. For the well-being of humanity, it is crucial that this power is used as a force for good. Ethics plays a key role in this process by ensuring that the regulation of AI harnesses its potential while mitigating its risks. AI may be defined in many ways.


Artificial Intelligence and Machine Learning: Policy Paper

#artificialintelligence

AI traditionally refers to an artificial creation of human-like intelligence that can learn, reason, plan, perceive, or process natural language [1]. Several issues must be considered when addressing AI, including socio-economic impacts; transparency, bias, and accountability; new uses for data; security and safety; ethical questions; and how AI facilitates the creation of new ecosystems. At the same time, this complex field faces specific challenges, including a lack of transparency and interpretability in decision-making; issues of data quality and potential bias; safety and security implications; considerations regarding accountability; and potentially disruptive impacts on social and economic structures.