Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions

#artificialintelligence

Artificial intelligence (AI) may play an increasingly essential[1] role in criminal acts in the future. Criminal acts are defined here as any act (or omission) constituting an offence punishable under English criminal law,[2] without loss of generality to jurisdictions that similarly define crime. Evidence of "AI-Crime" (AIC) is provided by two (theoretical) research experiments. In the first, two computational social scientists (Seymour and Tully 2016) used AI as an instrument to convince social media users to click on phishing links within mass-produced messages. Because each message was constructed using machine learning techniques applied to users' past behaviours and public profiles, the content was tailored to each individual, thus camouflaging the intention behind each message. If the potential victim had clicked on the phishing link and filled in the subsequent web form, then (in real-world circumstances) a criminal would have obtained personal and private information that could be used for theft and fraud. AI-fuelled crime may also impact commerce. In the second experiment, three computer scientists (Martínez-Miranda et al. 2016) simulated a market and found that trading agents could learn and execute a "profitable" market manipulation campaign comprising a set of deceitful false orders. These two experiments show that AI provides a feasible and fundamentally novel threat, in the form of AIC. The importance of AIC as a distinct phenomenon has not yet been acknowledged. The literature on AI's ethical and social implications focuses on regulating and controlling AI's civil uses, rather than considering its possible role in crime (Kerr 2004).


The Liability Problem for Autonomous Artificial Agents

AAAI Conferences

This paper describes and frames a central ethical issue, the liability problem, facing the regulation of artificial computational agents, including artificial intelligence (AI) and robotic systems, as they become increasingly autonomous and supersede current capabilities. While it frames the issue in the legal terms of liability and culpability, these terms are deeply imbued and interconnected with their ethical and moral correlate, responsibility. In order for society to benefit from advances in AI technology, it will be necessary to develop regulatory policies that manage the risk and liability of deploying systems with increasingly autonomous capabilities. However, current approaches to liability have difficulties when it comes to dealing with autonomous artificial agents, because their behavior may be unpredictable to those who create and deploy them, and they will not be proper legal or moral agents. This problem is the motivation for a research project that will explore the fundamental concepts of autonomy, agency and liability; clarify the different varieties of agency that artificial systems might realize, including causal, legal and moral agency; and illuminate the relationships between these. The paper will frame the problem of liability in autonomous agents and sketch out its relation to fundamental concepts in human legal and moral agency (including autonomy, agency, causation, intention, responsibility and culpability), as well as their applicability or inapplicability to autonomous artificial agents.


Artificial intelligence and its legal challenges (Lexology)

#artificialintelligence

Is there a greater challenge than to write a legal article on an emerging technology that does not yet exist in its absolute form? Artificial intelligence, through a broad spectrum of branches and applications, will impact corporate and business integrity, corporate governance, distribution of financial products and services, intellectual property rights, privacy and data protection, employment, civil and contractual liability, and a significant number of other legal fields. Artificial intelligence is "the science and engineering of making intelligent machines, especially intelligent computer programs".1 Essentially, artificial intelligence technologies aim to allow machines to mimic "cognitive" functions of humans, such as learning and problem solving, so that they can carry out tasks normally performed by humans. In practice, the functions of artificial intelligence are achieved by accessing and analyzing massive amounts of data (also known as "big data") via certain algorithms. As set forth in a report published by McKinsey & Company in 2013 on disruptive technologies, "[i]mportant technologies can come in any field or emerge from any scientific discipline, but they share four characteristics: high rate of technological change, broad potential scope of impact, large economic value that could be affected, and substantial potential for disruptive economic impact".2


Should an artificial intelligence be allowed to get a patent?

#artificialintelligence

Whether an A.I. ought to be granted patent rights is a timely question given the increasing proliferation of A.I. in the workplace. For example: Daimler-Benz has tested self-driving trucks on public roads,[1] A.I. technology has been applied effectively in medical advancements, psycholinguistics, tourism and food preparation,[2] a film written by an A.I. recently debuted online,[3] and A.I. has even found its way into the legal profession.[4] There is also growing interest in whether an A.I. can enjoy copyright rights, with several articles already published on the subject of A.I. and copyright.[5] In 2014 the U.S. Copyright Office updated its Compendium of U.S. Copyright Office Practices with, inter alia, a declaration that the Office "will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author."[6] One might argue that intellectual property (IP) laws and IP rights were designed to exclusively benefit human creators and inventors,[7] and thus would exclude non-humans from holding IP rights. The U.S. Copyright Office's December 2014 update to the Compendium, which added requirements for human authorship,[8] certainly adds weight to this view.


Teaching AI, Ethics, Law and Policy

arXiv.org Artificial Intelligence

Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers and policy makers. There is a need to address professional responsibility, ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.