The term "artificial intelligence" historically refers to systems that attempt to mimic or replicate human thought. This is not an accurate description of the actual science of artificial intelligence, and it implies a false choice between artificial and natural intelligences. That is why IBM and others have chosen different language to describe our work in this field. We feel that "cognitive computing" or "augmented intelligence" (terms describing systems designed to augment human thought, not replicate it) are more representative of our approach. There is little commercial or societal imperative for creating "artificial intelligence."
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted the need for the development of artificial intelligence (AI) in Australia to be accompanied by a framework that ensures no technology is imposed on citizens without appropriate ethical consideration. The organisation has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia. CSIRO highlights eight core principles that will guide the framework: that it generates net benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, ensures fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and maintains an accountability trail. "Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.
AI-powered loan and credit approval processes have been marred by unforeseen bias. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners. Unfortunately, there is no industry-standard, best-practices handbook on AI ethics for companies to follow, at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks. A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI.
Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policy makers. There is a need to address professional responsibility as well as ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law, and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.
Artificial intelligence (AI) relies on big data and machine learning for myriad applications, from autonomous vehicles to algorithmic trading, and from clinical decision support systems to data mining. The availability of large amounts of data is essential to the development of AI. Given China's large population and business sector, both of which use digitized platforms and tools to an unparalleled extent, it may enjoy an advantage in AI. In addition, it has fewer constraints on the use of information gathered from the digital footprint left by people and companies. India has likewise taken a series of steps to digitize its economy, including biometric identity tokens, demonetization, and an integrated goods and services tax.