The term "artificial intelligence" historically refers to systems that attempt to mimic or replicate human thought. This is not an accurate description of the actual science of artificial intelligence, and it implies a false choice between artificial and natural intelligence. That is why IBM and others have chosen different language to describe our work in this field. We feel that "cognitive computing" or "augmented intelligence" -- terms that describe systems designed to augment human thought, not replicate it -- are more representative of our approach. There is little commercial or societal imperative for creating "artificial intelligence."
Companies and entire industries are looking to harness data analytics to make more accurate and effective decisions, within and across organizations. Such real-time, accurate insights have enabled boards and their management to conduct their duties more effectively. Artificial intelligence (AI) mimics the learning function of the human brain, which means it could be deliberately or accidentally corrupted and could even adopt human biases, potentially resulting in mistakes and unethical decisions. Control of AI systems falling into the wrong hands is also a concern. Any AI system failure could have profound ramifications for security, decision-making, and credibility, and may lead to costly litigation, reputational damage, regulatory scrutiny, and reduced stakeholder trust and profitability.
The Commonwealth Scientific and Industrial Research Organisation (CSIRO) has highlighted the need for the development of artificial intelligence (AI) in Australia to be accompanied by a sufficient framework, ensuring that nothing is imposed on citizens without appropriate ethical consideration. The organisation has published a discussion paper [PDF], Artificial Intelligence: Australia's Ethics Framework, on the key issues raised by large-scale AI, seeking answers to a handful of questions that are expected to inform the government's approach to AI ethics in Australia. CSIRO highlights eight core principles that will guide the framework: that AI generates net benefits, does no harm, complies with regulatory and legal requirements, appropriately considers privacy, upholds fairness, is transparent and easily explained, contains provisions for contesting a decision made by a machine, and leaves an accountability trail. "Australia's colloquial motto is a 'fair go' for all. Ensuring fairness across the many different groups in Australian society will be challenging, but this cuts right to the heart of ethical AI," CSIRO wrote.
AI-powered loan and credit approval processes have been marred by unforeseen bias. Smart speakers have secretly turned on and recorded thousands of minutes of audio of their owners. Unfortunately, there is no industry-standard, best-practices handbook on AI ethics for companies to follow -- at least not yet. Some large companies, including Microsoft and Google, are developing their own internal ethical frameworks. A number of think tanks, research organizations, and advocacy groups, meanwhile, have been developing a wide variety of ethical frameworks and guidelines for AI.
Cyberspace and the development of new technologies, especially intelligent systems using artificial intelligence, present enormous challenges to computer professionals, data scientists, managers, and policy makers. There is a need to address professional responsibility along with ethical, legal, societal, and policy issues. This paper presents problems and issues relevant to computer professionals and decision makers and suggests a curriculum for a course on ethics, law, and policy. Such a course will create awareness of the ethical issues involved in building and using software and artificial intelligence.