The term "artificial intelligence" historically refers to systems that attempt to mimic or replicate human thought. This is not an accurate description of the actual science of artificial intelligence, and it implies a false choice between artificial and natural intelligence. That is why IBM and others have chosen different language to describe our work in this field. We feel that "cognitive computing" or "augmented intelligence" -- which describes systems designed to augment human thought, not replicate it -- is more representative of our approach. There is little commercial or societal imperative for creating "artificial intelligence."
This article is the fifth in a series on how business leaders can better prepare for the AI disruption. AI is great at cognitive thinking but terrible at ethical thinking. It is so bad at ethical judgment, in fact, that questions of ethics are likely to remain among the most challenging aspects of developing large-scale commercial applications of AI. The ethical and moral implications of AI can affect business, society, or both at once. Consider the Google employees who resigned -- and the thousands who co-signed a letter to their CEO -- in protest of Pentagon-funded projects.
We are in the foothills of an AI journey. On April 8, the EU issued in-depth guidelines on developing and implementing trustworthy artificial intelligence. The guidelines identify fundamental requirements for AI in Europe and set a global standard for efforts to advance AI that is ethical and responsible. "It's like putting the foundations in before you build a house. Now is the time to do it," said Liam Benham, Vice President of Government and Regulatory Affairs in Europe.
Global consulting and research firm Capgemini has released a new report exploring the opinions of dozens of industry experts on the ethics of using artificial intelligence (AI). The report, Conversations, collects the responses of experts from Harvard, Oxford University, Bayer, AXA and more, who offer critical insights on the range of ethical questions that the proliferation of AI has unleashed. "AI is set to radically change the way organisations manage their businesses, and is a revolutionary technology that will change the world we live in," commented Jerome Buvat, Global Head of the Capgemini Research Institute. "The interviews with leaders and practitioners for this new report emphasised its far-reaching implications, and the need to infuse ethics into the design of AI algorithms. They also placed immense importance on making AI transparent and understandable, in order to build greater trust."
AI holds fantastic opportunities for organisations large and small, and businesses are right to embrace them. Whether it is improving back-office operations, maximising marketing efforts or deploying predictive technologies to allocate resources more efficiently, algorithms have a lot to offer, and many organisations are already deploying AI systems. Talking with industry as well as policy makers, I notice that we all seem to share the same belief: innovation and ethics can go hand in hand. In fact, many believe that businesses that can use data, and do so ethically, have a clear competitive advantage. But how do we turn ethics into practice?