IBMVoice: Learning To Trust Artificial Intelligence Systems In The Age Of Smart Machines


The term "artificial intelligence" historically refers to systems that attempt to mimic or replicate human thought. This is not an accurate description of the actual science of artificial intelligence, and it implies a false choice between artificial and natural intelligences. That is why IBM and others have chosen to use different language to describe our work in this field. We feel that "cognitive computing" or "augmented intelligence" -- which describes systems designed to augment human thought, not replicate it -- are more representative of our approach. There is little commercial or societal imperative for creating "artificial intelligence."

How does society create an ethics guide for AI?


Dubbed the fourth industrial revolution, the advance of artificial intelligence and machine learning brings interesting discussions to the table. Because AI is so broad and touches so many industries, we find ourselves asking difficult questions such as "Should predictive AI policing be legal?" Among the questions arising, the key one that remains unanswered concerns ethics. How do we ensure that AI technologies are ethically designed? To answer this question, there are essentially four aspects that dictate the outcome: the dilemma, the impact, adoption, and institutionalization.

Intelligence without trust: a risky business


Companies and entire industries are looking to harness data analytics to make more accurate and effective decisions, both within and across organizations. Such real-time, accurate insights have enabled boards and their management to be more effective in carrying out their duties. Artificial intelligence (AI) mimics the learning function of the human brain, which means it can be deliberately or accidentally corrupted and can even adopt human biases, potentially resulting in mistakes and unethical decisions. Control of AI systems falling into the wrong hands is also a concern. Any AI system failure could have profound ramifications for security, decision-making, and credibility, and may lead to costly litigation, reputational damage, regulatory scrutiny, and reduced stakeholder trust and profitability.

A governance model for the application of AI in health care


As the efficacy of artificial intelligence (AI) in improving aspects of healthcare delivery becomes increasingly evident, it grows likely that AI will be incorporated into routine clinical care in the near future. This promise has led to growing focus and investment in medical AI applications from both governmental organizations and technology companies. However, concern has been expressed about the ethical and regulatory aspects of applying AI in health care. These concerns include the possibility of bias, the lack of transparency of certain AI algorithms, privacy concerns with the data used to train AI models, and safety and liability issues with AI applications in clinical environments. While there has been extensive discussion about the ethics of AI in health care, there has been little dialogue or few recommendations on how to practically address these concerns. In this article, we propose a governance model that aims not only to address the ethical and regulatory issues that arise from the application of AI in health care, but also to stimulate further discussion about governance of AI in health care. Interest in AI has gone through cyclical phases of expectation and disappointment since the late 1950s because of poor-performing algorithms and computing infrastructure.1 However, the emergence of appropriate computing infrastructure, big data, and deep learning algorithms has reinvigorated interest in AI technology and accelerated its adoption in various sectors.2 While recent approaches to AI, such as machine learning, have only lately been applied to health care, the future looks promising because of the likelihood of improved healthcare outcomes.3,4

Six steps for developing an AI ethics framework


Managed the right way, the opportunities for applying artificial intelligence (AI) are endless--but managed the wrong way, so are the legal, regulatory, reputational, and financial risks. "A myriad of opportunities to leverage AI highlight why an ethical mindset is critical to protect an organization from unintended, unethical consequences," Maureen Mohlenkamp, a principal in Deloitte's risk and financial advisory practice, said during a recent Deloitte webcast on AI ethics. In broad terms, artificial intelligence encompasses technologies designed to mimic human intelligence. Because AI's application is still in its early stages, companies across all industries have only just begun to scratch the surface of its full potential in the business world.