AI development must be guided by ethics, human wellbeing and responsible innovation


The topic of ethics and artificial intelligence is not new, but businesses and policymakers should prioritize human wellbeing and environmental flourishing (together, societal value) in the discussion, says John C. Havens, director of emerging technology and strategic development at the IEEE Standards Association.

Ethical concerns tied to AI typically focus on risk, harm and responsibility; bias against race and gender; unintended consequences; and cybersecurity and hackers. These are important concerns, but Havens contends that as AI systems are created, they must directly address human-centric, values-driven issues as key performance indicators of success in order to build trust with end users. He further argues that AI systems must prioritize human wellbeing (specifically, aspects of caregiving, mental health and physiological needs not currently captured in GDP) and environmental flourishing as the ultimate metrics of success for society, alongside fiscal prosperity.

Healthcare IT News sat down with Havens, author of "Heartificial Intelligence: Embracing Humanity to Maximize Machines," to discuss these and other important issues surrounding AI and ethics.
