

How IT leaders can embrace responsible AI - SiliconANGLE

#artificialintelligence

When artificial intelligence augments or even replaces human decisions, it amplifies good and bad outcomes alike. AI systems create numerous risks and potential harms, including bias and discrimination, financial or reputational loss, lack of transparency and explainability, and invasions of security and privacy. Responsible AI enables the right outcomes by resolving dilemmas rooted in delivering value while tolerating risk, and it must be part of an organization's wider AI strategy. Here are the steps that chief information officers and information technology leaders, in partnership with data and analytics leadership, can take to move their organizations toward a vision of responsible AI.


This year's tech trends prove we need to embrace Responsible AI sooner--not later

#artificialintelligence

As AI plays a bigger role in systems that affect social outcomes--like criminal justice, education, hiring, or health care--it's clear that how AI decision-making is created and shaped needs to be taken seriously. What happens when algorithms decide whether or not you get a job, home, or loan?



Ask a person on the street, and chances are they'll tell you they are both optimistic and anxious about AI. The conflicted perspective makes sense--AI is already appearing in ways that have the potential to both scare and inspire us. The 2018 Fjord trends for business, technology, and design suggest a potential path to alleviate those fears: Adopt a values-sensitive framework for Responsible AI. Thus far, AI optimists have had much to celebrate. Maybe they have it easy: Intuitively, we associate technology with progress--we generally believe that "better, faster, smarter" leads to improvement, efficiency, and enrichment.