This is why we need to talk about responsible AI

#artificialintelligence

Yet only a few companies are publicly discussing their ongoing work in this area in a substantive, transparent, and proactive way. Many other companies, however, seem to fear the negative consequences (like reputational risk) of sharing their vulnerabilities. Some companies are also waiting for a "finished product," wanting to be able to point to tangible, positive outcomes before they are ready to reveal their work.


McConaughey Fears March for Our Lives Will Get 'Hijacked'

U.S. News

"I've got a lot of friends who are gun owners. I've got a lot of friends who are NRA (National Rifle Association) members. We had responsible gun ownership, and I was taught the right way to respect that tool," he said. "At the same time, the petition they were speaking about is a very good one. And I also fear that their campaign -- they have to watch that they don't get hijacked."


This year's tech trends prove we need to embrace Responsible AI sooner--not later

#artificialintelligence

As AI plays a bigger role in systems that affect social outcomes--like criminal justice, education, hiring, or health care--it's clear that how AI decision-making is created and shaped needs to be taken seriously. What happens when algorithms decide whether or not you get a job, a home, or a loan?


This year's tech trends prove we need to embrace Responsible AI sooner--not later

#artificialintelligence

Ask a person on the street, and chances are they'll tell you they are both optimistic and anxious about AI. The conflicted perspective makes sense--AI is already appearing in ways that have the potential to both scare and inspire us. The 2018 Fjord trends for business, technology, and design suggest a potential path to alleviate those fears: Adopt a values-sensitive framework for Responsible AI.


Responsible AI and Government: High Time to Open Discussions?

#artificialintelligence

Recently, Gartner released a series of Predicts 2021 research reports, including one that highlights the serious, wide-reaching ethical and social problems it predicts artificial intelligence (AI) will cause in the next several years. The race to digital transformation and the abundance of data have pushed companies to invest in artificial intelligence technologies. With that, the concept of leveraging responsible AI has taken center stage in discussions among governments, enterprises, and other tech purists and critics. A quick look at search trends shows that terms like "Ethical AI" and "Responsible AI" have gained popularity in the past five years. But what is the reason behind this? Currently, the presence of bias in training data for artificial intelligence models and a lack of transparency (the "black box" problem) threaten the possibility of using AI for good.