Gain trust by addressing the responsible AI gaps
Over the past couple of years, AI risks and the ethical considerations of AI have come to the forefront. With the increased use of AI for contact tracing, workforce safety and planning, demand forecasting, and supply chain disruption during the pandemic, a number of risks around privacy, bias, safety, robustness, and explainability of AI models have emerged. AI risk identification, assessment, and mitigation vary by level of AI maturity, company size, industry sector, and country of domicile. PwC's Global Responsible AI survey of over 1,000 C-level executives, conducted in November 2020, reveals a number of insights into the risks of AI and how companies are assessing, managing, and mitigating those risks. The companies surveyed spanned a range of industry sectors, including financial services, technology, energy, utilities, and health.
Six Steps to Bridge the Responsible AI Gap
As artificial intelligence assumes a more central role in countless aspects of business and society, the need to ensure its responsible use has grown as well. AI has dramatically improved financial performance, employee experience, and product and service quality for millions of customers and citizens, but it has also inflicted harm. AI systems have offered lower credit card limits to women than to men despite similar financial profiles. Digital ads have demonstrated racial bias in housing and mortgage offers. Users have tricked chatbots into making offensive and racist comments.