No Testing, No Artificial Intelligence!

#artificialintelligence 

In March 2018, an Uber self-driving car killed a pedestrian for the first time: it failed to recognize her as she crossed the road. COMPAS, a machine-learning-based software system assisting judges in 12 courts in the US, was found by ProPublica to exhibit harmful bias: it discriminated between black and white defendants, flagging the former as twice as likely to commit another crime and recommending longer pretrial detention for them. I could continue with more examples of how AI can become harmful.

Enterprises are infusing their applications with AI technology and building new AI-based digital experiences to transform their businesses and accelerate their digital transformation programs. But all of these positive developments could stall if we continue to see cases like these, in which poor-quality, inadequately tested AI is delivered to businesses and consumers.
