AI accountability: Who's responsible when AI goes wrong?
AI systems sometimes run amok. One chatbot, designed by Microsoft to mimic a teenager, began spewing racist hate speech within hours of its release online. Microsoft immediately took the bot down. Another system, which Amazon designed to help its recruiting efforts but ultimately didn't release, inadvertently discriminated against female applicants. Other so-called "smart" systems have led to false arrests, biased bail amounts for criminal defendants, and even fatal car crashes. Experts expect to see more cases of problematic AI as organizations increasingly implement intelligent technology, sometimes without proper governance in place.
Aug-20-2021, 22:05:17 GMT