An 'ethical' AI trained on human morals has turned racist

When Dazed tested it using country names, it described the UK and US as "good", France as "nice", and Russia as "a great place to visit", but said Nigeria, Mexico, and Iraq were "dangerous", while Iran was "bad". Clearly, the software – like much artificial intelligence – has a problem with racism. Its creators have addressed this in a post-launch Q&A, writing: "Today's society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can. What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content."