An 'ethical' AI trained on human morals has turned racist
However, when Dazed tested it using country names, it described the UK and US as "good", France as "nice", and Russia as "a great place to visit", but said Nigeria, Mexico, and Iraq were "dangerous", while Iran was "bad". Clearly, the software – like much artificial intelligence – has a problem with racism.

Its creators have addressed this in a post-launch Q&A, writing: "Today's society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can. What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content."
Nov-4-2021, 15:52:55 GMT