An NYU professor explains why it's so dangerous that Silicon Valley is building AI to make decisions without human values
In the absence of codified humanistic values within the big tech giants, personal experiences and ideals are driving decision-making. This is particularly dangerous when it comes to AI, because students, professors, researchers, employees, and managers are making millions of decisions every day, ranging from the seemingly insignificant (which database to use) to the profound (who gets killed if an autonomous vehicle needs to crash).

Artificial intelligence might be inspired by our human brains, but humans and AI make decisions and choices differently. Princeton professor Daniel Kahneman and Hebrew University of Jerusalem professor Amos Tversky spent years studying the human mind and how we make decisions, ultimately discovering that we have two systems of thinking: one that is automatic, fast, and nearly imperceptible to us, and one that uses logic to analyze problems deliberately. Kahneman describes this dual system in his award-winning book Thinking, Fast and Slow.
Feb-23-2019, 17:51:02 GMT