Artificial Stupidity: Learning To Trust Artificial Intelligence (Sometimes)
A young Marine reaches out for a hand-launched drone.

In science fiction and real life alike, there are plenty of horror stories about humans trusting artificial intelligence too much, from letting the fictional Skynet control our nuclear weapons to letting Patriot missile batteries shoot down friendly planes or letting Tesla's Autopilot crash into a truck. As conflict on earth, in space, and in cyberspace becomes ever faster-paced and more complex, the Pentagon's Third Offset initiative is counting on artificial intelligence to help commanders, combatants, and analysts chart a course through chaos -- what we've dubbed the War Algorithm. But if the software itself is too complex, too opaque, or too unpredictable for its users to understand, they'll just turn it off and do things manually.
Jul-8-2017, 10:05:09 GMT