Council Post: Regulating Artificial Intelligence: Why We Need Expert Input To Limit Risks
When science fiction writer Isaac Asimov introduced the Three Laws of Robotics to the world in 1942, practical robotic applications such as industrial pneumatic arms, all-transistor calculators and even the term "artificial intelligence" itself were all still a decade or two in the future. Asimov's laws boil down to three simple maxims: protect humans; obey humans; and, if doing so doesn't violate the first two rules, protect itself. That seems simple and sensible enough, yet the limits and internal tensions of these basic laws have inspired writers to dream up a wide range of science fiction dystopias, from 2001: A Space Odyssey to Blade Runner to The Terminator. And let's not forget to add Asimov's own collection of stories, I, Robot, which features the Three Laws, to the list.

For business leaders, ushering in an AI-driven global calamity isn't a top-of-mind concern, but even avoiding smaller risks can be a major challenge.
Jun-16-2020, 21:38:25 GMT