Using today's technologies to create AI safeguards for tomorrow
In a proactive move to "eliminate a future conflict," Elon Musk recently stepped down as chairman of OpenAI, the nonprofit research company he cofounded in 2015 to build safe artificial intelligence (AI) with broad benefits for all. In 2014, Musk suggested AI could be "more dangerous than nuclear weapons," a statement he reiterated at a recent SXSW event in Austin, TX. Bill Gates also went on record several years ago about his concerns regarding machine superintelligence.

They are not alone. An open letter on AI safety, signed by prominent researchers and industry figures, argued that "it is important to research how to reap [AI] benefits while avoiding potential pitfalls." The letter was accompanied by a research priorities proposal highlighting work that can be done to make AI "robust and beneficial."

Perhaps the most pressing question today is whether we can use current technologies, such as historical and preventative tracking, to build AI safeguards that not only explain why an AI algorithm made a poor decision but also prevent other AI algorithms from making the same poor decision.
Apr-1-2018, 08:20:30 GMT
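To make the idea of historical and preventative tracking concrete, here is a minimal sketch of how such a safeguard might be structured. All names here (`IncidentRegistry`, `guarded_decide`, the context signatures) are hypothetical illustrations, not part of any existing system: a shared registry records decisions that led to poor outcomes along with an explanation (historical tracking), and a guard consults that registry before any algorithm is allowed to repeat the same decision in the same context (preventative tracking).

```python
from dataclasses import dataclass, field

@dataclass
class IncidentRegistry:
    """Shared record of decisions that led to poor outcomes (historical tracking).

    Maps a context signature to a human-readable explanation of what went wrong.
    """
    incidents: dict = field(default_factory=dict)

    def record(self, context_signature: str, explanation: str) -> None:
        # Log why a decision made in this context turned out poorly.
        self.incidents[context_signature] = explanation

    def is_known_failure(self, context_signature: str) -> bool:
        return context_signature in self.incidents


def guarded_decide(decide, context_signature: str, registry: IncidentRegistry):
    """Preventative tracking: block any model from repeating a recorded failure.

    Returns (action, None) when the decision is allowed, or
    (None, explanation) when the registry shows this context previously failed.
    """
    if registry.is_known_failure(context_signature):
        # Refuse to act and surface the recorded explanation instead,
        # so a different algorithm cannot repeat the same poor decision.
        return None, registry.incidents[context_signature]
    return decide(context_signature), None
```

For example, once one algorithm's loan approval in a given context is recorded as a failure, the guard stops every other algorithm from approving in that same context and returns the stored explanation instead. A production safeguard would need far richer context matching than exact string signatures, but the two-part structure (record why, then block repeats) is the core of the idea.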