"Robots exist in an open world where you can't predict everything that's going to happen. The robot has to have some autonomy in order to act and react in a real situation. It needs to make decisions to protect itself, but it also needs to transfer control to humans when appropriate. You don't want a robot to drive off a ledge, for instance -- unless a human needs the robot to drive off the ledge. When those situations happen, you need to have smooth transfer of control from the robot to the appropriate human," Woods said.
Yet only a few companies are publicly discussing their ongoing work in this area in a substantive, transparent, and proactive way. Many others seem to fear the negative consequences -- such as reputational risk -- of disclosing their vulnerabilities. Still others are waiting for a "finished product," wanting tangible, positive outcomes to point to before they reveal their work.
"I've got a lot of friends who are gun owners. I've got a lot of friends who are NRA (National Rifle Association) members. We had responsible gun ownership, and I was taught the right way to respect that tool," he said. "At the same time, the petition they were speaking about is a very good one. And I also fear for their campaign -- they have to watch that they don't get hijacked."
Ask a person on the street, and chances are they'll tell you they are both optimistic and anxious about AI. That conflicted perspective makes sense -- AI is already appearing in ways that can both scare and inspire us. The 2018 Fjord trends for business, technology, and design suggest a path to alleviating those fears: adopt a values-sensitive framework for Responsible AI.