States across the US are scrambling to figure out how to regulate self-driving cars, wearable technologies that track our health, smart homes that constantly monitor their infrastructure and the rest of the devices emerging from the so-called "internet of things" (IoT). The result is a smattering of incomplete and inconsistent laws that could depress the technology's upside without really addressing its risks. What's most notable about these early regulatory attempts is not that they are varied – that is to be expected. It's that the regulations deal mostly with physical safety, leaving privacy and cybersecurity almost wholly unexamined. This is becoming a pattern, true too of drone regulation, where regulatory bodies have jurisdiction over physical threats but not informational ones.
New technologies often bring calls for new regulation. A current example is artificial intelligence (AI)--the creation of machines that think and act in ways that resemble human intelligence. There are plenty of AI optimists and AI pessimists, and both camps see the need for government intervention. Microsoft co-founder Bill Gates, who believes AI will "allow us to produce a lot more goods and services with less labor," foresees labor force dislocations and has suggested a robot tax.
On the heels of President Trump's State of the Union address making a case for increased border security and other initiatives, federal agencies have been moving quickly on artificial intelligence, the Internet of Things and robocall spoofing. Meanwhile, the EU and Japan have entered into an agreement on cross-border data flows, and India is taking action on investment in e-commerce. Earlier this week, President Trump signed the American AI Initiative, an Executive Order directing federal agencies to develop new AI R&D budgets, share resources with academia and industry, create educational programs to improve the AI talent pipeline, and develop regulatory guidance for AI implementation that balances innovation with civil liberties.
Federal guidance on artificial intelligence needs additions to ensure the U.S. has a seat at the international table. The rapid proliferation of applications of artificial intelligence and machine learning--or AI, for short--coupled with their potential for significant societal impact has spurred calls around the world for new regulation. The European Union and China are developing their own rules, and the Organization for Economic Cooperation and Development has developed principles that enjoy the support of its members plus a handful of other countries. In January, the U.S. Office of Management and Budget (OMB) issued draft guidance of its own, securing the United States a seat at the table in this ongoing, multi-year, international conversation. The U.S. guidance--covering "weak" or narrow AI applications of the kind we experience today--reflects a light-touch approach to regulation, consistent with a desire to reward U.S. ingenuity.
The Obama White House has had to reckon with cybersecurity like no other presidential administration in history, from China's 2009 hack of Google, to the Office of Personnel Management breach, to the rise of botnets built from dangerously insecure "internet-of-things" devices. Now, in the waning days of Obama's presidency, his team has a new plan to shore up America's protections from digital threats. Whether any of it happens, though, is up to Donald Trump. Late Friday afternoon last week, the White House's Commission on Enhancing National Cybersecurity released the results of a nine-month study of America's cybersecurity problems. Its recommendations, in a hundred-page report, cover a lot of ground.