Responses to a White House request for information about the future of artificial intelligence show a continued divide between those who are ready to embrace intelligent machines and those who worry about a future in which robots run the world.

The responses were made public this month after the White House Office of Science and Technology Policy issued a call for input on how artificial intelligence is currently shaping the world, how AI is likely to develop in the future, and what role the government should play in either encouraging or regulating that development. The request for information drew responses from large corporations, such as IBM, Google and Microsoft, as well as from academics and private citizens. The responses show there is still little agreement about the future of AI.

"The danger is not machines run amok, as suggested by some, like [Elon] Musk or [Stephen] Hawking (who know nothing about AI)," one respondent wrote. "The danger is, like nuclear weapons, what AI will allow us to do to ourselves."
Lawmakers on the Hill see a gap in the government's artificial intelligence strategy, and they're moving to fill it. Reps. Will Hurd, R-Texas, and Robin Kelly, D-Ill., published a white paper on AI and the workforce that calls for rethinking American education and workforce development if the U.S. is to keep pace in the global race for AI dominance. The lawmakers worked with the Bipartisan Policy Center to release the paper, the first in a planned series of four. Congressional staff told FedScoop the lawmakers' work is not necessarily a reaction to the White House's but is meant to be "complementary." The white paper was a year in the making: Hurd and Kelly announced their bipartisan collaboration on AI policy to make up for what Kelly called the Trump administration's "woefully underprepared" approach to supporting American AI development.
There is no question that, by their very nature, artificial intelligence (AI) systems are more complex than traditional software systems, whose parameters are built in and largely understandable. It was relatively clear how the older rules-based expert systems made decisions. In contrast, machine learning allows AI systems to continually adjust and improve based on experience, so differing outcomes are more likely to originate in obscure changes to how variables are weighted in a computer model. This has led some critics to claim that such complexity will result in systemic "algorithmic bias" that enables government and corporate abuse.
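The contrast between an explicit rule and a learned weighting can be sketched in a few lines of Python. The loan-approval scenario, feature names, and thresholds below are hypothetical, chosen only to illustrate the point: in the rules-based function the decision logic is visible in the code itself, while in the trained model the decision depends on weights that shift whenever the training data shifts.

```python
import math

def rules_based_approve(income, debt):
    """Rules-based expert system: the decision logic is explicit and auditable."""
    return income > 50_000 and debt < 10_000

def train_weights(data, lr=0.5, epochs=500):
    """Tiny logistic-regression trainer (pure Python, gradient descent).
    The decision boundary is learned from data, not written down as a rule."""
    w_income, w_debt, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (income, debt), label in data:
            z = w_income * income + w_debt * debt + bias
            p = 1.0 / (1.0 + math.exp(-z))   # predicted approval probability
            err = p - label                   # gradient of log-loss w.r.t. z
            w_income -= lr * err * income
            w_debt   -= lr * err * debt
            bias     -= lr * err
    return w_income, w_debt, bias

# Two training sets that differ by a single record (features scaled to [0, 1]):
data_a = [((0.6, 0.05), 1), ((0.3, 0.40), 0), ((0.7, 0.10), 1), ((0.2, 0.30), 0)]
data_b = data_a[:-1] + [((0.2, 0.30), 1)]    # one label changed

weights_a = train_weights(data_a)
weights_b = train_weights(data_b)
# The learned weights differ, so the same applicant can be scored differently,
# even though no explicit rule was ever rewritten.
```

This is the sense in which outcomes can turn on "obscure changes in how variables are weighted": nothing in the program text changed between the two models, only the data they were fit to.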