The problem with the Stanford report's sanguine view of artificial intelligence

#artificialintelligence 

Stanford has undertaken an important effort: envisioning the implications of artificial intelligence over a 100-year span, to "anticipate how the effects of artificial intelligence will ripple through every aspect of how people work, live, and play." But there is a problem, potentially fundamental enough that the team may want to revisit its first report or adjust its approach going forward: the report's relatively weak coverage of the urban, human-security implications of AI. According to its purpose statement, this first study focuses on the implications of AI in 2030 in the "typical North American city." I suppose the thin treatment of security may derive from the sweeping assumption that North American cities will remain peaceful and secure, and thus that AI and intelligent machines won't carry significant human-security implications.
