Big Bang, Low Bar -- Risk Assessment in the Public Arena

Price, Huw

arXiv.org Artificial Intelligence 

Always keep an eye on ways that things could go badly wrong, even if they seem unlikely. The more disastrous a potential failure, the more improbable it needs to be before we can safely ignore it. This principle may seem obvious, but it is easily overlooked in public discourse about risk -- even, as we'll see, by well-qualified commentators, who should certainly know better. The present piece is prompted by neglect of the principle in recent discussions about the potential risks of artificial intelligence (AI). I don't think the failing is peculiar to this case, but recent debates in this area provide particularly stark examples of how easily the principle can be overlooked. Part of the problem, in my view, is that there isn't a catchy formulation of this safety principle already on the tip of educated tongues. By contrast, consider the slogan 'Correlation is not causation.' All scientists, science journalists, and policymakers know this phrase.
