Extinction Risks from AI: Invisible to Science?
Vojtech Kovarik, Christian van Merwijk, Ida Mattsson
arXiv.org Artificial Intelligence
In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart's Law as "Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity", and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart's Law holds. As our key contribution, we identify a set of conditions that are necessary for a model to be informative for evaluating specific arguments for Extinction-level Goodhart's Law. Since each of these conditions seems to significantly increase the complexity of the resulting model, formally evaluating the hypothesis might be exceedingly difficult. This raises the possibility that, whether or not the risk of extinction from artificial intelligence is real, the underlying dynamics might be invisible to current scientific methods.
Feb-2-2024