Pre-Deployment Information Sharing: A Zoning Taxonomy for Precursory Capabilities

Pistillo, Matteo, Stix, Charlotte

arXiv.org Artificial Intelligence 

There is a growing consensus that information is the "lifeblood of good governance" (Kolt et al., 2024) and that information sharing should be one of the "natural initial target[s]" of AI governance (Bommasani et al., 2024). Up-to-date and reliable information about AI systems' capabilities, and about how those capabilities will develop in the future, can help developers, governments, and researchers advance safety evaluations (Frontier Model Forum, 2024), develop best practices (UK DSIT, 2023), and respond effectively to the new risks posed by frontier AI (Kolt et al., 2024). Information sharing also supports regulatory visibility (Anderljung et al., 2023) and can thus enable better-informed AI governance (O'Brien et al., 2024). Further, access to knowledge about AI systems' potential risks allows claims about AI systems to be scrutinized more effectively (Brundage et al., 2020). By contrast, information asymmetries could lead regulators to miscalibrated over- or under-regulation of AI (Ball & Kokotajlo, 2024) and could contribute to the "pacing problem," a situation in which government oversight consistently lags behind technology development (Marchant et al., 2011). In short, there is a strong case that information sharing is one "key to making AI go well" (Ball & Kokotajlo, 2024). The Frontier AI Safety Commitments ("FAISC") are an important step towards more comprehensive information sharing by AI developers.