Pistillo, Matteo
Defending Compute Thresholds Against Legal Loopholes
Pistillo, Matteo, Villalobos, Pablo
Existing legal frameworks on AI rely on training compute thresholds as a proxy to identify potentially dangerous AI models and trigger increased regulatory attention. In the United States, Section 4.2(a) of Executive Order 14110 instructs the Secretary of Commerce to require extensive reporting from developers of AI models trained above a certain compute threshold. In the European Union, Article 51 of the AI Act establishes a presumption that AI models above a certain compute threshold have high-impact capabilities and hence pose systemic risk, subjecting their developers to obligations including capability evaluations, reporting, and incident monitoring. In this paper, we examine enhancement techniques that can decrease training compute usage while preserving, or even increasing, model capabilities. Because these thresholds use training compute as both the metric and the trigger for increased regulatory attention, such capability-enhancing, compute-saving techniques could constitute a legal loophole to existing training compute thresholds. In particular, we concentrate on four illustrative techniques (fine-tuning, model reuse, model expansion, and above-compute-optimal inference compute) with the goal of furthering the conversation about their implications for training compute thresholds as a legal mechanism and advancing policy recommendations that could address the relevant legal loopholes.
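The crux of the loophole is a counting question: whether a model trips a threshold depends on which compute is attributed to it. The sketch below is a minimal illustration, not drawn from the paper, using the standard ~6ND rule of thumb for dense-transformer training compute and the publicly stated thresholds (10^26 operations under Section 4.2(a) of EO 14110; 10^25 FLOP under Article 51 of the AI Act); the parameter and token counts are hypothetical.

```python
# Illustrative sketch (not from the paper): a training compute threshold as a
# regulatory trigger, and how counting only a derived model's own training
# compute might slip under it. Threshold values are the publicly stated
# figures; the ~6*N*D estimate and the model sizes below are assumptions.

EO_14110_THRESHOLD = 1e26   # operations, Sec. 4.2(a) reporting trigger
EU_AI_ACT_THRESHOLD = 1e25  # FLOP, Art. 51 systemic-risk presumption

def training_flop(params: float, tokens: float) -> float:
    """Rule-of-thumb ~6*N*D estimate of dense-transformer training compute."""
    return 6 * params * tokens

# Hypothetical base model trained from scratch: 300B params on 10T tokens.
base = training_flop(3e11, 1e13)       # ~1.8e25 FLOP: above EU, below EO threshold

# Hypothetical derived model: reuse the base and fine-tune on 100B tokens.
fine_tune = training_flop(3e11, 1e11)  # ~1.8e23 FLOP: far below both thresholds

for name, flop in [("base (from scratch)", base),
                   ("derived (fine-tune only)", fine_tune),
                   ("derived (cumulative)", base + fine_tune)]:
    print(f"{name:26s} {flop:.2e} FLOP | "
          f"EU trigger: {flop > EU_AI_ACT_THRESHOLD} | "
          f"EO trigger: {flop > EO_14110_THRESHOLD}")
```

On these assumed numbers, counting only the fine-tuning compute leaves the derived model roughly two orders of magnitude below the EU threshold, even though it inherits (and may extend) the base model's capabilities; counting cumulative compute across the model's lineage closes that gap.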
Pre-Deployment Information Sharing: A Zoning Taxonomy for Precursory Capabilities
Pistillo, Matteo, Stix, Charlotte
There is a growing consensus that information is the "lifeblood of good governance" (Kolt et al., 2024) and that information sharing should be one of the "natural initial target[s]" of AI governance (Bommasani et al., 2024). Up-to-date and reliable information about AI systems' capabilities, and about how those capabilities will develop, can help developers, governments, and researchers advance safety evaluations (Frontier Model Forum, 2024), develop best practices (UK DSIT, 2023), and respond effectively to the new risks posed by frontier AI (Kolt et al., 2024). Information sharing also supports regulatory visibility (Anderljung et al., 2023) and can thus enable better-informed AI governance (O'Brien et al., 2024). Further, access to knowledge about AI systems' potential risks allows claims about those systems to be scrutinized more effectively (Brundage et al., 2020). By contrast, information asymmetries could lead regulators to miscalibrated over- or under-regulation of AI (Ball & Kokotajlo, 2024) and could contribute to the "pacing problem," a situation in which government oversight consistently lags behind technology development (Marchant et al., 2011). In short, there is a strong case that information sharing is one "key to making AI go well" (Ball & Kokotajlo, 2024). The Frontier AI Safety Commitments ("FAISC") are an important step towards more comprehensive information sharing by AI developers.