Which Information should the UK and US AISI share with an International Network of AISIs? Opportunities, Risks, and a Tentative Proposal

Thurnherr, Lara

arXiv.org Artificial Intelligence

The UK AI Safety Institute (UK AISI) and its parallel organisation in the United States (US AISI) occupy a unique position in the recently established International Network of AISIs. Both are in jurisdictions with frontier AI companies and are assuming leading roles in the international conversation on AI safety. This paper argues that it is in the interest of both institutions to share specific categories of information with the International Network of AISIs, deliberately abstain from sharing others, and carefully evaluate sharing some categories on a case-by-case basis, according to domestic priorities. The paper further proposes a provisional framework with which policymakers and researchers can distinguish between these three cases, taking into account the potential benefits and risks of sharing specific categories of information, ranging from pre-deployment evaluation results to evaluation standards. To further improve research on policy-relevant information-sharing decisions, the paper emphasises the importance of continuously monitoring the fluctuating factors that influence sharing decisions, and calls for a more in-depth analysis of specific policy-relevant information categories and additional factors in future research.


Pre-Deployment Information Sharing: A Zoning Taxonomy for Precursory Capabilities

Pistillo, Matteo, Stix, Charlotte

arXiv.org Artificial Intelligence

There is a growing consensus that information is the "lifeblood of good governance" (Kolt et al., 2024) and that information sharing should be one of the "natural initial target[s]" of AI governance (Bommasani et al., 2024). Up-to-date and reliable information about AI systems' capabilities, and about how those capabilities will develop in the future, can help developers, governments, and researchers advance safety evaluations (Frontier Model Forum, 2024), develop best practices (UK DSIT, 2023), and respond effectively to the new risks posed by frontier AI (Kolt et al., 2024). Information sharing also supports regulatory visibility (Anderljung et al., 2023) and can thus enable better-informed AI governance (O'Brien et al., 2024). Further, access to knowledge about AI systems' potential risks allows claims about AI systems to be scrutinized more effectively (Brundage et al., 2020). By contrast, information asymmetries could lead regulators to miscalibrated over-regulation, or under-regulation, of AI (Ball & Kokotajlo, 2024) and could contribute to the "pacing problem," a situation in which government oversight consistently lags behind technology development (Marchant et al., 2011). In short, there is a strong case that information sharing is one "key to making AI go well" (Ball & Kokotajlo, 2024). The Frontier AI Safety Commitments ("FAISC") are an important step towards more comprehensive information sharing by AI developers.



'We got bored waiting for Oasis to re-form': AIsis, the band fronted by an AI Liam Gallagher

The Guardian

Before you do anything else with your day, you need to listen to this. A new "lost" Oasis album has been released, from the period between their third album, 1997's Be Here Now, and their fourth, 2000's Standing on the Shoulder of Giants. It was created by AI, or at least it's an AI Liam Gallagher doing its best "hellooooos" and "sun-shiiiines" over a real band. But the eight songs, including Out of My Mind, Coming of Age and Forever, are practically indistinguishable from the real thing, with some seriously catchy melodies that give every post-What's the Story album, not to mention the whole of Liam and Noel's solo catalogues, a run for their money. How do you get a computer to sing like Liam?