Pilditch, Toby
An International Consortium for Evaluations of Societal-Scale Risks from Advanced AI
Gruetzemacher, Ross, Chan, Alan, Frazier, Kevin, Manning, Christy, Los, Štěpán, Fox, James, Hernández-Orallo, José, Burden, John, Franklin, Matija, Ghuidhir, Clíodhna Ní, Bailey, Mark, Eth, Daniel, Pilditch, Toby, Kilian, Kyle
Given rapid progress toward advanced AI and risks from frontier AI systems (advanced AI systems pushing the boundaries of the AI capabilities frontier), the creation and implementation of AI governance and regulatory schemes deserve prioritization and substantial investment. However, the status quo is untenable and, frankly, dangerous. A regulatory gap has permitted AI labs to conduct research, development, and deployment activities with minimal oversight. In response, frontier AI system evaluations have been proposed as a way of assessing risks from the development and deployment of frontier AI systems. Yet, the budding AI risk evaluation ecosystem faces significant coordination challenges, such as a limited diversity of evaluators, suboptimal allocation of effort, and perverse incentives. This paper proposes a solution in the form of an international consortium for AI risk evaluations, comprising both AI developers and third-party AI risk evaluators. Such a consortium could play a critical role in international efforts to mitigate societal-scale risks from advanced AI, including managing responsible scaling policies and coordinating evaluation-based risk responses. In this paper, we discuss the current evaluation ecosystem and its shortcomings, propose an international consortium for advanced AI risk evaluations, discuss issues regarding its implementation, and draw lessons from previous international institutions and existing proposals for international AI governance institutions. Finally, we recommend concrete steps to advance the establishment of the proposed consortium: (i) solicit feedback from stakeholders, (ii) conduct additional research, (iii) conduct workshops for stakeholders, (iv) analyze feedback and create a final proposal, (v) solicit funding, and (vi) create the consortium.
Towards a Standard Cognitive Framework for Socially Oriented, Adaptive, and Generative Human-Environment Agents
Madsen, Jens K. (University of Oxford) | Bailey, Richard (University of Oxford) | Carrella, Ernesto (University of Oxford) | Pilditch, Toby (University College London)
While several unified theories of cognition have been proposed, no framework has been established with the same degree of universal agreement as in biology and physics. A universal model of cognition is needed to direct research, advance the cognitive sciences, and test interventions of varying realism in shifting environments. Here, we propose the necessary components for modelling a socially oriented, generative, and adaptive agent. We argue that such a model requires modules for information input, management, storage, and use in order to grow an agent capable of human-like adaptive, socio-cultural behavioural strategies. We further argue that such an agent may be tested in different contexts through Agent-Based Modelling.