Contemporary AI foundation models increase biological weapons risk

Roger Brent and T. Greg McKelvey Jr.

arXiv.org Artificial Intelligence

The rapid advancement of artificial intelligence has raised concerns about its potential to facilitate biological weapons development. We argue that existing safety assessments of contemporary foundation AI models underestimate this risk, largely due to flawed assumptions and inadequate evaluation methods. First, assessments mistakenly assume that biological weapons development requires tacit knowledge, or skills gained through hands-on experience that cannot be easily verbalized. Second, they rely on imperfect benchmarks that overlook how AI can uplift both nonexperts and already-skilled individuals. To challenge the tacit knowledge assumption, we examine cases where individuals without formal expertise, including the Norwegian ultranationalist who synthesized explosives for a 2011 attack, successfully carried out complex technical tasks. We also review efforts to document pathogen construction processes, highlighting how such tasks can be conveyed in text. We identify "elements of success" for biological weapons development that large language models can describe in words, including steps such as acquiring materials and performing technical procedures. Applying this framework, we find that the advanced AI models Llama 3.1 405B, ChatGPT-4o, and Claude 3.5 Sonnet can accurately guide users through the recovery of live poliovirus from commercially obtained synthetic DNA, challenging recent claims that current models pose minimal biosecurity risk. We advocate for improved benchmarks, while acknowledging that the window for meaningful implementation may have already closed.


What Are Foundation AI Models Exactly? - Datafloq


While organizations around the globe have long been on an AI investment spree, the share of artificial intelligence projects that make it from prototype to production still hovers around 53%. Experts believe this often happens because companies lack the technical skills, staff, and tools to scale isolated AI proofs of concept (PoCs) across other use cases. Foundation models, i.e., large machine learning models trained on vast volumes of unlabelled data under the guidance of skilled AI consultants, may be the answer to these daunting AI scalability and cost problems. Your company could use such models as a starting point for enhancing or automating various tasks, from converting paper-based documents into editable text files to uncovering customer sentiment in social media reviews, and build on your AI excellence from there, adapting foundation models for future tasks and use cases. One such language model, for example, has absorbed tremendous volumes of conversational text using supervised learning and, at the fine-tuning stage, the reinforcement learning from human feedback (RLHF) approach.
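The "starting point" idea described above is essentially transfer learning: a large pretrained model is kept frozen and supplies features, while only a small task-specific head is trained on your own data. The sketch below illustrates that pattern for the sentiment-analysis use case the article mentions; the hash-based `frozen_encoder` is a toy stand-in for a real foundation model's embeddings, and all names and data here are illustrative assumptions, not an actual production recipe.

```python
# Toy sketch of adapting a frozen "foundation" encoder to sentiment analysis.
# Only the small logistic-regression head is trained; the encoder stays fixed.
import hashlib
import math

DIM = 64  # feature dimension of the stand-in encoder

def frozen_encoder(text, dim=DIM):
    """Stand-in for a pretrained encoder: maps text to a fixed, L2-normalized
    bag-of-words vector via deterministic hashing. A real system would call a
    foundation model's embedding API here instead."""
    vec = [0.0] * dim
    for token in text.lower().split():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def train_head(examples, dim=DIM, epochs=200, lr=0.5):
    """Train a logistic-regression head on frozen features with plain SGD."""
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for text, label in examples:
            x = frozen_encoder(text, dim)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - label  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(text, w, b, dim=DIM):
    """Return 1 for positive sentiment, 0 for negative."""
    x = frozen_encoder(text, dim)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Tiny illustrative "social media review" dataset (made up for this sketch).
reviews = [
    ("love this product great value", 1),
    ("great support love it", 1),
    ("terrible quality waste of money", 0),
    ("awful experience terrible support", 0),
]
w, b = train_head(reviews)
```

Because the encoder is never updated, the same frozen features can later be reused for other tasks (e.g., topic tagging) by training a different small head, which is the scalability argument the article is making.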