Cranney, Jesse
Designing an Evaluation Framework for Large Language Models in Astronomy Research
Wu, John F., Hyk, Alina, McCormick, Kiera, Ye, Christine, Astarita, Simone, Baral, Elina, Ciuca, Jo, Cranney, Jesse, Field, Anjalie, Iyer, Kartheik, Koehn, Philipp, Kotler, Jenn, Kruk, Sandor, Ntampaka, Michelle, O'Neill, Charles, Peek, Joshua E. G., Sharma, Sanjib, Yunus, Mikaeel
Large Language Models (LLMs) are shifting how scientific research is done. It is imperative to understand how researchers interact with these models and how scientific sub-communities like astronomy might benefit from them. However, there is currently no standard for evaluating the use of LLMs in astronomy. We therefore present the experimental design for an evaluation study of how astronomy researchers interact with LLMs. We deploy a Slack chatbot that answers user queries via Retrieval-Augmented Generation (RAG), grounding its responses in astronomy papers from arXiv. We record and anonymize user questions and chatbot answers, user upvotes and downvotes on LLM responses, user feedback to the LLM, and the retrieved documents along with their similarity scores to the query. Our data collection method will enable future dynamic evaluations of LLM tools for astronomy.
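The abstract describes the retrieval step but the listing contains no code. Below is a minimal sketch, assuming a generic embedding model (the embed function is a hypothetical toy stand-in, seeded from the text so the example runs), of how a RAG step like the one described can rank arXiv abstracts by cosine similarity to a query, and how one logged interaction record mirroring the fields the study collects might look:

    import zlib
    import numpy as np

    def embed(text: str, dim: int = 256) -> np.ndarray:
        # Hypothetical toy stand-in for a real sentence-embedding model:
        # a deterministic random vector seeded by the text, so the sketch runs.
        rng = np.random.default_rng(zlib.crc32(text.encode("utf-8")))
        return rng.standard_normal(dim)

    def retrieve(query: str, docs: list[str], doc_vecs: np.ndarray, k: int = 3):
        # Rank documents by cosine similarity between query and document vectors.
        q = embed(query)
        q /= np.linalg.norm(q)
        d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
        sims = d @ q
        top = np.argsort(sims)[::-1][:k]
        return [(docs[i], float(sims[i])) for i in top]

    docs = ["abstract of arXiv paper 1 ...", "abstract of arXiv paper 2 ..."]
    doc_vecs = np.stack([embed(t) for t in docs])
    retrieved = retrieve("How are galaxy rotation curves measured?", docs, doc_vecs)

    # One logged interaction, mirroring the fields the study records:
    # anonymized question, chatbot answer, votes, feedback, and the
    # retrieved documents with their similarity scores.
    record = {
        "question": "How are galaxy rotation curves measured?",
        "answer": "...LLM response grounded in the retrieved abstracts...",
        "vote": None,          # +1 / -1 once the user reacts
        "feedback": None,      # optional free-text feedback to the LLM
        "retrieved": retrieved,
    }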
AstroLLaMA: Towards Specialized Foundation Models in Astronomy
Nguyen, Tuan Dung, Ting, Yuan-Sen, Ciucă, Ioana, O'Neill, Charlie, Sun, Ze-Chang, Jabłońska, Maja, Kruk, Sandor, Perkowski, Ernest, Miller, Jack, Li, Jason, Peek, Josh, Iyer, Kartheik, Różański, Tomasz, Khetarpal, Pranav, Zaman, Sharaf, Brodrick, David, Méndez, Sergio J. Rodríguez, Bui, Thang, Goodman, Alyssa, Accomazzi, Alberto, Naiman, Jill, Cranney, Jesse, Schawinski, Kevin, UniverseTBD
Large language models excel in many human-language tasks but often falter in highly specialized domains like scholarly astronomy. To bridge this gap, we introduce AstroLLaMA, a 7-billion-parameter model fine-tuned from LLaMA-2 on over 300,000 astronomy abstracts from arXiv. Optimized for traditional causal language modeling, AstroLLaMA achieves 30% lower perplexity than LLaMA-2, showing marked domain adaptation. Despite having significantly fewer parameters, our model generates more insightful and scientifically relevant text completions and embeddings than state-of-the-art foundation models. AstroLLaMA serves as a robust, domain-specific model with broad fine-tuning potential. Its public release aims to spur astronomy-focused research, including automatic paper summarization and conversational agent development.
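The 30% figure is a perplexity comparison under causal language modeling, where perplexity is the exponentiated mean next-token cross-entropy. A minimal sketch of measuring it with the Hugging Face transformers API (the checkpoint id below is an assumption for illustration; substitute the released model):

    import math
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Assumed model id for illustration; substitute the released checkpoint.
    MODEL_ID = "universeTBD/astrollama"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    model.eval()

    def perplexity(text: str) -> float:
        # With labels=input_ids the model shifts the targets internally and
        # returns the mean cross-entropy loss over the predicted tokens.
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss
        return math.exp(loss.item())

    sample = "We measure the rotation curve of a nearby disk galaxy ..."
    print(f"perplexity: {perplexity(sample):.1f}")

Comparing this quantity for AstroLLaMA and the base LLaMA-2 on held-out astronomy abstracts is the kind of evaluation the 30% claim refers to.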