Purves, Drew
Heterogeneous graph neural networks for species distribution modeling
Harrell, Lauren, Kaeser-Chen, Christine, Ayan, Burcu Karagol, Anderson, Keith, Conserva, Michelangelo, Kleeman, Elise, Neumann, Maxim, Overlan, Matt, Chapman, Melissa, Purves, Drew
Species distribution models (SDMs) are necessary for measuring and predicting occurrences and habitat suitability of species and their relationship with environmental factors. We introduce a novel presence-only SDM built with graph neural networks (GNNs). In our model, species and locations are treated as two distinct node sets, and the learning task is predicting detection records as the edges that connect locations to species. Using GNNs for SDMs allows us to model fine-grained interactions between species and the environment. We evaluate the potential of this methodology on the six-region dataset compiled by the National Center for Ecological Analysis and Synthesis (NCEAS) for benchmarking SDMs. For each of the regions, the heterogeneous GNN model is comparable to or outperforms previously benchmarked single-species SDMs as well as a feed-forward neural network baseline model.
CURIE: Evaluating LLMs On Multitask Scientific Long Context Understanding and Reasoning
Cui, Hao, Shamsi, Zahra, Cheon, Gowoon, Ma, Xuejian, Li, Shutong, Tikhanovskaya, Maria, Norgaard, Peter, Mudur, Nayantara, Plomecka, Martyna, Raccuglia, Paul, Bahri, Yasaman, Albert, Victor V., Srinivasan, Pranesh, Pan, Haining, Faist, Philippe, Rohr, Brian, Statt, Michael J., Morris, Dan, Purves, Drew, Kleeman, Elise, Alcantara, Ruth, Abraham, Matthew, Mohammad, Muqthar, VanLee, Ean Phing, Jiang, Chenfei, Dorfman, Elizabeth, Kim, Eun-Ah, Brenner, Michael P., Jain, Viren, Ponda, Sameera, Venugopalan, Subhashini
Scientific problem-solving involves synthesizing information while applying expert knowledge. We introduce CURIE, a scientific long-Context Understanding, Reasoning and Information Extraction benchmark to measure the potential of Large Language Models (LLMs) in scientific problem-solving and in assisting scientists in realistic workflows. This benchmark introduces ten challenging tasks with a total of 580 problem and solution pairs curated by experts in six disciplines - materials science, condensed matter physics, quantum computing, geospatial analysis, biodiversity, and proteins - covering both experimental and theoretical workflows in science. We evaluate a range of closed and open LLMs on tasks in CURIE, which require domain expertise, comprehension of long in-context information, and multi-step reasoning. While Gemini Flash 2.0 and Claude-3 show consistently high comprehension across domains, the popular GPT-4o and Command R+ fail dramatically on protein sequencing tasks. With the best performance at 32%, there is much room for improvement for all models. We hope that insights gained from CURIE can guide the future development of LLMs in the sciences. Evaluation code and data are available at https://github.com/google/curie