Wu, Jiaman
Building Machine Learning Challenges for Anomaly Detection in Science
Campolongo, Elizabeth G., Chou, Yuan-Tang, Govorkova, Ekaterina, Bhimji, Wahid, Chao, Wei-Lun, Harris, Chris, Hsu, Shih-Chieh, Lapp, Hilmar, Neubauer, Mark S., Namayanja, Josephine, Subramanian, Aneesh, Harris, Philip, Anand, Advaith, Carlyn, David E., Ghosh, Subhankar, Lawrence, Christopher, Moreno, Eric, Raikman, Ryan, Wu, Jiaman, Zhang, Ziheng, Adhi, Bayu, Gharehtoragh, Mohammad Ahmadi, Monsalve, Saúl Alonso, Babicz, Marta, Baig, Furqan, Banerji, Namrata, Bardon, William, Barna, Tyler, Berger-Wolf, Tanya, Dieng, Adji Bousso, Brachman, Micah, Buat, Quentin, Hui, David C. Y., Cao, Phuong, Cerino, Franco, Chang, Yi-Chun, Chaulagain, Shivaji, Chen, An-Kai, Chen, Deming, Chen, Eric, Chou, Chia-Jui, Ciou, Zih-Chen, Cochran-Branson, Miles, Choi, Artur Cordeiro Oudot, Coughlin, Michael, Cremonesi, Matteo, Dadarlat, Maria, Darch, Peter, Desai, Malina, Diaz, Daniel, Dillmann, Steven, Duarte, Javier, Duporge, Isla, Ekka, Urbas, Heravi, Saba Entezari, Fang, Hao, Flynn, Rian, Fox, Geoffrey, Freed, Emily, Gao, Hang, Gao, Jing, Gonski, Julia, Graham, Matthew, Hashemi, Abolfazl, Hauck, Scott, Hazelden, James, Peterson, Joshua Henry, Hoang, Duc, Hu, Wei, Huennefeld, Mirco, Hyde, David, Janeja, Vandana, Jaroenchai, Nattapon, Jia, Haoyi, Kang, Yunfan, Kholiavchenko, Maksim, Khoda, Elham E., Kim, Sangin, Kumar, Aditya, Lai, Bo-Cheng, Le, Trung, Lee, Chi-Wei, Lee, JangHyeon, Lee, Shaocheng, van der Lee, Suzan, Lewis, Charles, Li, Haitong, Li, Haoyang, Liao, Henry, Liu, Mia, Liu, Xiaolin, Liu, Xiulong, Loncar, Vladimir, Lyu, Fangzheng, Makarov, Ilya, Mao, Abhishikth Mallampalli Chen-Yu, Michels, Alexander, Migala, Alexander, Mokhtar, Farouk, Morlighem, Mathieu, Namgung, Min, Novak, Andrzej, Novick, Andrew, Orsborn, Amy, Padmanabhan, Anand, Pan, Jia-Cheng, Pandya, Sneh, Pei, Zhiyuan, Peixoto, Ana, Percivall, George, Leung, Alex Po, Purushotham, Sanjay, Que, Zhiqiang, Quinnan, Melissa, Ranjan, Arghya, Rankin, Dylan, Reissel, Christina, Riedel, Benedikt, Rubenstein, Dan, Sasli, Argyro, Shlizerman, Eli, Singh, Arushi, Singh, Kim, Sokol, Eric R., Sorensen, Arturo, Su, Yu, Taheri, Mitra, Thakkar, Vaibhav, Thomas, Ann Mariam, Toberer, Eric, Tsai, Chenghan, Vandewalle, Rebecca, Verma, Arjun, Venterea, Ricco C., Wang, He, Wang, Jianwu, Wang, Sam, Wang, Shaowen, Watts, Gordon, Weitz, Jason, Wildridge, Andrew, Williams, Rebecca, Wolf, Scott, Xu, Yue, Yan, Jianqi, Yu, Jai, Zhang, Yulei, Zhao, Haoran, Zhao, Ying, Zhong, Yibo
Scientific discoveries are often made by finding a pattern or object that was not predicted by the known rules of science. Often, such anomalous events or objects that do not conform to the norm indicate that the rules governing the data are incomplete, and that something new is needed to explain these unexpected outliers. Finding anomalies can be confounding, since it requires codifying complete knowledge of known scientific behaviors and then projecting those behaviors onto the data to look for deviations. When using machine learning, this presents a particular challenge: we require that the model not only understand the scientific data but also recognize when the data are inconsistent and outside the scope of its trained behavior. In this paper, we present three datasets aimed at developing machine learning-based anomaly detection for disparate scientific domains covering astrophysics, genomics, and polar science. We present the different datasets along with a scheme to make machine learning challenges around the three datasets findable, accessible, interoperable, and reusable (FAIR). Furthermore, we present an approach that generalizes to future machine learning challenges, enabling larger, more compute-intensive challenges that can ultimately lead to scientific discovery.
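The core idea above, modeling known behavior and flagging deviations from it, can be illustrated with a minimal reconstruction-based detector. The sketch below is not from the paper: it assumes a generic tabular dataset and uses an autoencoder's reconstruction error as the anomaly score, one common way to realize this setup.

```python
# Minimal sketch (not the paper's method): an autoencoder learns the "known"
# data distribution; samples it reconstructs poorly are flagged as anomalies.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, n_latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, x_normal, epochs=50, lr=1e-3):
    # Train only on in-distribution examples, i.e. the known scientific behavior.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_normal), x_normal)
        loss.backward()
        opt.step()

def anomaly_score(model, x):
    # Per-sample reconstruction error; large values fall outside trained behavior.
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

x_train = torch.randn(1024, 16)        # stand-in for curated "normal" data
model = AutoEncoder(n_features=16)
train(model, x_train)
scores = anomaly_score(model, torch.randn(8, 16))
flags = scores > scores.mean() + 3 * scores.std()   # simple threshold rule
```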
BioCLIP: A Vision Foundation Model for the Tree of Life
Stevens, Samuel, Wu, Jiaman, Thompson, Matthew J, Campolongo, Elizabeth G, Song, Chan Hee, Carlyn, David Edward, Dong, Li, Dahdul, Wasila M, Stewart, Charles, Berger-Wolf, Tanya, Chao, Wei-Lun, Su, Yu
Images of the natural world, collected by a variety of cameras, from drones to individual phones, are increasingly abundant sources of biological information. There is an explosion of computational methods and tools, particularly computer vision, for extracting biologically relevant information from images for science and conservation. Yet most of these are bespoke approaches designed for a specific task and are not easily adaptable or extendable to new questions, contexts, and datasets. A general vision model for organismal biology questions on images is thus timely and needed. To approach this, we curate and release TreeOfLife-10M, the largest and most diverse ML-ready dataset of biology images. We then develop BioCLIP, a foundation model for the tree of life, leveraging the unique properties of biology captured by TreeOfLife-10M, namely the abundance and variety of images of plants, animals, and fungi, together with the availability of rich structured biological knowledge. We rigorously benchmark our approach on diverse fine-grained biology classification tasks, and find that BioCLIP consistently and substantially outperforms existing baselines (by 17% to 20% absolute). Intrinsic evaluation reveals that BioCLIP has learned a hierarchical representation conforming to the tree of life, shedding light on its strong generalizability. Our code, models, and data will be made available at https://github.com/Imageomics/bioclip.
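For readers who want to try the released model, zero-shot classification follows the standard CLIP recipe. The sketch below is hedged: it assumes BioCLIP is loadable through the open_clip library under the Hugging Face hub id imageomics/bioclip (as documented in the repository above), and the image path and candidate taxa are placeholders.

```python
# Hedged sketch: zero-shot species classification with BioCLIP via open_clip.
# Assumes `pip install open_clip_torch` and the hub id "hf-hub:imageomics/bioclip".
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:imageomics/bioclip")
tokenizer = open_clip.get_tokenizer("hf-hub:imageomics/bioclip")
model.eval()

labels = ["Danaus plexippus", "Papilio machaon"]        # candidate taxa (placeholders)
text = tokenizer([f"a photo of {name}" for name in labels])
image = preprocess(Image.open("butterfly.jpg")).unsqueeze(0)  # placeholder path

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(text)
    # Cosine-similarity scores between the image and each label embedding.
    image_feat /= image_feat.norm(dim=-1, keepdim=True)
    text_feat /= text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```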
LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
Song, Chan Hee, Wu, Jiaman, Washington, Clayton, Sadler, Brian M., Chao, Wei-Lun, Su, Yu
This study focuses on using large language models (LLMs) as a planner for embodied agents that can follow natural language instructions to complete complex tasks in a visually-perceived environment. The high data cost and poor sample efficiency of existing methods hinder the development of versatile agents that are capable of many tasks and can learn new tasks quickly. In this work, we propose a novel method, LLM-Planner, that harnesses the power of large language models to do few-shot planning for embodied agents. We further propose a simple but effective way to enhance LLMs with physical grounding to generate and update plans that are grounded in the current environment. Experiments on the ALFRED dataset show that our method achieves very competitive few-shot performance: despite using less than 0.5% of paired training data, LLM-Planner achieves competitive performance with recent baselines that are trained using the full training data. Existing methods can barely complete any task successfully under the same few-shot setting. Our work opens the door for developing versatile and sample-efficient embodied agents that can quickly learn many tasks. Website: https://dki-lab.github.io/LLM-Planner
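The few-shot, grounded planning loop can be pictured as prompt construction: a handful of in-context (instruction, plan) examples, plus the objects currently observed in the scene, are fed to an LLM, and the plan is regenerated when execution stalls. The function names and prompt format below are illustrative, not the paper's exact implementation; `call_llm` is a hypothetical stand-in for any completions API.

```python
# Illustrative sketch of few-shot grounded planning (not the paper's exact code).

FEW_SHOT = [
    ("Put a washed apple in the fridge",
     "find apple, pick up apple, find sink, wash apple, find fridge, put apple"),
    ("Throw away the mug",
     "find mug, pick up mug, find trash can, put mug"),
]

def build_prompt(instruction: str, visible_objects: list[str],
                 completed_steps: list[str]) -> str:
    shots = "\n".join(f"Task: {t}\nPlan: {p}" for t, p in FEW_SHOT)
    return (
        f"{shots}\n"
        f"Task: {instruction}\n"
        # Physical grounding: only mention objects the agent has actually observed.
        f"Visible objects: {', '.join(visible_objects)}\n"
        f"Completed steps: {', '.join(completed_steps) or 'none'}\n"
        f"Plan:"
    )

def plan(instruction, visible_objects, completed_steps, call_llm):
    text = call_llm(build_prompt(instruction, visible_objects, completed_steps))
    return [step.strip() for step in text.split(",") if step.strip()]

# When execution fails, the agent re-plans with the updated observation list:
# new_plan = plan(instruction, updated_objects, completed_steps, call_llm)
```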
A Slot Is Not Built in One Utterance: Spoken Language Dialogs with Sub-Slots
Zhang, Sai, Hu, Yuwei, Wu, Yuchuan, Wu, Jiaman, Li, Yongbin, Sun, Jian, Yuan, Caixia, Wang, Xiaojie
A slot value might be provided segment by segment over multi-turn interactions in a dialog, especially for important information such as phone numbers and names. This is a common phenomenon in daily life, but little attention has been paid to it in previous work. To fill the gap, this paper defines a new task named Sub-Slot based Task-Oriented Dialog (SSTOD) and builds a Chinese dialog dataset, SSD, for boosting research on SSTOD. The dataset includes a total of 40K dialogs and 500K utterances from four different domains: Chinese names, phone numbers, ID numbers, and license plate numbers. The data is well annotated with sub-slot values, slot values, dialog states, and actions. We find new linguistic phenomena and interactive manners in SSTOD which raise critical challenges for building dialog agents for the task. We test three state-of-the-art dialog models on SSTOD and find they cannot handle the task well in any of the four domains. We also investigate an improved model that incorporates slot knowledge in a plug-in manner. More work should be done to meet the new challenges raised by SSTOD, which widely exists in real-life applications. The dataset and code are publicly available via https://github.com/shunjiu/SSTOD.
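To make the sub-slot setting concrete, a minimal tracker might accumulate segments of a single slot value across turns and commit the slot only once it is complete. This is an illustrative sketch, not the dataset's annotation scheme or a competitive model; the class and its fields are hypothetical.

```python
# Illustrative sketch: a slot value (e.g., a phone number) arriving
# segment by segment over multiple turns, as in the SSTOD task.

class SubSlotTracker:
    def __init__(self, slot: str, expected_len: int):
        self.slot = slot
        self.expected_len = expected_len   # e.g., 11 digits for a CN phone number
        self.segments: list[str] = []

    def update(self, segment: str) -> None:
        self.segments.append(segment)

    def revise_last(self, segment: str) -> None:
        # Users often correct the most recent segment ("no, it ends in 89").
        if self.segments:
            self.segments[-1] = segment

    @property
    def value(self) -> str:
        return "".join(self.segments)

    def is_complete(self) -> bool:
        return len(self.value) >= self.expected_len

tracker = SubSlotTracker("phone_number", expected_len=11)
for turn in ["138", "2476", "90", "13"]:   # four utterances, one sub-slot each
    tracker.update(turn)
assert tracker.is_complete() and tracker.value == "13824769013"
```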