Zhu, Ruike
RAS: Retrieval-And-Structuring for Knowledge-Intensive LLM Generation
Jiang, Pengcheng, Cao, Lang, Zhu, Ruike, Jiang, Minhao, Zhang, Yunyi, Sun, Jimeng, Han, Jiawei
Retrieval-augmented language models often struggle with knowledge-intensive tasks due to inefficient retrieval, unstructured knowledge integration, and single-pass architectures. We present Retrieval-And-Structuring (RAS), a novel framework that dynamically constructs and reasons over query-specific knowledge graphs through iterative retrieval and structuring. RAS introduces four key technical innovations: (1) a theme-scoped retrieval mechanism that efficiently narrows the search space while maintaining retrieval quality, (2) an action planning module that determines knowledge needs and generates focused sub-queries, (3) a dynamic knowledge structuring approach that converts retrieved text into an evolving knowledge graph, and (4) a graph-augmented answering component that leverages the accumulated structured information. Our framework achieves state-of-the-art performance, surpassing leading baselines by 6.4% with open-source language models and 7.0% with proprietary models on seven knowledge-intensive generation datasets across all evaluation metrics. Detailed ablation studies verify the contribution of each technical component to the overall system performance.
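The four components listed in the abstract form an iterative loop. A minimal toy sketch of that loop is below; the function names, heuristics, and the "A | relation | B" passage format are all illustrative stand-ins for what are, in the paper, learned neural components.

```python
# Toy sketch of the RAS loop: plan a sub-query, retrieve within a narrowed
# scope, structure results into a growing triple graph, then answer from it.
# Every function here is a hypothetical stand-in, not the paper's model.

def theme_scoped_retrieve(sub_query, corpus):
    # Stand-in "theme-scoped" retrieval: keep passages sharing a term
    # with the sub-query (a crude proxy for narrowing the search space).
    terms = set(sub_query.lower().split())
    return [p for p in corpus if terms & set(p.lower().split())]

def structure_as_triples(passages):
    # Stand-in knowledge structuring: parse "head | relation | tail" lines.
    triples = []
    for p in passages:
        parts = [s.strip() for s in p.split("|")]
        if len(parts) == 3:
            triples.append(tuple(parts))
    return triples

def plan_next_action(query, graph, step, max_steps=3):
    # Stand-in action planner: stop once the graph mentions the query's
    # final token, otherwise reissue the query as the next sub-query.
    subject = query.split()[-1].rstrip("?")
    done = any(subject in elem for triple in graph for elem in triple)
    return None if (done or step >= max_steps) else query

def ras_answer(query, corpus):
    graph, step = [], 0  # evolving query-specific knowledge graph
    while True:
        sub_query = plan_next_action(query, graph, step)
        if sub_query is None:
            break
        passages = theme_scoped_retrieve(sub_query, corpus)
        graph.extend(structure_as_triples(passages))
        step += 1
    # Graph-augmented answering: read the related entity off the graph.
    subject = query.split()[-1].rstrip("?")
    for head, rel, tail in graph:
        if subject in tail:
            return head
    return "unknown"

corpus = ["Paris | capital of | France",
          "The Eiffel Tower | located in | Paris"]
print(ras_answer("What is the capital of France", corpus))  # Paris
```

The point of the sketch is only the control flow: planning decides whether more knowledge is needed, each iteration both retrieves and structures, and the final answer is produced from the accumulated graph rather than from raw retrieved text.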
Diff-Ensembler: Learning to Ensemble 2D Diffusion Models for Volume-to-Volume Medical Image Translation
Zhu, Xiyue, Kwark, Dou Hoon, Zhu, Ruike, Hong, Kaiwen, Tao, Yiqi, Luo, Shirui, Li, Yudu, Liang, Zhi-Pei, Kindratenko, Volodymyr
Despite success in volume-to-volume translation for medical images, most existing models struggle to effectively capture the inherent volumetric distribution using 3D representations. The current state-of-the-art approach combines multiple 2D-based networks through weighted averaging, thereby neglecting the 3D spatial structures. Directly training 3D models in medical imaging presents significant challenges due to high computational demands and the need for large-scale datasets. To address these challenges, we introduce Diff-Ensembler, a novel hybrid 2D-3D model for efficient and effective volumetric translation that ensembles perpendicularly trained 2D diffusion models with a 3D network at each diffusion step. Moreover, our model can naturally be used to ensemble diffusion models conditioned on different modalities, allowing flexible and accurate fusion of input conditions. Extensive experiments demonstrate that Diff-Ensembler attains superior accuracy and volumetric realism in 3D medical image super-resolution and modality translation. We further demonstrate the strength of our model's volumetric realism using tumor segmentation as a downstream task.
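The hybrid 2D-3D idea can be sketched in a few lines: at each diffusion step, 2D models trained on perpendicular slice orientations each process the volume slice-by-slice, and a 3D module fuses their outputs. In the toy sketch below, the "denoisers" and the fusion are simple numerical stand-ins (a mild shrink toward the slice mean, and an average), not the paper's networks.

```python
import numpy as np

def denoise_2d_stack(volume, axis):
    # Stand-in 2D denoiser applied slice-wise along one orientation:
    # shrink each slice mildly toward its mean (illustrative only).
    out = np.empty_like(volume)
    for i in range(volume.shape[axis]):
        sl = np.take(volume, i, axis=axis)
        idx = [slice(None)] * 3
        idx[axis] = i
        out[tuple(idx)] = 0.9 * sl + 0.1 * sl.mean()
    return out

def fuse_3d(candidates):
    # Stand-in for the 3D fusion network: average the three
    # perpendicular predictions (the paper learns this fusion).
    return np.mean(candidates, axis=0)

def diff_ensembler_step(volume):
    # One diffusion step: three perpendicular 2D passes, then 3D fusion.
    perpendicular = [denoise_2d_stack(volume, ax) for ax in (0, 1, 2)]
    return fuse_3d(perpendicular)

rng = np.random.default_rng(0)
vol = rng.standard_normal((8, 8, 8))
for _ in range(4):  # a few reverse-diffusion steps
    vol = diff_ensembler_step(vol)
print(vol.shape)  # (8, 8, 8)
```

The structural point is that the fusion happens inside every diffusion step rather than once at the end, which is how the model injects 3D spatial consistency without training a full 3D diffusion model.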
FAIR AI Models in High Energy Physics
Duarte, Javier, Li, Haoyang, Roy, Avik, Zhu, Ruike, Huerta, E. A., Diaz, Daniel, Harris, Philip, Kansal, Raghav, Katz, Daniel S., Kavoori, Ishaan H., Kindratenko, Volodymyr V., Mokhtar, Farouk, Neubauer, Mark S., Park, Sang Eon, Quinnan, Melissa, Rusack, Roger, Zhao, Zhizhen
The findable, accessible, interoperable, and reusable (FAIR) data principles provide a framework for examining, evaluating, and improving how data is shared to facilitate scientific discovery. Generalizing these principles to research software and other digital products is an active area of research. Machine learning (ML) models -- algorithms that have been trained on data without being explicitly programmed -- and more generally, artificial intelligence (AI) models, are an important target for this because of the ever-increasing pace with which AI is transforming scientific domains, such as experimental high energy physics (HEP). In this paper, we propose a practical definition of FAIR principles for AI models in HEP and describe a template for the application of these principles. We demonstrate the template's use with an example AI model applied to HEP, in which a graph neural network is used to identify Higgs bosons decaying to two bottom quarks. We report on the robustness of this FAIR AI model, its portability across hardware architectures and software frameworks, and its interpretability.
FAIR for AI: An interdisciplinary and international community building perspective
Huerta, E. A., Blaiszik, Ben, Brinson, L. Catherine, Bouchard, Kristofer E., Diaz, Daniel, Doglioni, Caterina, Duarte, Javier M., Emani, Murali, Foster, Ian, Fox, Geoffrey, Harris, Philip, Heinrich, Lukas, Jha, Shantenu, Katz, Daniel S., Kindratenko, Volodymyr, Kirkpatrick, Christine R., Lassila-Perini, Kati, Madduri, Ravi K., Neubauer, Mark S., Psomopoulos, Fotis E., Roy, Avik, Rübel, Oliver, Zhao, Zhizhen, Zhu, Ruike
A foundational set of findable, accessible, interoperable, and reusable (FAIR) principles was proposed in 2016 as prerequisites for proper data management and stewardship, with the goal of enabling the reusability of scholarly data. The principles were also meant to apply to other digital assets at a high level, and over time the FAIR guiding principles have been re-interpreted or extended to include the software, tools, algorithms, and workflows that produce data. FAIR principles are now being adapted in the context of AI models and datasets. Here, we present the perspectives, vision, and experiences of researchers from different countries, disciplines, and backgrounds who are leading the definition and adoption of FAIR principles in their communities of practice, and discuss outcomes that may result from pursuing and incentivizing FAIR AI research. The material for this report builds on the FAIR for AI Workshop held at Argonne National Laboratory on June 7, 2022.