
 Xue, Tengfei


Enhancing Advanced Visual Reasoning Ability of Large Language Models

arXiv.org Artificial Intelligence

Recent advancements in Vision-Language (VL) research have sparked new benchmarks for complex visual reasoning, challenging models' advanced reasoning ability. Traditional Vision-Language Models (VLMs) perform well in visual perception tasks but struggle with complex reasoning scenarios. Conversely, Large Language Models (LLMs) demonstrate robust text reasoning capabilities; however, they lack visual acuity. To bridge this gap, we propose Complex Visual Reasoning Large Language Models (CVR-LLM), capitalizing on VLMs' visual perception proficiency and LLMs' extensive reasoning capability. Unlike recent multimodal large language models (MLLMs) that require a projection layer, our approach transforms images into detailed, context-aware descriptions using an iterative self-refinement loop and leverages LLMs' text knowledge for accurate predictions without extra training. We also introduce a novel multi-modal in-context learning (ICL) methodology to enhance LLMs' contextual understanding and reasoning. Additionally, we introduce Chain-of-Comparison (CoC), a step-by-step technique for contrasting various aspects of the predictions. Our CVR-LLM presents the first comprehensive study across a wide array of complex visual reasoning tasks and achieves SOTA performance on all of them.
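
The iterative self-refinement loop can be pictured roughly as follows; vlm_describe and llm_critique are hypothetical callables standing in for a captioning VLM and an LLM judge, not the paper's actual interfaces.

def refine_description(image, task_context, vlm_describe, llm_critique, max_iters=3):
    # Minimal sketch of the iterative self-refinement idea, assuming the
    # critique returns a dict with a "sufficient" flag and free-text feedback
    # (hypothetical interface).
    description, feedback = None, None
    for _ in range(max_iters):
        # VLM (re)generates a context-aware description, conditioned on prior feedback.
        description = vlm_describe(image, feedback=feedback)
        # LLM judges whether the description carries enough detail for the reasoning task.
        feedback = llm_critique(description, task_context)
        if feedback.get("sufficient", False):
            break
    return description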


NinjaLLM: Fast, Scalable and Cost-effective RAG using Amazon SageMaker and AWS Trainium and Inferentia2

arXiv.org Artificial Intelligence

Retrieval-augmented generation (RAG) techniques are widely used today to retrieve and present information in a conversational format. This paper presents a set of enhancements to traditional RAG techniques, focusing on large language models (LLMs) fine-tuned and hosted on AWS Trainium and Inferentia2 AI chips via SageMaker. These chips are characterized by their elasticity, affordability, and efficient performance for AI compute tasks. Besides enabling deployment on these chips, this work aims to improve tool usage, add citation capabilities, and mitigate the risks of hallucinations and unsafe responses due to context bias. We benchmark our RAG system's performance on the Natural Questions and HotPotQA datasets, achieving accuracies of 62% and 59%, respectively, exceeding models such as DBRX and Mixtral Instruct.
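
As a rough illustration of a RAG answer path with inline citations (not the NinjaLLM implementation and not any SageMaker/Trainium API; retriever and generate are hypothetical callables):

def answer_with_citations(question, retriever, generate, k=5):
    # Retrieve top-k passages and number them so the model can cite them as [n].
    passages = retriever(question, k=k)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Answer the question using only the context below and cite sources as [n].\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    # Constraining the answer to the retrieved context is one simple way to
    # reduce hallucinations; the paper's safeguards are more involved.
    return generate(prompt)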


A Deep Network for Explainable Prediction of Non-Imaging Phenotypes using Anatomical Multi-View Data

arXiv.org Artificial Intelligence

Large datasets often contain multiple distinct feature sets, or views, that offer complementary information that can be exploited by multi-view learning methods to improve results. We investigate anatomical multi-view data, where each brain anatomical structure is described with multiple feature sets. In particular, we focus on sets of white matter microstructure and connectivity features from diffusion MRI, as well as sets of gray matter area and thickness features from structural MRI. We investigate machine learning methodology that applies multi-view approaches to improve the prediction of non-imaging phenotypes, including demographics (age), motor (strength), and cognition (picture vocabulary). We present an explainable multi-view network (EMV-Net) that can use different anatomical views to improve prediction performance. In this network, each individual anatomical view is processed by a view-specific feature extractor, and the extracted information from each view is fused using learnable weights. This is followed by a wavelet-transform-based module that obtains complementary information across views, which is then used to calibrate the view-specific information. Additionally, the calibrator produces an attention-based calibration score that indicates the importance of anatomical structures for interpretation.
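
A minimal PyTorch sketch of the view-specific extractors and learnable-weight fusion (the wavelet-based calibration and attention scores are omitted; layer sizes and the regression head are illustrative, not the paper's architecture):

import torch
import torch.nn as nn

class WeightedViewFusion(nn.Module):
    def __init__(self, n_views, in_dim, hidden_dim=64):
        super().__init__()
        # One small feature extractor per anatomical view.
        self.extractors = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU()) for _ in range(n_views)]
        )
        # Learnable per-view fusion weights, normalized with softmax.
        self.view_logits = nn.Parameter(torch.zeros(n_views))
        self.head = nn.Linear(hidden_dim, 1)  # e.g., age regression

    def forward(self, views):
        # views: list of (batch, in_dim) tensors, one per anatomical view.
        feats = torch.stack([ext(v) for ext, v in zip(self.extractors, views)])
        weights = torch.softmax(self.view_logits, dim=0).view(-1, 1, 1)
        fused = (weights * feats).sum(dim=0)
        return self.head(fused)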


Age Prediction Performance Varies Across Deep, Superficial, and Cerebellar White Matter Connections

arXiv.org Artificial Intelligence

The brain's white matter (WM) undergoes developmental and degenerative processes during the human lifespan. To investigate the relationship between WM anatomical regions and age, we study diffusion magnetic resonance imaging tractography that is finely parcellated into fiber clusters in the deep, superficial, and cerebellar WM. We propose a deep-learning-based age prediction model that leverages large convolutional kernels and inverted bottlenecks. We improve performance using a novel discrete multi-faceted mix data augmentation strategy and a prior-knowledge-based loss function that encourages age predictions within the expected range. We study a dataset of 965 healthy young adults (22-37 years) derived from the Human Connectome Project (HCP). Experimental results demonstrate that the proposed model achieves a mean absolute error of 2.59 years and outperforms compared methods. We find that the deep WM is the most informative for age prediction in this cohort, while the superficial WM is the least informative. Overall, the most predictive WM tracts are the thalamo-frontal tract from the deep WM and the intracerebellar input and Purkinje tract from the cerebellar WM.
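
One plausible reading of a prior-knowledge-based loss for this cohort is a mean-absolute-error term plus a hinge penalty on predictions outside the 22-37 year range; this is an illustrative guess, not the paper's exact formulation:

import torch

def age_loss(pred, target, lo=22.0, hi=37.0, penalty_weight=1.0):
    # Mean absolute error on the age predictions.
    mae = torch.abs(pred - target).mean()
    # Hinge-style penalty for predictions below/above the expected age range.
    below = torch.relu(lo - pred)
    above = torch.relu(pred - hi)
    range_penalty = (below + above).mean()
    return mae + penalty_weight * range_penalty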


A Registration- and Uncertainty-based Framework for White Matter Tract Segmentation With Only One Annotated Subject

arXiv.org Artificial Intelligence

White matter (WM) tract segmentation based on diffusion magnetic resonance imaging (dMRI) plays an important role in the analysis of human health and brain diseases. However, the annotation of WM tracts is time-consuming and requires experienced neuroanatomists. In this study, to explore tract segmentation in the challenging setting of minimal annotations, we propose a novel framework utilizing only one annotated subject (subject-level one-shot) for tract segmentation. Our method is built from the proposed registration-based peak augmentation (RPA) and uncertainty-based refining (URe) modules. The RPA module synthesizes pseudo subjects and their corresponding labels to improve tract segmentation performance. The proposed URe module alleviates the negative influence of low-confidence voxels on the pseudo subjects. Experimental results show that our method outperforms other state-of-the-art methods by a large margin, and our proposed modules are effective. Overall, our method achieves accurate whole-brain tract segmentation with only one annotated subject. Our code is available at https://github.com/HaoXu0507/ISBI2023-One-Shot-WM-Tract-Segmentation.
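
An illustrative reading of uncertainty-based refining is to keep only high-confidence voxels of the pseudo subjects when computing the segmentation loss; the threshold and the use of softmax confidence are assumptions, not the authors' exact rule:

import torch
import torch.nn.functional as F

def confident_pseudo_label_loss(logits, pseudo_labels, threshold=0.9):
    # logits: (N, C) per-voxel class scores; pseudo_labels: (N,) integer labels.
    confidence = F.softmax(logits, dim=1).max(dim=1).values
    keep = confidence > threshold  # discard low-confidence voxels
    if keep.sum() == 0:
        return logits.sum() * 0.0  # no confident voxels in this batch
    return F.cross_entropy(logits[keep], pseudo_labels[keep])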


Superficial White Matter Analysis: An Efficient Point-cloud-based Deep Learning Framework with Supervised Contrastive Learning for Consistent Tractography Parcellation across Populations and dMRI Acquisitions

arXiv.org Artificial Intelligence

Diffusion MRI tractography is an advanced imaging technique that enables in vivo mapping of the brain's white matter connections. White matter parcellation classifies tractography streamlines into clusters or anatomically meaningful tracts. It enables quantification and visualization of whole-brain tractography. Currently, most parcellation methods focus on the deep white matter (DWM), whereas fewer methods address the superficial white matter (SWM) due to its complexity. We propose a novel two-stage deep-learning-based framework, Superficial White Matter Analysis (SupWMA), that performs an efficient and consistent parcellation of 198 SWM clusters from whole-brain tractography. A point-cloud-based network is adapted to our SWM parcellation task, and supervised contrastive learning enables more discriminative representations that separate plausible streamlines from outliers in the SWM. We train our model on a large-scale tractography dataset including streamline samples from labeled long- and medium-range (over 40 mm) SWM clusters and anatomically implausible streamline samples, and we perform testing on six independently acquired datasets of different ages and health conditions (including neonates and patients with space-occupying brain tumors). Compared to several state-of-the-art methods, SupWMA obtains highly consistent and accurate SWM parcellation results on all datasets, showing good generalization across the lifespan in health and disease. In addition, SupWMA is computationally much faster than the other methods.
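
The supervised contrastive objective can be sketched with a generic SupCon-style loss over streamline embeddings (a standard formulation; SupWMA's exact setup may differ):

import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    # embeddings: (N, D) streamline embeddings; labels: (N,) cluster/outlier ids.
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature
    not_self = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    # Positives share the same label (excluding the anchor itself).
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self
    # Log-softmax over all non-anchor samples.
    exp_sim = torch.exp(sim) * not_self
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True) + 1e-12)
    pos_counts = pos.sum(dim=1)
    valid = pos_counts > 0  # anchors with at least one positive
    mean_log_prob_pos = (pos * log_prob).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()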