 Cao, Zhuo


MRI Reconstruction with Regularized 3D Diffusion Model (R3DM)

arXiv.org Artificial Intelligence

To speed up acquisition, MRI instruments acquire sub-sampled k-space data, i.e., only a fraction of the total k-space data points are sampled during the imaging process. Several methods have been proposed to reconstruct two-dimensional (2D) and three-dimensional (3D) images from sub-sampled k-space, as discussed in [11, 13, 31]. Advances in 3D MR imaging can address the challenges posed by the complex anatomical structures of human organs and plant growth, so the demand for 3D MR image reconstruction methods has intensified. Currently, most works reconstruct a 3D volumetric image by stacking 2D reconstructions, because MR images are acquired slice by slice. This approach ignores the inter-dependency between slices and can therefore lead to inconsistencies and artifacts, as discussed in [4, 8, 50]. It particularly affects datasets whose information is equally distributed and whose structures are highly continuous across all dimensions, such as roots and vessels [4, 38, 50]. Before deep learning-based models, which learn data-driven priors, model-based iterative reconstruction had proved effective for the 3D MRI reconstruction problem [15, 54]. The problem is formulated as an optimization problem in which a data consistency term ensures fidelity to the measurements and a regularization term, such as the Total Variation (TV) penalty [24], provides general prior knowledge of MRI data.
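As a rough illustration (not the paper's own notation), this model-based formulation is commonly written as

\min_{x}\ \tfrac{1}{2}\,\lVert \mathcal{F}_{\Omega}\, x - y \rVert_2^2 \;+\; \lambda\,\mathrm{TV}(x),

where x is the 3D volume to reconstruct, \mathcal{F}_{\Omega} is the Fourier transform restricted to the sampled k-space locations \Omega, y is the measured sub-sampled k-space data, the first term enforces data consistency, and \lambda weights the TV regularization. The symbols here are generic conventions for this class of problems, assumed for illustration only.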


FlashVTG: Feature Layering and Adaptive Score Handling Network for Video Temporal Grounding

arXiv.org Artificial Intelligence

Text-guided Video Temporal Grounding (VTG) aims to localize relevant segments in untrimmed videos based on textual descriptions, encompassing two subtasks: Moment Retrieval (MR) and Highlight Detection (HD). Although typical previous methods have achieved commendable results, retrieving short video moments remains challenging, primarily because of the reliance on sparse and limited decoder queries, which significantly constrains prediction accuracy. Furthermore, suboptimal outcomes often arise because previous methods rank each prediction in isolation, neglecting the broader video context. To tackle these issues, we introduce FlashVTG, a framework featuring a Temporal Feature Layering (TFL) module and an Adaptive Score Refinement (ASR) module. The TFL module replaces the traditional decoder structure to capture nuanced video content variations across multiple temporal scales, while the ASR module improves prediction ranking by integrating context from adjacent moments and multi-temporal-scale features. Extensive experiments demonstrate that FlashVTG achieves state-of-the-art performance on four widely adopted datasets in both MR and HD. Specifically, on the QVHighlights dataset, it boosts mAP by 5.8% for MR and 3.3% for HD. For short-moment retrieval, FlashVTG increases mAP to 125% of previous SOTA performance. All these improvements are made without adding training burden, underscoring its effectiveness. Our code is available at https://github.com/Zhuo-Cao/FlashVTG.
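To make the multi-scale idea behind the TFL description concrete, below is a minimal, hypothetical sketch of a temporal feature pyramid built from strided 1D convolutions; the class name, layer choices, and shapes are illustrative assumptions, not the authors' implementation (see the linked repository for that).

# Hypothetical sketch of a multi-temporal-scale feature pyramid, illustrating the
# general idea of layering video features across temporal scales (assumed design,
# not FlashVTG's actual TFL module).
import torch
import torch.nn as nn

class TemporalFeaturePyramid(nn.Module):
    def __init__(self, dim=256, num_scales=4):
        super().__init__()
        # Each level halves the temporal resolution with a strided 1D convolution,
        # yielding progressively coarser views of the clip sequence.
        self.levels = nn.ModuleList(
            [nn.Conv1d(dim, dim, kernel_size=3, stride=2, padding=1)
             for _ in range(num_scales)]
        )

    def forward(self, clip_feats):
        # clip_feats: (batch, dim, num_clips) text-conditioned video clip features.
        pyramid = [clip_feats]
        x = clip_feats
        for conv in self.levels:
            x = torch.relu(conv(x))
            pyramid.append(x)  # one additional, coarser temporal scale per level
        return pyramid

# Example: 75 clip features of dimension 256 for one video.
feats = torch.randn(1, 256, 75)
scales = TemporalFeaturePyramid()(feats)
print([s.shape[-1] for s in scales])  # [75, 38, 19, 10, 5]

Keeping the finest scale alongside the coarser ones is what would let short moments be scored on high-resolution features while longer moments draw on the downsampled levels.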