Yang, Minghao
HiCMamba: Enhancing Hi-C Resolution and Identifying 3D Genome Structures with State Space Modeling
Yang, Minghao, Huang, Zhi-An, Zheng, Zhihang, Liu, Yuqiao, Zhang, Shichen, Zhang, Pengfei, Xiong, Hui, Tang, Shaojun
However, high sequencing costs and technical challenges often result in Hi-C data with limited coverage, leading to imprecise estimates of chromatin interaction frequencies. To address this issue, we present HiCMamba, a novel deep learning-based method that enhances the resolution of Hi-C contact maps using a state space model. We adopt a UNet-based auto-encoder architecture that stacks the proposed holistic scan blocks, enabling the perception of both global and local receptive fields at multiple scales. Experimental results demonstrate that HiCMamba outperforms state-of-the-art methods while significantly reducing computational resources. Furthermore, the 3D genome structures, including topologically associating domains (TADs) and loops, identified in the contact maps recovered by HiCMamba are validated through associated epigenomic features. Our work demonstrates the potential of state space models as a foundational framework in the field of Hi-C resolution enhancement.
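A minimal sketch of the core idea in Python, assuming a simplified diagonal state-space recurrence scanned bidirectionally over the flattened contact map; the class names, hyperparameters, and the scan itself are illustrative stand-ins for the paper's holistic scan block, not HiCMamba's actual implementation.

# Hypothetical sketch (not HiCMamba's implementation): a simplified diagonal
# state-space scan applied to a Hi-C contact map patch.
import torch
import torch.nn as nn

class SimpleSSMScan(nn.Module):
    """Diagonal state-space recurrence: h_t = a * h_{t-1} + B * x_t, y_t = sum(C * h_t)."""
    def __init__(self, dim, state_dim=16):
        super().__init__()
        self.log_a = nn.Parameter(-torch.rand(dim, state_dim))  # exp() gives decay in (0, 1]
        self.B = nn.Parameter(0.1 * torch.randn(dim, state_dim))
        self.C = nn.Parameter(0.1 * torch.randn(dim, state_dim))

    def forward(self, x):                          # x: (batch, length, dim)
        b, l, d = x.shape
        h = x.new_zeros(b, d, self.B.shape[1])
        a = torch.exp(self.log_a)                  # per-channel decay rates
        ys = []
        for t in range(l):                         # sequential scan over the sequence
            h = a * h + self.B * x[:, t].unsqueeze(-1)
            ys.append((h * self.C).sum(-1))
        return torch.stack(ys, dim=1)

class HolisticScanBlock(nn.Module):
    """Hypothetical block: scans the flattened map forward and backward for a global view."""
    def __init__(self, channels):
        super().__init__()
        self.scan = SimpleSSMScan(channels)
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):                          # x: (batch, channels, H, W)
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)         # row-major sequence of pixels
        out = self.scan(seq) + self.scan(seq.flip(1)).flip(1)  # bidirectional scan
        out = self.norm(out + seq)                 # residual connection + layer norm
        return out.transpose(1, 2).reshape(b, c, h, w)

block = HolisticScanBlock(channels=8)
patch = torch.randn(1, 8, 16, 16)                  # toy low-coverage Hi-C patch
print(block(patch).shape)                          # torch.Size([1, 8, 16, 16])

In a full model, such blocks would be stacked inside the UNet-style encoder and decoder so that the scan operates at multiple spatial scales.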
AURORA: Automated Training Framework of Universal Process Reward Models via Ensemble Prompting and Reverse Verification
Tan, Xiaoyu, Yao, Tianchu, Qu, Chao, Li, Bin, Yang, Minghao, Lu, Dakuan, Wang, Haozhe, Qiu, Xihe, Chu, Wei, Xu, Yinghui, Qi, Yuan
The reasoning capabilities of advanced large language models (LLMs) like o1 have revolutionized artificial intelligence applications. Nevertheless, evaluating and optimizing complex reasoning processes remain significant challenges due to diverse policy distributions and the inherent limitations of human effort and accuracy. In this paper, we present AURORA, a novel automated framework for training universal process reward models (PRMs) using ensemble prompting and reverse verification. The framework employs a two-phase approach: first, it uses diverse prompting strategies and ensemble methods to perform automated annotation and evaluation of reasoning processes, ensuring robust assessments for reward learning; second, it leverages practical reference answers for reverse verification, enhancing the model's ability to validate outputs and improving training accuracy. To assess the framework's performance, we extend beyond the existing ProcessBench benchmark by introducing UniversalBench, which evaluates reward predictions across full trajectories under diverse policy distributions with long Chain-of-Thought (CoT) outputs. Experimental results demonstrate that AURORA enhances process evaluation accuracy and improves PRMs' accuracy on diverse policy distributions and long-CoT responses. The project will be open-sourced at https://auroraprm.github.io/. The Universal-PRM-7B model is available at https://huggingface.co/infly/Universal-PRM-7B.
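A hedged sketch of the two phases as described in the abstract; the judge callable stands in for an LLM query, and the prompt templates, majority-vote rule, and backward-walking order are assumptions made for illustration, not the authors' implementation.

# Illustrative sketch of ensemble annotation and reverse verification;
# judge() is a placeholder for a real LLM call.
from collections import Counter
from typing import Callable, List

def ensemble_annotate(step: str, context: str,
                      prompts: List[str],
                      judge: Callable[[str], str]) -> str:
    """Phase 1: label one reasoning step with several prompting strategies
    and take a majority vote for a robust annotation."""
    votes = [judge(p.format(context=context, step=step)) for p in prompts]
    return Counter(votes).most_common(1)[0][0]

def reverse_verify(steps: List[str], reference_answer: str,
                   judge: Callable[[str], str]) -> List[str]:
    """Phase 2: re-check each step against the known reference answer,
    walking backwards from the conclusion."""
    labels = []
    for i in reversed(range(len(steps))):
        q = (f"Reference answer: {reference_answer}\n"
             f"Given that the final answer should hold, is step {i + 1} "
             f"consistent?\nStep: {steps[i]}")
        labels.append(judge(q))
    return list(reversed(labels))

# Toy judge that calls every step "correct"; a real system would query an LLM.
toy_judge = lambda prompt: "correct"
prompts = ["Assess this step strictly.\n{context}\n{step}",
           "Is the following step sound?\n{context}\n{step}"]
steps = ["2 + 2 = 4", "therefore x = 4"]
print(ensemble_annotate(steps[0], "solve 2+2", prompts, toy_judge))
print(reverse_verify(steps, "4", toy_judge))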
Biology Instructions: A Dataset and Benchmark for Multi-Omics Sequence Understanding Capability of Large Language Models
He, Haonan, Ren, Yuchen, Tang, Yining, Xu, Ziyang, Li, Junxian, Yang, Minghao, Zhang, Di, Yuan, Dong, Chen, Tao, Zhang, Shufei, Li, Yuqiang, Dong, Nanqing, Ouyang, Wanli, Zhou, Dongzhan, Ye, Peng
Large language models have already demonstrated formidable capabilities in general domains, ushering in a revolutionary transformation. However, leveraging the extensive knowledge of these models to comprehend multi-omics biology remains underexplored. To fill this research gap, we first introduce Biology-Instructions, the first large-scale multi-omics instruction-tuning dataset covering DNA, RNA, proteins, and multi-molecules, designed to bridge the gap between large language models (LLMs) and complex biological sequence-related tasks. This dataset can enhance the versatility of LLMs by integrating diverse biological sequence-based prediction tasks with advanced reasoning capabilities, while maintaining conversational fluency. Additionally, we reveal significant performance limitations of even state-of-the-art LLMs on biological sequence-related multi-omics tasks without specialized pre-training and instruction-tuning. We further develop a strong baseline called ChatMultiOmics with a novel three-stage training pipeline, demonstrating its powerful ability to understand biology when trained with Biology-Instructions. Biology-Instructions and ChatMultiOmics are publicly available and serve as crucial resources for enabling more effective integration of LLMs with multi-omics sequence analysis.
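For illustration only, a hypothetical shape for one instruction-tuning record; the abstract does not specify the dataset's schema, so every field name and value below is invented.

# Invented example of what a multi-omics instruction-tuning record could
# look like; not the actual Biology-Instructions schema.
record = {
    "modality": "DNA",                     # e.g., DNA | RNA | protein | multi-molecule
    "task": "promoter_detection",          # one of many sequence-based prediction tasks
    "instruction": "Does the following DNA sequence contain a promoter region? "
                   "Answer yes or no and explain briefly.",
    "input": "TATAAAAGGCGCGTACGATCGATCG",
    "output": "Yes. The sequence begins with a TATA-box-like motif (TATAAA), "
              "a common core promoter element.",
}
print(record["instruction"])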
Detect an Object At Once without Fine-tuning
Hao, Junyu, Liu, Jianheng, Zhao, Yongjia, Chen, Zuofan, Sun, Qi, Chen, Jinlong, Wei, Jianguo, Yang, Minghao
When presented with one or a few photos of a previously unseen object, humans can instantly recognize it in different scenes. Although the human brain mechanism behind this phenomenon is still not fully understood, this work introduces a novel technical realization of the task. It consists of two phases: (1) generating a Similarity Density Map (SDM) by convolving the scene image with the given object image patch(es), so that highlighted areas in the SDM indicate possible object locations; (2) obtaining the object-occupied areas in the scene through a Region Alignment Network (RAN). The RAN is built on a Deep Siamese Network (DSN) backbone but, unlike traditional DSNs, obtains accurate object regions by regressing the location and area differences between the ground truths and the predictions indicated by the highlighted areas in the SDM. By pre-learning from labels annotated in traditional datasets, SDM-RAN can detect previously unknown objects without fine-tuning. Experiments were conducted on the MS COCO and PASCAL VOC datasets. The results indicate that the proposed method outperforms state-of-the-art methods on the same task.
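Phase (1) can be sketched directly: cross-correlate the scene with a normalized object patch so that the peak of the resulting map marks the likely location. The zero-mean normalization below is an assumption; the paper's exact similarity measure may differ.

# Sketch of building a Similarity Density Map via cross-correlation;
# the normalization is an illustrative choice, not necessarily SDM-RAN's.
import torch
import torch.nn.functional as F

def similarity_density_map(scene: torch.Tensor, patch: torch.Tensor) -> torch.Tensor:
    """scene: (1, C, H, W); patch: (1, C, h, w). Returns (1, 1, H-h+1, W-w+1)
    where bright values mark likely object locations."""
    patch = (patch - patch.mean()) / (patch.std() + 1e-8)      # zero-mean template
    sdm = F.conv2d(scene, patch)                               # cross-correlation
    sdm = (sdm - sdm.min()) / (sdm.max() - sdm.min() + 1e-8)   # rescale to [0, 1]
    return sdm

scene = torch.randn(1, 3, 64, 64)
target = scene[:, :, 20:28, 30:38].clone()     # plant the "object" in the scene
sdm = similarity_density_map(scene, target)
peak = torch.nonzero(sdm[0, 0] == sdm.max())[0]
print("peak at", peak.tolist())                # expected at [20, 30]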
Rene: A Pre-trained Multi-modal Architecture for Auscultation of Respiratory Diseases
Zhang, Pengfei, Zheng, Zhihang, Zhang, Shichen, Yang, Minghao, Tang, Shaojun
Compared with invasive examinations that require tissue sampling, respiratory sound testing is a non-invasive examination method that is safer and easier for patients to accept. In this study, we introduce Rene, a pioneering large-scale model tailored for respiratory sound recognition. Rene has been rigorously fine-tuned with an extensive dataset featuring a broad array of respiratory audio samples, targeting disease detection, sound pattern classification, and event identification. Our innovative approach applies a pre-trained speech recognition model to process respiratory sounds, augmented with patient medical records. The resulting multi-modal deep-learning framework addresses interpretability and real-time diagnostic challenges that have hindered previous respiratory-focused models. Benchmark comparisons reveal that Rene significantly outperforms existing models, achieving improvements of 10.27%, 16.15%, 15.29%, and 18.90% in respiratory event detection and audio classification on the SPRSound database. Disease prediction accuracy on the ICBHI database improved by 23% over the baseline in both mean average and harmonic scores. Moreover, we have developed a real-time respiratory sound discrimination system utilizing the Rene architecture. Employing state-of-the-art Edge AI technology, this system enables rapid and accurate responses for respiratory sound auscultation (https://github.com/zpforlove/Rene).
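A heavily hedged sketch of the multi-modal idea: fuse an audio embedding (standing in for a pre-trained speech-recognition encoder) with patient-record features by concatenation. The encoders, dimensions, and fusion-by-concatenation choice are illustrative assumptions, not Rene's actual architecture.

# Toy multi-modal fusion sketch; every component here is a stand-in.
import torch
import torch.nn as nn

class MultiModalAuscultation(nn.Module):
    def __init__(self, audio_dim=256, record_dim=32, n_classes=4):
        super().__init__()
        # Placeholder for a pre-trained speech encoder (e.g., a frozen ASR backbone).
        self.audio_encoder = nn.Sequential(nn.Linear(1024, audio_dim), nn.ReLU())
        self.record_encoder = nn.Sequential(nn.Linear(16, record_dim), nn.ReLU())
        self.classifier = nn.Linear(audio_dim + record_dim, n_classes)

    def forward(self, audio_feats, record_feats):
        fused = torch.cat([self.audio_encoder(audio_feats),
                           self.record_encoder(record_feats)], dim=-1)
        return self.classifier(fused)

model = MultiModalAuscultation()
logits = model(torch.randn(2, 1024), torch.randn(2, 16))
print(logits.shape)  # torch.Size([2, 4])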
Decay Pruning Method: Smooth Pruning With a Self-Rectifying Procedure
Yang, Minghao, Gao, Linlin, Li, Pengyuan, Li, Wenbo, Dong, Yihong, Cui, Zhiying
Deep Neural Networks (DNNs) have been widely used in various applications, such as image classification [22; 40], object segmentation [33; 35], and object detection [6; 43]. However, the increasing size and complexity of DNNs often result in substantial computational and memory requirements, posing challenges for deployment on resource-constrained platforms such as mobile or embedded devices. Consequently, developing efficient methods that reduce the computational complexity and storage demands of large models while minimizing performance degradation has become essential. Network pruning is one of the most popular methods in model compression. Current network pruning methods are categorized into unstructured and structured pruning [5]. Unstructured pruning [11; 24] eliminates individual weights from a network to create fine-grained sparsity. Although these approaches achieve an excellent balance between model size reduction and accuracy retention, they often require specific hardware support for acceleration, which is impractical in general-purpose computing environments. Conversely, structured pruning [23; 18; 29] avoids these hardware dependencies by eliminating redundant network structures, introducing a more manageable and hardware-compatible form of sparsity. As a result, structured pruning has become popular and is extensively utilized.
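As a generic illustration of the structured pruning category described above (not the Decay Pruning Method itself), the sketch below removes whole convolution filters ranked by L1 norm, which keeps the resulting network dense and hardware-friendly; the keep ratio is arbitrary.

# Generic L1-norm filter pruning; illustrates structured pruning, not the
# Decay Pruning Method proposed in this paper.
import torch
import torch.nn as nn

def prune_conv_channels(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Keep the output channels whose filters have the largest L1 norm."""
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # L1 norm per filter
    keep = torch.topk(scores, n_keep).indices.sort().values
    pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                       conv.stride, conv.padding, bias=conv.bias is not None)
    pruned.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        pruned.bias.data = conv.bias.data[keep].clone()
    return pruned

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)
small = prune_conv_channels(conv, keep_ratio=0.25)
print(small.weight.shape)  # torch.Size([4, 3, 3, 3]) -- whole filters removed

Note that in a real network, the input channels of the following layer must be pruned consistently, which is exactly the bookkeeping that makes structured pruning hardware-compatible.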
BayesNAS: A Bayesian Approach for Neural Architecture Search
Zhou, Hongpeng, Yang, Minghao, Wang, Jun, Pan, Wei
One-Shot Neural Architecture Search (NAS) is a promising method to significantly reduce search time without any separate training. It can be treated as a network compression problem on the architecture parameters of an over-parameterized network. However, there are two issues associated with most one-shot NAS methods. First, dependencies between a node and its predecessors and successors are often disregarded, which results in improper treatment of zero operations. Second, pruning architecture parameters based on their magnitude is questionable. In this paper, we employ the classic Bayesian learning approach to alleviate these two issues by modeling architecture parameters using hierarchical automatic relevance determination (HARD) priors. Unlike other NAS methods, we train the over-parameterized network for only one epoch and then update the architecture. Impressively, this enables us to find the architecture on CIFAR-10 within only 0.2 GPU days using a single GPU. Competitive performance can also be achieved by transferring to ImageNet. As a byproduct, our approach can be applied directly to compress convolutional neural networks by enforcing structural sparsity, which achieves extremely sparse networks without accuracy deterioration.
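A toy sketch of the automatic relevance determination (ARD) idea: each parameter carries its own precision, and parameters whose learned precision diverges are pruned. The damped fixed-point update below is a textbook simplification, not the paper's hierarchical (HARD) prior or its exact algorithm.

# Toy ARD relevance estimation; a simplification, not BayesNAS's algorithm.
import numpy as np

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=8)        # posterior means of architecture parameters
mu[[1, 4, 6]] *= 0.01                     # make a few parameters nearly irrelevant
alpha = np.ones_like(mu)                  # ARD precision (relevance) per parameter
noise_prec = 10.0                         # toy, fixed noise precision

for _ in range(50):
    sigma2 = 1.0 / (noise_prec + alpha)   # toy posterior variance given alpha
    gamma = 1.0 - alpha * sigma2          # how well-determined each parameter is
    alpha = 0.5 * alpha + 0.5 * gamma / (mu ** 2 + 1e-12)  # damped fixed-point step

pruned = alpha > 100.0                    # prune by learned relevance, not raw magnitude
print("pruned parameter indices:", np.where(pruned)[0].tolist())  # expect [1, 4, 6]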