Xu, Min
PyTorch FSDP: Experiences on Scaling Fully Sharded Data Parallel
Zhao, Yanli, Gu, Andrew, Varma, Rohan, Luo, Liang, Huang, Chien-Chin, Xu, Min, Wright, Less, Shojanazeri, Hamid, Ott, Myle, Shleifer, Sam, Desmaison, Alban, Balioglu, Can, Damania, Pritam, Nguyen, Bernard, Chauhan, Geeta, Hao, Yuchen, Mathews, Ajit, Li, Shen
It is widely acknowledged that large models have the potential to deliver superior performance across a broad range of domains. Despite the remarkable progress made in the field of machine learning systems research, which has enabled the development and exploration of large models, such abilities remain confined to a small group of advanced users and industry leaders, resulting in an implicit technical barrier for the wider community to access and leverage these technologies. In this paper, we introduce PyTorch Fully Sharded Data Parallel (FSDP) as an industry-grade solution for large model training. FSDP has been closely co-designed with several key PyTorch core components including Tensor implementation, dispatcher system, and CUDA memory caching allocator, to provide non-intrusive user experiences and high training efficiency. Additionally, FSDP natively incorporates a range of techniques and settings to optimize resource utilization across a variety of hardware configurations. The experimental results demonstrate that FSDP is capable of achieving comparable performance to Distributed Data Parallel while providing support for significantly larger models with near-linear scalability in terms of TFLOPS.
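Although the abstract contains no code, FSDP's user-facing API ships with PyTorch itself, so a minimal usage sketch can be grounded in the public interface; the model layout, sizes, and hyperparameters below are placeholders, and a real run would be launched with torchrun so that the process-group environment variables are set.

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")                 # torchrun provides RANK/WORLD_SIZE
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Sequential(                    # placeholder model
    torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 1024),
).cuda()

# Wrapping shards parameters, gradients, and optimizer state across ranks,
# which is what allows models larger than a single GPU's memory to be trained.
model = FSDP(model)
optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(8, 1024, device="cuda")).sum()
loss.backward()
optim.step()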
Multimodal Hyperspectral Image Classification via Interconnected Fusion
Huo, Lu, Xia, Jiahao, Zhang, Leijie, Zhang, Haimin, Xu, Min
Existing multiple-modality fusion methods, such as concatenation, summation, and encoder-decoder-based fusion, have recently been employed to combine the modality characteristics of Hyperspectral Image (HSI) and Light Detection And Ranging (LiDAR) data. However, these methods consider the relationship between HSI and LiDAR signals from limited perspectives. More specifically, they overlook the contextual information across the HSI and LiDAR modalities and the intra-modality characteristics of LiDAR. In this paper, we provide a new insight into feature fusion to comprehensively explore the relationships across the HSI and LiDAR modalities. An Interconnected Fusion (IF) framework is proposed. First, the center patch of the HSI input is extracted and replicated to the size of the HSI input. Then, nine different perspectives in the fusion matrix are generated by calculating self-attention and cross-attention among the replicated center patch, the HSI input, and the corresponding LiDAR input. In this way, the intra- and inter-modality characteristics can be fully exploited, and contextual information is considered in both an intra-modality and an inter-modality manner. These nine interrelated elements of the fusion matrix complement each other and eliminate biases, yielding an accurate multi-modality representation for classification. Extensive experiments have been conducted on three widely used datasets: Trento, MUUFL, and Houston. The IF framework achieves state-of-the-art results on these datasets compared to existing approaches.
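As a rough illustration of the fusion-matrix idea (a simplified sketch, not the authors' implementation), the nine perspectives can be produced by attending every stream to every other stream; the token counts and embedding sizes here are arbitrary, and the center patch is approximated as a replicated center token.

import torch
import torch.nn.functional as F

def attend(q, kv, dim=64):
    # Scaled dot-product attention; q, kv: (batch, tokens, dim).
    scores = q @ kv.transpose(-2, -1) / dim ** 0.5
    return F.softmax(scores, dim=-1) @ kv

B, N, D = 2, 49, 64                   # batch, tokens per patch, embed dim
hsi = torch.randn(B, N, D)            # embedded HSI patch tokens
lidar = torch.randn(B, N, D)          # embedded LiDAR patch tokens
center = hsi[:, N // 2 : N // 2 + 1, :].expand(B, N, D)  # replicated center

streams = {"center": center, "hsi": hsi, "lidar": lidar}
# Nine perspectives: self-attention on the diagonal, cross-attention elsewhere.
fusion_matrix = {(qn, kn): attend(q, k) for qn, q in streams.items()
                                        for kn, k in streams.items()}
fused = torch.cat([v.mean(dim=1) for v in fusion_matrix.values()], dim=-1)
print(fused.shape)  # (B, 9 * D) multi-modality representation for a classifier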
CoNIC Challenge: Pushing the Frontiers of Nuclear Detection, Segmentation, Classification and Counting
Graham, Simon, Vu, Quoc Dang, Jahanifar, Mostafa, Weigert, Martin, Schmidt, Uwe, Zhang, Wenhua, Zhang, Jun, Yang, Sen, Xiang, Jinxi, Wang, Xiyue, Rumberger, Josef Lorenz, Baumann, Elias, Hirsch, Peter, Liu, Lihao, Hong, Chenyang, Aviles-Rivero, Angelica I., Jain, Ayushi, Ahn, Heeyoung, Hong, Yiyu, Azzuni, Hussam, Xu, Min, Yaqub, Mohammad, Blache, Marie-Claire, Piégu, Benoît, Vernay, Bertrand, Scherr, Tim, Böhland, Moritz, Löffler, Katharina, Li, Jiachen, Ying, Weiqin, Wang, Chixin, Kainmueller, Dagmar, Schönlieb, Carola-Bibiane, Liu, Shuolin, Talsania, Dhairya, Meda, Yughender, Mishra, Prakash, Ridzuan, Muhammad, Neumann, Oliver, Schilling, Marcel P., Reischl, Markus, Mikut, Ralf, Huang, Banban, Chien, Hsiang-Chin, Wang, Ching-Ping, Lee, Chia-Yen, Lin, Hong-Kun, Liu, Zaiyi, Pan, Xipeng, Han, Chu, Cheng, Jijun, Dawood, Muhammad, Deshpande, Srijay, Bashir, Raja Muhammad Saad, Shephard, Adam, Costa, Pedro, Nunes, João D., Campilho, Aurélio, Cardoso, Jaime S., S, Hrishikesh P, Puthussery, Densen, G, Devika R, C, Jiji V, Zhang, Ye, Fang, Zijie, Lin, Zhifan, Zhang, Yongbing, Lin, Chunhui, Zhang, Liukun, Mao, Lijian, Wu, Min, Vo, Vi Thi-Tuong, Kim, Soo-Hyung, Lee, Taebum, Kondo, Satoshi, Kasai, Satoshi, Dumbhare, Pranay, Phuse, Vedant, Dubey, Yash, Jamthikar, Ankush, Vuong, Trinh Thi Le, Kwak, Jin Tae, Ziaei, Dorsa, Jung, Hyun, Miao, Tianyi, Snead, David, Raza, Shan E Ahmed, Minhas, Fayyaz, Rajpoot, Nasir M.
Nuclear detection, segmentation and morphometric profiling are essential in helping us further understand the relationship between histology and patient outcome. To drive innovation in this area, we set up a community-wide challenge using the largest available dataset of its kind to assess nuclear segmentation and cellular composition. Our challenge, named CoNIC, stimulated the development of reproducible algorithms for cellular recognition with real-time result inspection on public leaderboards. We conducted an extensive post-challenge analysis based on the top-performing models using 1,658 whole-slide images of colon tissue. With around 700 million detected nuclei per model, associated features were used for dysplasia grading and survival analysis, where we demonstrated that the challenge's improvement over the previous state-of-the-art led to significant boosts in downstream performance. Our findings also suggest that eosinophils and neutrophils play an important role in the tumour microenvironment. We release challenge models and WSI-level results to foster the development of further methods for biomarker discovery.
BenchDirect: A Directed Language Model for Compiler Benchmarks
Tsimpourlas, Foivos, Petoumenos, Pavlos, Xu, Min, Cummins, Chris, Hazelwood, Kim, Rajan, Ajitha, Leather, Hugh
The exponential increase in hardware-software complexity has made it impossible for compiler engineers to find the right optimization heuristics manually. Predictive models have been shown to find near-optimal heuristics with little human effort, but they are limited by a severe lack of diverse benchmarks to train on. Generative AI has been used by researchers to synthesize benchmarks into existing datasets. However, the synthetic programs are short, exceedingly simple, and lacking in feature diversity. We develop BenchPress, the first ML compiler benchmark generator that can be directed within source-code feature representations. BenchPress synthesizes executable functions by infilling code conditioned on the program's left and right context. BenchPress uses active learning to introduce new benchmarks with unseen features into the dataset of Grewe et al.'s CPU vs. GPU heuristic, improving its acquired performance by 50%. BenchPress targets features that have been impossible for other synthesizers to reach. In 3 feature spaces, we outperform human-written code from GitHub, CLgen, CLSmith and the SRCIROR mutator in targeting the features of Rodinia benchmarks. BenchPress steers generation with beam search over a feature-agnostic language model. We improve on this with BenchDirect, which utilizes a directed LM that infills programs by jointly observing the source code context and the targeted compiler features. BenchDirect achieves up to 36% better accuracy in targeting the features of Rodinia benchmarks, is 1.8x more likely to give an exact match, and speeds up execution time by up to 72% compared to BenchPress. Both our models produce code that is difficult to distinguish from human-written code. We conduct a Turing test which shows that our models' synthetic benchmarks are labelled as 'human-written' as often as human-written code from GitHub.
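The feature-directed steering can be sketched in a few lines; the expansion and feature-extraction helpers below are hypothetical stand-ins, not the BenchPress or BenchDirect API, and only illustrate how beam search keeps the candidates closest to a target feature vector.

import numpy as np

def feature_distance(candidate_feats, target_feats):
    # Euclidean distance in a compiler feature space (e.g. Grewe et al. features).
    return np.linalg.norm(np.asarray(candidate_feats) - np.asarray(target_feats))

def beam_step(beams, target_feats, expand, extract_features, width=8):
    # `expand` grows each partial program with LM samples (hypothetical helper);
    # `extract_features` maps a program to its compiler feature vector.
    # Keeping the `width` candidates nearest the target is what steers generation.
    candidates = [c for b in beams for c in expand(b)]
    candidates.sort(key=lambda c: feature_distance(extract_features(c), target_feats))
    return candidates[:width]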
Dataset Pruning: Reducing Training Data by Examining Generalization Influence
Yang, Shuo, Xie, Zeke, Peng, Hanyu, Xu, Min, Sun, Mingming, Li, Ping
The great success of deep learning heavily relies on increasingly larger training data, which comes at the price of huge computational and infrastructural costs. This raises crucial questions: do all training data contribute to the model's performance? How much does each individual training sample or sub-training-set affect the model's generalization, and how can we construct the smallest subset of the entire training data as a proxy training set without significantly sacrificing the model's performance? To answer these, we propose dataset pruning, an optimization-based sample selection method that can (1) examine the influence of removing a particular set of training samples on the model's generalization ability with a theoretical guarantee, and (2) construct the smallest subset of training data that yields a strictly constrained generalization gap. The empirically observed generalization gap of dataset pruning is substantially consistent with our theoretical expectations. Furthermore, the proposed method prunes 40% of the training examples on the CIFAR-10 dataset, halving the convergence time with only a 1.3% decrease in test accuracy, which is superior to previous score-based sample selection methods.
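For intuition, a simplified score-based selection routine is sketched below; it is not the authors' optimization-based procedure with generalization guarantees, only an illustrative baseline of the same flavor, ranking samples by how well their loss gradient aligns with the average update.

import torch

def influence_scores(model, loss_fn, xs, ys):
    # Per-sample gradients of the loss, flattened across all parameters.
    grads = []
    for x, y in zip(xs, ys):
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
    g = torch.stack(grads)
    # Samples whose gradient aligns with the mean update influence training most.
    return g @ g.mean(dim=0)

def prune(xs, ys, scores, keep=0.6):
    # Retain the highest-scoring fraction as the proxy training set.
    idx = scores.argsort(descending=True)[: int(keep * len(xs))]
    return xs[idx], ys[idx]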
Objects in Semantic Topology
Yang, Shuo, Sun, Peize, Jiang, Yi, Xia, Xiaobo, Zhang, Ruiheng, Yuan, Zehuan, Wang, Changhu, Luo, Ping, Xu, Min
A more realistic object detection paradigm, Open-World Object Detection, has attracted increasing research interest in the community recently. A qualified open-world object detector can not only identify objects of known categories, but also discover unknown objects, and incrementally learn to categorize them when their annotations progressively arrive. Previous works rely on independent modules to recognize unknown categories and perform incremental learning, respectively. In this paper, we provide a unified perspective: Semantic Topology. During the life-long learning of an open-world object detector, all object instances from the same category are assigned to their corresponding pre-defined node in the semantic topology, including the 'unknown' category. This constraint builds up discriminative feature representations and consistent relationships among objects, thus enabling the detector to distinguish unknown objects from the known categories, as well as keeping the learned features of known objects undistorted when learning new categories incrementally. Extensive experiments demonstrate that semantic topology, whether randomly generated or derived from a well-trained language model, can outperform the current state-of-the-art open-world object detectors by a large margin, e.g., reducing the absolute open-set error from 7832 to 2546, exhibiting the inherent superiority of semantic topology for open-world object detection.
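The constraint can be illustrated with a minimal sketch (not the paper's implementation): instance features are pulled toward fixed, pre-defined class anchors, one of which represents 'unknown'; the random anchor initialization and dimensions here are arbitrary, and language-model embeddings could be substituted for the anchors.

import torch
import torch.nn.functional as F

num_classes, dim = 21, 256            # e.g. 20 known classes + one 'unknown' node
anchors = F.normalize(torch.randn(num_classes, dim), dim=-1)  # fixed, not learned

def topology_loss(features, labels):
    # Squared distance between each normalized instance feature and its anchor;
    # minimizing this keeps same-category instances tied to the same node.
    feats = F.normalize(features, dim=-1)
    return (feats - anchors[labels]).pow(2).sum(dim=-1).mean()

feats = torch.randn(32, dim, requires_grad=True)   # detector instance features
labels = torch.randint(0, num_classes, (32,))
loss = topology_loss(feats, labels)
loss.backward()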
Self-supervised Pretraining of Visual Features in the Wild
Goyal, Priya, Caron, Mathilde, Lefaudeux, Benjamin, Xu, Min, Wang, Pengchao, Pai, Vivek, Singh, Mannat, Liptchinsky, Vitaliy, Misra, Ishan, Joulin, Armand, Bojanowski, Piotr
Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods. These results have been achieved in a controlled environment, namely the highly curated ImageNet dataset. However, the premise of self-supervised learning is that it can learn from any random image and from any unbounded dataset. In this work, we explore whether self-supervision lives up to its expectation by training large models on random, uncurated images with no supervision. Our final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images with 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting. Interestingly, we also observe that self-supervised models are good few-shot learners, achieving 77.9% top-1 accuracy with access to only 10% of ImageNet. Code: https://github.com/facebookresearch/vissl
Experimental Analysis of Legendre Decomposition in Machine Learning
Pang, Jianye, Yi, Kai, Yin, Wanguang, Xu, Min
Matrix and tensor decomposition approximates a given matrix or tensor as the product of a number of smaller matrices or tensors. The main matrix decomposition techniques have been widely used in computer vision, recommendation systems, signal processing, and other fields. Currently, standard methods for third-order nonnegative tensor decomposition include CP decomposition[1] and Tucker decomposition[2]. It is well known that standard nonnegative Tucker and CP tensor decompositions involve non-convex optimization, so global convergence is not guaranteed. One direction is to apply additional assumptions on the data, such as a bounded variance, to transform the non-convex optimization problem into a convex one[3, 4]. Legendre decomposition[5] is a new nonnegative tensor decomposition method proposed by Mahito Sugiyama et al. Compared with existing nonnegative tensor decomposition methods, the greatest contribution of Legendre decomposition lies in transforming the optimization problem onto a convex submanifold without additional assumptions, which ensures global convergence; gradient descent can then find the unique reconstructed tensor with minimum Kullback-Leibler (KL) divergence from the input tensor. In this paper, we analyze Legendre tensor decomposition in both theory and application. From the perspective of theory, we aim to analyze the properties of the dual parameters and the dually flat manifold introduced in Legendre tensor decomposition.
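For intuition, a toy numerical sketch of the convex formulation follows; the rank-one (row-plus-column) log-space basis is an illustrative choice of submanifold, not the paper's basis construction, but the KL objective is convex in these parameters so gradient descent reaches the unique minimizer.

import torch

P = torch.rand(8, 8); P /= P.sum()            # normalized nonnegative input tensor
row = torch.nn.Parameter(torch.zeros(8, 1))   # log-space parameters on a
col = torch.nn.Parameter(torch.zeros(1, 8))   # rank-one (row + column) basis
opt = torch.optim.SGD([row, col], lr=0.5)

for _ in range(500):
    logits = row + col                        # theta restricted to the submanifold
    Q = torch.softmax(logits.flatten(), 0).reshape(8, 8)  # normalized reconstruction
    kl = (P * (P.clamp_min(1e-12).log() - Q.log())).sum() # KL(P || Q)
    opt.zero_grad(); kl.backward(); opt.step()

print(float(kl))  # the unique minimum KL achievable on this convex model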
Multi-task Learning for Macromolecule Classification, Segmentation and Coarse Structural Recovery in Cryo-Tomography
Liu, Chang, Zeng, Xiangrui, Wang, Kaiwen, Guo, Qiang, Xu, Min
Cellular Electron Cryo-Tomography (CECT) is a powerful 3D imaging tool for studying the native structure and organization of macromolecules inside single cells. For systematic recognition and recovery of macromolecular structures captured by CECT, methods for several important tasks such as subtomogram classification and semantic segmentation have been developed. However, the recognition and recovery of macromolecular structures are still very difficult due to high molecular structural diversity, the crowded molecular environment, and the imaging limitations of CECT. In this paper, we propose a novel multi-task 3D convolutional neural network model for simultaneous classification, segmentation, and coarse structural recovery of macromolecules of interest in subtomograms. In our model, the learned image features of one task are shared and thereby mutually reinforce the learning of other tasks. Evaluated on realistically simulated and experimental CECT data, our multi-task learning model outperformed all single-task learning methods for classification and segmentation. In addition, we demonstrate that our model can generalize to discover, segment and recover novel structures that do not exist in the training data.
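The architecture can be sketched as one shared 3D convolutional encoder feeding three task heads; the layer sizes and head designs below are illustrative, not the authors' exact network.

import torch
import torch.nn as nn

class MultiTask3D(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(                 # features shared by all tasks
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.classify = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(),
                                      nn.Linear(32, num_classes))
        self.segment = nn.Conv3d(32, 2, 1)            # voxel-wise fg/bg logits
        self.recover = nn.Conv3d(32, 1, 1)            # coarse structural density map

    def forward(self, x):
        h = self.encoder(x)
        return self.classify(h), self.segment(h), self.recover(h)

net = MultiTask3D()
cls, seg, rec = net(torch.randn(2, 1, 32, 32, 32))   # a batch of 32^3 subtomograms
# Training would sum one loss per head, so the shared encoder features are
# shaped by, and mutually reinforce, all three tasks.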
Image-derived generative modeling of pseudo-macromolecular structures - towards the statistical assessment of Electron CryoTomography template matching
Wang, Kai Wen, Zeng, Xiangrui, Liang, Xiaodan, Huo, Zhiguang, Xing, Eric P., Xu, Min
Cellular Electron CryoTomography (CECT) is a 3D imaging technique that captures information about the structure and spatial organization of macromolecular complexes within single cells, in near-native state and at sub-molecular resolution. Although template matching is often used to locate macromolecules in a CECT image, it is insufficient as it only measures relative structural similarity. Therefore, it is preferable to assess the statistical credibility of the decision through hypothesis testing, which requires many templates derived from a diverse population of macromolecular structures. Due to the very limited number of known structures, we need a generative model to efficiently and reliably sample pseudo-structures from the complex distribution of macromolecular structures. To address this challenge, we propose a novel image-derived approach for performing hypothesis testing for template matching by constructing generative models using the generative adversarial network. Finally, we conducted hypothesis testing experiments for template matching on both simulated and experimental subtomograms, allowing us to determine the identity of subtomograms with high statistical credibility while significantly reducing false positives.
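The testing procedure can be sketched as follows; the generator is a hypothetical stand-in for the trained GAN, and normalized cross-correlation stands in for the paper's template-matching score. A small empirical p-value lets us reject the null hypothesis that the observed match could arise from a generic pseudo-structure.

import numpy as np

def match_score(template, volume):
    # Normalized cross-correlation between two aligned 3D volumes.
    t = (template - template.mean()) / template.std()
    v = (volume - volume.mean()) / volume.std()
    return float((t * v).mean())

def template_matching_pvalue(template, subtomogram, generator, n_null=1000):
    # `generator()` samples one pseudo-macromolecular structure (hypothetical GAN).
    observed = match_score(template, subtomogram)
    # Null distribution: scores of the subtomogram against pseudo-structures.
    null = [match_score(generator(), subtomogram) for _ in range(n_null)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_null)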