Goto

Collaborating Authors

 Fan, Shiyu


Flat'n'Fold: A Diverse Multi-Modal Dataset for Garment Perception and Manipulation

arXiv.org Artificial Intelligence

Abstract-- We present Flat'n'Fold, a novel large-scale dataset for garment manipulation that addresses critical gaps in existing datasets. We quantify the dataset's diversity and complexity compared to existing benchmarks and show that it features natural and diverse manipulations in real-world human and robot demonstrations, in terms of both visual and action information. To showcase Flat'n'Fold's utility, we establish new benchmarks for grasping point prediction. This underscores Flat'n'Fold's potential to drive advances in robotic perception and manipulation of deformable objects. Manipulating garments remains a significant challenge in robotics: tasks such as flattening and folding require understanding the vast space of configurations that garments can adopt [1], [2], and planning complex sequences of actions. The dataset includes human-controlled robot demonstrations, in which an expert human operator controls a robot to execute similar garment manipulation tasks, aiming to replicate natural, human-like approaches within the robot's operational limitations.
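The abstract mentions a benchmark for grasping point prediction without detailing its evaluation. Below is a minimal, hypothetical sketch of how such a benchmark might be scored, using the mean Euclidean error between predicted and ground-truth 3D grasp points; the function name and the metric itself are illustrative assumptions, not Flat'n'Fold's actual protocol.

```python
# Hypothetical scoring for a grasping-point prediction benchmark:
# mean Euclidean distance between predicted and ground-truth 3D grasp points.
import numpy as np

def mean_grasp_point_error(pred_points, gt_points):
    """pred_points, gt_points: (N, 3) arrays of 3D grasp locations in metres."""
    pred_points = np.asarray(pred_points, dtype=float)
    gt_points = np.asarray(gt_points, dtype=float)
    return float(np.linalg.norm(pred_points - gt_points, axis=1).mean())

# Example: two predicted grasp points vs. ground truth.
pred = [[0.10, 0.02, 0.30], [0.40, -0.05, 0.28]]
gt = [[0.12, 0.00, 0.31], [0.38, -0.04, 0.30]]
print(mean_grasp_point_error(pred, gt))  # average error in metres
```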


Missing-modality Enabled Multi-modal Fusion Architecture for Medical Data

arXiv.org Artificial Intelligence

Fusing multi-modal data can improve the performance of deep learning models. However, missing modalities are common in medical data due to patient-specific circumstances, which degrades the performance of multi-modal models in practice. It is therefore critical to adapt such models to missing modalities. This study aimed to develop an efficient multi-modal fusion architecture for medical data that is robust to missing modalities and further improves performance on disease diagnosis. Chest X-ray radiographs (image modality), radiology reports (text modality), and structured value data (tabular modality) were fused in this study. Each modality pair was fused with a Transformer-based bi-modal fusion module, and the three bi-modal fusion modules were then combined into a tri-modal fusion framework. Additionally, multivariate loss functions were introduced into the training process to improve the model's robustness to missing modalities at inference time. Finally, we designed comparison and ablation experiments to validate the effectiveness of the fusion, the robustness to missing modalities, and the contribution of each key component. Experiments were conducted on MIMIC-IV and MIMIC-CXR with the 14-label disease diagnosis task. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) were used to evaluate model performance. The experimental results demonstrated that the proposed multi-modal fusion architecture effectively fused the three modalities and showed strong robustness to missing modalities. The method can hopefully be scaled to more modalities to enhance its clinical practicality.
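The abstract describes pairwise Transformer-based fusion modules combined into a tri-modal framework. The sketch below illustrates that general structure in PyTorch, assuming pre-extracted 256-d feature vectors per modality; the class names, dimensions, pooling, and placeholder handling of missing modalities are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of bi-modal Transformer fusion modules combined into a
# tri-modal classifier (image, text, tabular), as described in the abstract.
import torch
import torch.nn as nn

class BiModalFusion(nn.Module):
    """Fuses two modality embeddings with a small Transformer encoder."""
    def __init__(self, dim=256, heads=4, layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, a, b):
        # a, b: (batch, dim) embeddings of two modalities
        tokens = torch.stack([a, b], dim=1)   # (batch, 2, dim)
        fused = self.encoder(tokens)          # (batch, 2, dim)
        return fused.mean(dim=1)              # (batch, dim)

class TriModalFusion(nn.Module):
    """Combines three bi-modal fusion branches (image-text, image-tabular,
    text-tabular) into a single multi-label diagnosis head."""
    def __init__(self, dim=256, num_labels=14):
        super().__init__()
        self.img_txt = BiModalFusion(dim)
        self.img_tab = BiModalFusion(dim)
        self.txt_tab = BiModalFusion(dim)
        self.head = nn.Linear(3 * dim, num_labels)

    def forward(self, img, txt, tab):
        # A missing modality could be replaced by a zero or learned placeholder
        # embedding so that every bi-modal branch still receives two inputs.
        f1 = self.img_txt(img, txt)
        f2 = self.img_tab(img, tab)
        f3 = self.txt_tab(txt, tab)
        return self.head(torch.cat([f1, f2, f3], dim=-1))

# Example: a batch of 8 samples, 14-label output as in the MIMIC experiments.
model = TriModalFusion()
logits = model(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))
print(logits.shape)  # torch.Size([8, 14])
```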