OpenViewer: Openness-Aware Multi-View Learning
Du, Shide, Fang, Zihan, Tan, Yanchao, Wang, Changwei, Wang, Shiping, Guo, Wenzhong
Multi-view learning methods leverage multiple data sources to enhance perception by mining correlations across views, typically relying on predefined categories. However, deploying these models in real-world scenarios presents two primary openness challenges: 1) Lack of Interpretability: the integration mechanisms of multi-view data in existing black-box models remain poorly explained; 2) Insufficient Generalization: most models are not adapted to multi-view scenarios involving unknown categories. To address these challenges, we propose OpenViewer, an openness-aware multi-view learning framework with theoretical support. This framework begins with a Pseudo-Unknown Sample Generation Mechanism to efficiently simulate open multi-view environments and preemptively adapt to potential unknown samples. Subsequently, we introduce an Expression-Enhanced Deep Unfolding Network to promote interpretability by systematically constructing functional prior-mapping modules, providing a more transparent integration mechanism for multi-view data. Additionally, we establish a Perception-Augmented Open-Set Training Regime to enhance generalization by boosting confidences for known categories and suppressing inappropriate confidences for unknown ones. Experimental results demonstrate that OpenViewer effectively addresses openness challenges while ensuring recognition performance for both known and unknown samples. The code is released at https://github.com/dushide/OpenViewer.
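The abstract does not spell out the loss used by the Perception-Augmented Open-Set Training Regime, but the stated goal of "boosting confidences for known categories and suppressing inappropriate confidences for unknown ones" can be illustrated with a common open-set recipe: cross-entropy on labeled known samples plus a uniformity penalty that pushes pseudo-unknown samples toward maximal uncertainty. The function names and the exact penalty below are illustrative assumptions, not OpenViewer's implementation:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def open_set_loss(logits_known, labels, logits_unknown):
    """Illustrative open-set objective (not OpenViewer's actual loss):
    cross-entropy keeps the model confident on known categories, while a
    KL-to-uniform penalty suppresses confidence on pseudo-unknown samples."""
    p_known = softmax(logits_known)
    n, c = p_known.shape
    # Confidence boosting for known categories: standard cross-entropy.
    ce = -np.log(p_known[np.arange(n), labels] + 1e-12).mean()
    # Confidence suppression for pseudo-unknowns:
    # KL(p || uniform) = log C - H(p); minimizing it maximizes entropy.
    p_unknown = softmax(logits_unknown)
    kl_uniform = (np.log(c)
                  + (p_unknown * np.log(p_unknown + 1e-12)).sum(axis=1)).mean()
    return ce + kl_uniform
```

Under this sketch, a model that is confidently correct on known samples and maximally uncertain on pseudo-unknowns attains a near-zero loss, while confident predictions on pseudo-unknown samples are penalized.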
Bridging Trustworthiness and Open-World Learning: An Exploratory Neural Approach for Enhancing Interpretability, Generalization, and Robustness
Du, Shide, Fang, Zihan, Lan, Shiyang, Tan, Yanchao, Günther, Manuel, Wang, Shiping, Guo, Wenzhong
As researchers strive to narrow the gap between machine intelligence and humans through the development of artificial intelligence technologies, it is imperative that we recognize the critical importance of trustworthiness in the open world, which has become ubiquitous in all aspects of daily life for everyone. However, several challenges may create a crisis of trust in current artificial intelligence systems and need to be bridged: 1) Insufficient explanation of predictive results; 2) Inadequate generalization of learning models; 3) Poor adaptability to uncertain environments. Consequently, we explore a neural program to bridge trustworthiness and open-world learning, extending from single-modal to multi-modal scenarios.

Contemporary artificial intelligence (AI) continues to furnish benefits to real society from economic and environmental perspectives, among others [12, 33]. As AI gradually penetrates high-risk fields such as healthcare, finance and medicine, which are closely related to human attributes, there is a growing consensus that people urgently expect these AI solutions to be trustworthy [8, 16]. For instance, lenders expect the system to provide credible explanations for rejecting their applications; engineers wish to develop common system interfaces that adapt to wider environments; businesspeople desire that the system still operate effectively under various complex conditions, among other expectations.