Bridging Trustworthiness and Open-World Learning: An Exploratory Neural Approach for Enhancing Interpretability, Generalization, and Robustness

Du, Shide, Fang, Zihan, Lan, Shiyang, Tan, Yanchao, Günther, Manuel, Wang, Shiping, Guo, Wenzhong

arXiv.org Machine Learning 

As researchers strive to narrow the gap between machine intelligence and humans through the development of artificial intelligence technologies, it is imperative that we recognize the critical importance of trustworthiness in the open world, which has become ubiquitous in all aspects of daily life. However, several challenges may create a crisis of trust in current artificial intelligence systems and need to be bridged: 1) insufficient explanation of predictive results; 2) inadequate generalization of learning models; 3) poor adaptability to uncertain environments. Consequently, we explore a neural program to bridge trustworthiness and open-world learning, extending from single-modal to multi-modal scenarios.

Contemporary artificial intelligence (AI) continues to furnish benefits to real society from economic and environmental perspectives, among others [12, 33]. As AI gradually penetrates into high-risk fields such as healthcare, finance and medicine, which are closely related to human attributes, there is a growing consensus that people urgently expect these AI solutions to be trustworthy [8, 16]. For instance, lenders expect the system to provide credible explanations for rejecting their applications; engineers wish to develop common system interfaces to adapt to wider environments; and businesspeople desire that the system can still operate effectively under various complex conditions, among other expectations.
