Developing Responsible Chatbots for Financial Services: A Pattern-Oriented Responsible AI Engineering Approach

Lu, Qinghua, Luo, Yuxiu, Zhu, Liming, Tang, Mingjian, Xu, Xiwei, Whittle, Jon

arXiv.org Artificial Intelligence 

ChatGPT has gained huge attention and discussion worldwide, with responsible AI being a crucial topic of discussion. One key question is how we can ensure that AI systems, such as ChatGPT, are developed and adopted responsibly. Responsible AI is the practice of developing, deploying, and maintaining AI systems in a way that benefits humans, society, and the environment, while minimising the risk of negative consequences. To address the challenge of responsible AI, many AI ethics principles have been released recently by governments, organisations, and enterprises [1]. A principle-based approach provides technology-neutral and context-independent guidance while allowing context-specific interpretations for implementing responsible AI. However, those principles are too abstract and high-level for practitioners to use in practice. For example, it is a very challenging and complex task to operationalise the human-centered value principle, in terms of how it can be designed for, implemented, and monitored throughout the entire lifecycle of AI systems. In addition, existing work mainly focuses on algorithm-level solutions for a subset of mathematics-amenable AI ethics principles (such as privacy and fairness). However, responsible AI issues can occur at any stage of the development lifecycle, cutting across various AI and non-AI components of systems beyond AI algorithms and models.
