Control in Hybrid Chatbots

Rüdel, Thomas, Leidner, Jochen L.

arXiv.org Artificial Intelligence 

Chatbots and AI agents have become widespread in customer service and in applications like knowledge management, recommender systems, and help desks. Businesses increasingly want to benefit from the capabilities of large language models like OpenAI's GPT-4 and from applications powered by such models. Nevertheless, the adoption of generative AI by companies has been seriously slowed by concerns about data protection and by the fact that generative AI is known to sometimes make things up, producing so-called "hallucinations". Even if an answer contains no hallucinated information, it may still suffer from incompleteness or from misleadingly connected pieces of information. Companies that want to deploy AI agents in non-trivial circumstances therefore need to be able to control them, in particular in customer-facing applications. It would be very unfortunate if an agent misinformed customers about the company's products or prices, and it should also adhere closely to the intended marketing messages. While there is much discussion of "safe AI", "reliable AI", "trustworthy AI", "explainable AI" (XAI), etc., the question of "controllable AI" is rarely discussed. Yet, as stated above, it is very often crucial that enterprises not merely rely on an AI system but actually be able to control it (more precisely, to determine at design time how the system will behave at runtime).