Looking Forward: Challenges and Opportunities in Agentic AI Reliability

Liudong Xing, Janet Lin

arXiv.org Artificial Intelligence 

The AI conversation can be traced back at least to Alan Turing's milestone paper published in 1950, which considered the fundamental question "Can machines think?" [1]. In 1956, AI received its name and mission as a scientific field at the first AI conference, held at Dartmouth College [2]. Following its foundational period in the 1950s–1970s, AI has evolved from early rule-based systems (1970s–1990s), through classical machine learning and deep learning with neural networks (1990s–2020s), to today's generative and agentic AI systems (since the 2010s). Correspondingly, the reliability concept and its concerns, a vital requirement of these systems, are also evolving, particularly in the interpretation of "required function" (see Table 1 in Chapter 10), based on the definition in standards such as ISO 8402: "The ability of an item to perform a required function, under given environmental and operational conditions and for a stated period of time." While a conventional AI system is concerned with providing stable and accurate classifications, predictions, or optimizations, a reliable generative AI system focuses on producing outputs that are trustworthy, consistent, safe, and contextually appropriate [3]. Building on both, a reliable agentic AI system must additionally perform the functions of reasoning, goal alignment, planning, and safe adaptation and interaction in dynamic and collaborative multi-agent contexts. This expansion of the reliability concept has introduced new challenges and research opportunities, as exemplified in Figure 1. In the following sections, we shed light on these challenges and opportunities in building reliable AI systems, particularly agentic AI systems.