InSight-R: A Framework for Risk-informed Human Failure Event Identification and Interface-Induced Risk Assessment Driven by AutoGraph
Xingyu Xiao, Jiejuan Tong, Peng Chen, Jun Sun, Zhe Sui, Jingang Liang, Hongru Zhao, Jun Zhao, Haitao Wang
arXiv.org Artificial Intelligence
Human reliability remains a critical concern in safety-critical domains such as nuclear power, where operational failures are often linked to human error. While conventional human reliability analysis (HRA) methods have been widely adopted, they rely heavily on expert judgment for identifying human failure events (HFEs) and assigning performance influencing factors (PIFs). This reliance introduces challenges related to reproducibility, subjectivity, and limited integration of interface-level data. In particular, current approaches lack the capacity to rigorously assess how human-machine interface design contributes to operator performance variability and error susceptibility. To address these limitations, this study proposes a framework for risk-informed human failure event identification and interface-induced risk assessment driven by AutoGraph (InSight-R). By linking empirical behavioral data to the interface-embedded knowledge graph (IE-KG) constructed by the automated graph-based execution framework (AutoGraph), the InSight-R framework enables automated HFE identification based on both error-prone and time-deviated operational paths. Furthermore, we discuss the relationship between designer-user conflicts and human error. This framework offers actionable insights for interface design optimization and contributes to the advancement of mechanism-driven HRA methodologies.

Keywords: Knowledge-Graph-Driven, Automated, Interface-Induced Risk, Human Error Identification

1 Introduction

Human error remains a leading contributor to failures in complex socio-technical systems such as nuclear power plants, aviation, and healthcare, where safety-critical operations depend on accurate and timely human decisions [1, 2]. Human reliability analysis (HRA) methods have been widely used to model operator behavior and assess the likelihood of human failure events (HFEs) [3].
However, prevailing HRA approaches are often constrained by their reliance on expert judgment, particularly in the identification of HFEs and the assignment of performance influencing factors (PIFs) [3, 4]. In traditional HRA frameworks such as the integrated human event analysis system for event and condition assessment (IDHEAS-ECA), HFEs are primarily determined through expert elicitation, a process that, while practical, suffers from limited reproducibility, insufficient transparency, and weak theoretical grounding [5].
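To make the contrast with expert elicitation concrete, the automated identification described in the abstract can be illustrated with a minimal sketch: flag a step on an operational path as an HFE candidate if the operator's action deviated from procedure (error-prone) or its duration far exceeded the nominal time (time-deviated). The step names, the `time_ratio` threshold, and the deviation rule below are hypothetical placeholders, not the authors' actual IE-KG implementation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    node: str            # interface element visited on the path
    duration: float      # observed time on this step (seconds)
    nominal: float       # expected time from the procedure (seconds)
    error: bool = False  # operator action deviated from procedure

def flag_hfe_candidates(path, time_ratio=2.0):
    """Return nodes that are error-prone or time-deviated.

    A step is flagged if the operator committed an explicit error,
    or if its observed duration exceeds `time_ratio` times nominal.
    """
    return [
        step.node
        for step in path
        if step.error or step.duration > time_ratio * step.nominal
    ]

# Hypothetical recorded path through an interface
path = [
    Step("open_valve_panel", 4.0, 5.0),
    Step("confirm_pressure", 18.0, 6.0),              # time-deviated
    Step("acknowledge_alarm", 3.0, 3.0, error=True),  # wrong action
]
print(flag_hfe_candidates(path))  # → ['confirm_pressure', 'acknowledge_alarm']
```

Because the rule is explicit and data-driven, the same behavioral log always yields the same candidate set, which is precisely the reproducibility property that expert elicitation lacks.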
Jul-2-2025