perceptual error
Risk-Driven Design of Perception Systems
Modern autonomous systems rely on perception modules to process complex sensor measurements into state estimates. These estimates are then passed to a controller, which uses them to make safety-critical decisions. It is therefore important that we design perception systems to minimize errors that reduce the overall safety of the system. We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system. We formulate a risk function to quantify the effect of a given perceptual error on overall safety, and show how we can use it to design safer perception systems by including a risk-dependent term in the loss function and generating training data in risk-sensitive regions. We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
- Health & Medicine > Therapeutic Area > Ophthalmology/Optometry (1.00)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.68)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > Massachusetts (0.04)
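The risk-dependent loss term described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the squared-error base loss, the per-sample risk weights, and the scaling factor `alpha` are all assumptions made for the example.

```python
import numpy as np

def risk_weighted_loss(y_true, y_pred, risk, alpha=1.0):
    """Per-sample squared error scaled by a state-dependent risk weight.

    risk: nonnegative scores, one per sample, quantifying how much a
    perceptual error in that state degrades closed-loop safety.
    alpha: illustrative weight on the risk-dependent term.
    """
    sq_err = (y_true - y_pred) ** 2
    # Base loss plus a risk-dependent term: identical estimation errors
    # cost more in risk-sensitive regions of the state space.
    return float(np.mean(sq_err * (1.0 + alpha * risk)))

# The same estimation error is penalized more on high-risk samples.
y_true = np.array([1.0, 1.0])
y_pred = np.array([0.5, 0.5])
low_risk_loss = risk_weighted_loss(y_true, y_pred, risk=np.array([0.0, 0.0]))
high_risk_loss = risk_weighted_loss(y_true, y_pred, risk=np.array([5.0, 5.0]))
```

Under this sketch, training gradients concentrate on states where perceptual errors most endanger the closed-loop system, which is the intent of the risk-dependent term.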
MAARTA: Multi-Agentic Adaptive Radiology Teaching Assistant
Awasthi, Akash, Chang, Brandon V., Vu, Anh M., Le, Ngan, Agrawal, Rishi, Deng, Zhigang, Wu, Carol, Van Nguyen, Hien
Radiology students often struggle to develop perceptual expertise due to limited time for expert mentorship, leading to errors in visual search patterns and diagnostic interpretation. These perceptual errors--such as missed fixations, brief dwell times, or misinterpretations--are not adequately addressed by existing AI systems, which focus on diagnostic accuracy but fail to explain how and why errors occur. To bridge this gap, we propose MAARTA (Multi-Agentic Adaptive Radiology Teaching Assistant), a multi-agent framework that analyzes gaze patterns and radiology reports to provide personalized feedback. Unlike single-agent models, MAARTA dynamically recruits agents based on error complexity, ensuring adaptive and efficient reasoning. By leveraging thought graphs to compare expert and student gaze behavior, the system identifies missed findings and assigns Perceptual Error Teacher (PET) agents to analyze discrepancies. Using Chain-of-Thought (CoT) prompting, MAARTA generates meaningful insights, helping students understand their errors and refine their diagnostic reasoning, ultimately enhancing AI-driven radiology education. An anonymous code and dataset link is provided in the supplementary material.
- North America > United States > Texas > Harris County > Houston (0.14)
- North America > United States > Arkansas > Washington County > Fayetteville (0.14)
- Instructional Material (0.93)
- Research Report (0.64)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
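The gaze-comparison step described above — flagging findings an expert fixated on but the student missed or only glanced at — can be sketched as a simple dwell-time comparison. The region names, dwell values, and the 0.3 s threshold are hypothetical; MAARTA's actual thought-graph comparison is richer than this.

```python
def missed_findings(expert_fixations, student_fixations, min_dwell=0.3):
    """Flag regions an expert dwelt on that the student skipped or
    only briefly fixated.

    Fixations map region name -> total dwell time in seconds.
    min_dwell (0.3 s here) is an illustrative threshold below which a
    fixation counts as a missed fixation or brief dwell.
    """
    missed = []
    for region in expert_fixations:
        student_dwell = student_fixations.get(region, 0.0)
        if student_dwell < min_dwell:
            missed.append(region)
    return missed

# Hypothetical gaze summaries for one chest radiograph.
expert = {"right_lower_lobe": 1.2, "cardiac_border": 0.8, "left_apex": 0.6}
student = {"cardiac_border": 0.9, "left_apex": 0.1}
# right_lower_lobe was never fixated; the left_apex dwell was too brief.
```

Each flagged region would then be handed to a Perceptual Error Teacher agent for discrepancy analysis.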
Beyond the First Read: AI-Assisted Perceptual Error Detection in Chest Radiography Accounting for Interobserver Variability
Vutukuri, Adhrith, Awasthi, Akash, Yang, David, Wu, Carol C., Van Nguyen, Hien
Chest radiography is widely used in diagnostic imaging. However, perceptual errors -- especially overlooked but visible abnormalities -- remain common and clinically significant. Current workflows and AI systems provide limited support for detecting such errors after interpretation and often lack meaningful human--AI collaboration. We introduce RADAR (Radiologist--AI Diagnostic Assistance and Review), a post-interpretation companion system. RADAR ingests finalized radiologist annotations and CXR images, then performs regional-level analysis to detect and refer potentially missed abnormal regions. The system supports a "second-look" workflow and offers suggested regions of interest (ROIs) rather than fixed labels to accommodate inter-observer variation. We evaluated RADAR on a simulated perceptual-error dataset derived from de-identified CXR cases, using F1 score and Intersection over Union (IoU) as primary metrics. RADAR achieved a recall of 0.78, precision of 0.44, and an F1 score of 0.56 in detecting missed abnormalities in the simulated perceptual-error dataset. Although precision is moderate, this reduces over-reliance on AI by encouraging radiologist oversight in human--AI collaboration. The median IoU was 0.78, with more than 90% of referrals exceeding 0.5 IoU, indicating accurate regional localization. RADAR effectively complements radiologist judgment, providing valuable post-read support for perceptual-error detection in CXR interpretation. Its flexible ROI suggestions and non-intrusive integration position it as a promising tool in real-world radiology workflows. To facilitate reproducibility and further evaluation, we release a fully open-source web implementation alongside a simulated error dataset. All code, data, demonstration videos, and the application are publicly available at https://github.com/avutukuri01/RADAR.
- North America > United States > Texas > Harris County > Houston (0.14)
- Europe > Belgium > Flanders (0.04)
- Asia > Vietnam > Hanoi > Hanoi (0.04)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
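As a sanity check on the metrics reported above, the standard IoU and F1 computations can be sketched as follows. The example boxes are hypothetical; the F1 value follows directly from the reported precision (0.44) and recall (0.78).

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Two hypothetical ROIs overlapping by half of one box's area.
overlap = iou((0, 0, 10, 10), (5, 0, 15, 10))   # 50 / 150
# The reported precision/recall pair reproduces the reported F1.
score = f1(precision=0.44, recall=0.78)         # rounds to 0.56
```

Note that 2(0.44)(0.78)/(0.44 + 0.78) ≈ 0.563, consistent with the F1 of 0.56 stated in the abstract.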
NavG: Risk-Aware Navigation in Crowded Environments Based on Reinforcement Learning with Guidance Points
Zhang, Qianyi, Luo, Wentao, Liu, Boyi, Zhang, Ziyang, Wang, Yaoyuan, Liu, Jingtai
Motion planning in navigation systems is highly susceptible to upstream perceptual errors, particularly in human detection and tracking. To mitigate this issue, the concept of guidance points--a novel directional cue within a reinforcement learning-based framework--is introduced. A structured method for identifying guidance points is developed, consisting of obstacle boundary extraction, potential guidance point detection, and redundancy elimination. To integrate guidance points into the navigation pipeline, a perception-to-planning mapping strategy is proposed, unifying guidance points with other perceptual inputs and enabling the RL agent to effectively leverage the complementary relationships among raw laser data, human detection and tracking, and guidance points. Qualitative and quantitative simulations demonstrate that the proposed approach achieves the highest success rate and near-optimal travel times, greatly improving both safety and efficiency. Furthermore, real-world experiments in dynamic corridors and lobbies validate the robot's ability to confidently navigate around obstacles and robustly avoid pedestrians. With the continuous advancement of robotic technologies, a widely accepted navigation framework has emerged, encompassing perception, planning, control, and localization [1], [2]. As a downstream component, the planning module processes outputs from the perception module, such as segmented objects and detected pedestrians. In particular, inaccuracies in human detection and tracking--including misestimating a pedestrian's velocity, failing to detect a pedestrian, or misclassifying a non-pedestrian as a pedestrian, as illustrated in Fig. 1--can significantly compromise navigation safety and efficiency.
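The redundancy-elimination stage mentioned in the guidance-point pipeline above can be sketched as a greedy distance filter over candidate points. The greedy rule, the 0.5 m minimum separation, and the example coordinates are illustrative assumptions, not the paper's exact criterion.

```python
import math

def eliminate_redundant(candidates, min_sep=0.5):
    """Greedy redundancy elimination over candidate guidance points.

    Keep a candidate (x, y) only if it lies at least min_sep metres
    from every point already kept; min_sep = 0.5 m is an assumed value.
    """
    kept = []
    for point in candidates:
        if all(math.dist(point, q) >= min_sep for q in kept):
            kept.append(point)
    return kept

# Clustered candidates near two obstacle boundaries (hypothetical).
candidates = [(0.0, 0.0), (0.1, 0.0), (1.0, 0.0), (1.2, 0.1)]
guidance_points = eliminate_redundant(candidates)
```

Thinning the candidate set this way keeps the directional cues passed to the RL agent sparse and non-overlapping, which is the stated purpose of the elimination step.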
Enhancing Radiological Diagnosis: A Collaborative Approach Integrating AI and Human Expertise for Visual Miss Correction
Awasthi, Akash, Le, Ngan, Deng, Zhigang, Wu, Carol C., Van Nguyen, Hien
Human-AI collaboration to identify and correct perceptual errors in chest radiographs has not been previously explored. This study aimed to develop a collaborative AI system, CoRaX, which integrates eye gaze data and radiology reports to enhance diagnostic accuracy in chest radiology by pinpointing perceptual errors and refining the decision-making process. Using the public datasets REFLACX and EGD-CXR, the study retrospectively developed CoRaX, employing a large multimodal model to analyze image embeddings, eye gaze data, and radiology reports. The system's effectiveness was evaluated based on its referral-making process, the quality of referrals, and performance in collaborative diagnostic settings. CoRaX was tested on a simulated error dataset of 271 samples in which 28% (93 of 332) of abnormalities were missed. The system corrected 21% (71 of 332) of these errors, leaving 7% (22 of 332) unresolved. The Referral-Usefulness score, indicating the accuracy of predicted regions for all true referrals, was 0.63 (95% CI 0.59, 0.68). The Total-Usefulness score, reflecting the diagnostic accuracy of CoRaX's interactions with radiologists, showed that 84% (237 of 280) of these interactions had a score above 0.40. In conclusion, CoRaX efficiently collaborates with radiologists to address perceptual errors across various abnormalities, with potential applications in the education and training of novice radiologists.
- North America > United States > Texas > Harris County > Houston (0.14)
- North America > United States > Arkansas (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)