HH4AI: A Methodological Framework for AI Human Rights Impact Assessment under the EU AI Act

Ceravolo, Paolo, Damiani, Ernesto, D'Amico, Maria Elisa, Erb, Bianca de Teffe, Favaro, Simone, Fiano, Nannerel, Gambatesa, Paolo, La Porta, Simone, Maghool, Samira, Mauri, Lara, Panigada, Niccolo, Vaquer, Lorenzo Maria Ratto, Tamborini, Marta A.

arXiv.org Artificial Intelligence 

This paper introduces the HH4AI methodology, a structured approach to assessing the impact of AI systems on human rights, focusing on compliance with the EU AI Act and addressing technical, ethical, and regulatory challenges. The paper highlights AI's transformative nature, driven by autonomy, data, and goal-oriented design, and how the EU AI Act promotes transparency, accountability, and safety. A key challenge is defining and assessing "high-risk" AI systems across industries, a task complicated by the lack of universally accepted standards and by AI's rapid evolution. To address these challenges, the paper examines the relevance of ISO/IEC and IEEE standards, focusing on risk management, data quality, bias mitigation, and governance. It proposes a Fundamental Rights Impact Assessment (FRIA) methodology, a gate-based framework designed to isolate and assess risks through four phases: an AI system overview, a human rights checklist, an impact assessment, and a final output phase. A filtering mechanism tailors the assessment to the system's characteristics, targeting areas such as accountability, AI literacy, data governance, and transparency. The paper illustrates the FRIA methodology through a fictional case study of an automated healthcare triage service. The structured approach enables systematic filtering, comprehensive risk assessment, and mitigation planning, prioritizing critical risks and providing clear remediation strategies, thereby promoting alignment with human rights principles and enhancing regulatory compliance.
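To make the gate-based flow concrete, the Python sketch below models the four phases and the filtering mechanism described in the abstract. It is a minimal illustration under stated assumptions, not the paper's implementation: the AISystem attributes, the assessment-area names, and the gating conditions are hypothetical placeholders chosen for readability; the actual checklist items and filtering criteria are defined in the paper itself.

from dataclasses import dataclass
from enum import Enum, auto


class Phase(Enum):
    """The four FRIA phases named in the abstract."""
    SYSTEM_OVERVIEW = auto()
    RIGHTS_CHECKLIST = auto()
    IMPACT_ASSESSMENT = auto()
    FINAL_OUTPUT = auto()


@dataclass
class AISystem:
    """Hypothetical descriptor of the system under assessment."""
    name: str
    is_high_risk: bool
    processes_personal_data: bool
    fully_automated_decisions: bool


def filter_assessment_areas(system: AISystem) -> list[str]:
    """Hypothetical filtering step: tailor the checklist to the system.

    The baseline areas and the trigger conditions are assumptions for
    illustration, not the paper's actual filtering rules.
    """
    areas = ["accountability", "AI literacy", "transparency"]
    if system.processes_personal_data:
        areas.append("data governance")
    if system.fully_automated_decisions:
        areas.append("human oversight")
    return areas


def run_fria(system: AISystem) -> dict:
    """Walk the gate-based phases; each gate must pass before the next opens."""
    report: dict = {"system": system.name, "phases": []}

    # Gate 1: system overview -- only high-risk systems proceed further.
    report["phases"].append(Phase.SYSTEM_OVERVIEW.name)
    if not system.is_high_risk:
        report["outcome"] = "out of scope: not classified as high-risk"
        return report

    # Gate 2: human rights checklist, narrowed by the filtering mechanism.
    areas = filter_assessment_areas(system)
    report["phases"].append(Phase.RIGHTS_CHECKLIST.name)
    report["areas"] = areas

    # Gate 3: impact assessment -- placeholder entries stand in for expert scoring.
    report["phases"].append(Phase.IMPACT_ASSESSMENT.name)
    report["risks"] = {area: "to be scored by assessors" for area in areas}

    # Gate 4: final output -- prioritized risks and a remediation plan.
    report["phases"].append(Phase.FINAL_OUTPUT.name)
    report["outcome"] = "assessment complete; mitigation plan required"
    return report


if __name__ == "__main__":
    # The abstract's fictional case study: an automated healthcare triage service.
    triage = AISystem(
        name="automated healthcare triage service",
        is_high_risk=True,
        processes_personal_data=True,
        fully_automated_decisions=True,
    )
    print(run_fria(triage))

Modeling each gate as an early return keeps the scoping decision (whether the system is high-risk at all) separate from the tailored assessment, mirroring how the filtering step narrows the checklist before any risk is scored or mitigated.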