Application of the NIST AI Risk Management Framework to Surveillance Technology

Nandhini Swaminathan, David Danks

arXiv.org Artificial Intelligence 

This study offers an in-depth analysis of the application and implications of the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) within the domain of surveillance technologies, particularly facial recognition technology. Given the inherently high-risk and consequential nature of facial recognition systems, our research emphasizes the critical need for a structured approach to risk management in this sector. The paper presents a detailed case study demonstrating the utility of the NIST AI RMF in identifying and mitigating risks that might otherwise remain unnoticed in these technologies. Our primary objective is to develop a comprehensive risk management strategy that advances the practice of responsible AI utilization in feasible, scalable ways. We propose a six-step process, tailored to the specific challenges of surveillance technology, that aims to produce a more systematic and effective risk management practice. This process emphasizes continual assessment and improvement, helping companies manage AI-related risks more robustly and ensure the ethical and responsible deployment of AI systems. These insights contribute to the evolving discourse on AI governance and risk management, highlighting areas for future refinement and development in frameworks like the NIST AI RMF.

Surveillance technologies are increasingly widespread in both public and private spaces, yet they are often developed and deployed with little engagement from relevant stakeholders. Most notably, the individuals subject to a surveillance technology are rarely included in creating that technology. As an illustration of both prominence and controversy, consider the AI system developed by Clearview AI Inc. to monitor and record the activities of individuals and groups, including rapid face identification. The system has come under close scrutiny for the way the organization scraped images and training data from the Internet; the company is currently under investigation in multiple jurisdictions for scraping billions of images from social media sites without users' consent [1, 2], and companies such as Facebook, Twitter, Venmo, and Google have issued cease-and-desist letters citing violations of their terms of service [3].
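
To make the continual assessment-and-improvement loop described above more concrete, the sketch below shows one way an organization might encode a recurring risk review organized around the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). This is a minimal, hypothetical illustration only: the class names, risk fields, scoring, and escalation threshold are assumptions introduced for exposition and are not the paper's proposed six-step process; only the four core function names come from the NIST AI RMF itself.

    # Hypothetical sketch: a recurring risk review keyed to the NIST AI RMF
    # core functions. Fields, scoring, and threshold are illustrative only.
    from __future__ import annotations
    from dataclasses import dataclass, field
    from enum import Enum


    class RMFFunction(Enum):
        GOVERN = "govern"
        MAP = "map"
        MEASURE = "measure"
        MANAGE = "manage"


    @dataclass
    class Risk:
        description: str          # e.g., "non-consensual training data collection"
        function: RMFFunction     # RMF core function under which the risk is tracked
        likelihood: float         # assumed 0-1 estimate
        impact: float             # assumed 0-1 estimate
        mitigations: list[str] = field(default_factory=list)

        @property
        def score(self) -> float:
            # Simple likelihood-times-impact score, assumed for illustration.
            return self.likelihood * self.impact


    def review_cycle(register: list[Risk], threshold: float = 0.25) -> list[Risk]:
        """Return risks exceeding the (assumed) escalation threshold, highest first."""
        return sorted(
            (r for r in register if r.score > threshold),
            key=lambda r: r.score,
            reverse=True,
        )


    if __name__ == "__main__":
        register = [
            Risk("Non-consensual training data collection", RMFFunction.MAP, 0.9, 0.8,
                 ["document data provenance", "obtain consent or remove data"]),
            Risk("Demographic error-rate disparities", RMFFunction.MEASURE, 0.6, 0.7,
                 ["disaggregated accuracy audits"]),
        ]
        for risk in review_cycle(register):
            print(f"{risk.function.value}: {risk.description} (score={risk.score:.2f})")

Run periodically (e.g., each deployment review), such a register would surface the highest-scoring risks for mitigation and re-assessment, mirroring the iterative character of the framework rather than any specific procedure from the paper.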
