Governing Artificial Intelligence

Technology companies should establish effective channels of communication with local civil society groups and researchers, particularly in regions where human rights concerns are acute, to identify and respond to risks arising from AI deployments. Technology companies and researchers should conduct Human Rights Impact Assessments (HRIAs) throughout the life cycle of their AI systems, and toolkits should be developed to address the needs of specific industries. Governments should acknowledge their human rights obligations and incorporate a duty to protect fundamental rights into national AI policies, guidelines, and any future regulations. Governments can also play a more active role in multilateral institutions, such as the UN, to advocate for AI development that respects human rights.
