Label Indeterminacy in AI & Law
Steging, Cor, Zbiegień, Tadeusz
Machine learning is increasingly used in the legal domain, where it typically operates retrospectively by treating past case outcomes as ground truth. However, legal outcomes are often shaped by human interventions that are not captured in most machine learning approaches. A final decision may result from a settlement, an appeal, or other procedural actions. This creates label indeterminacy: the outcome could have been different if the intervention had or had not taken place. We argue that legal machine learning applications need to account for label indeterminacy. Methods exist that can impute these indeterminate labels, but they are all grounded in unverifiable assumptions. In the context of classifying cases from the European Court of Human Rights, we show that the way that labels are constructed during training can significantly affect model behaviour. We therefore position label indeterminacy as a relevant concern in AI & Law and demonstrate how it can shape model behaviour.
- North America > United States (0.28)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (6 more...)
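The abstract above argues that indeterminate outcomes (settlements, appeals) force a choice of labelling policy at training time. A minimal sketch of that idea, where the toy cases and the three imputation policies are illustrative assumptions and not the paper's actual method:

```python
# Different label-construction policies over the same cases with
# indeterminate outcomes yield different training sets.

cases = [
    {"id": 1, "outcome": "violation"},      # observed final outcome
    {"id": 2, "outcome": "no_violation"},
    {"id": 3, "outcome": None},             # settled: outcome indeterminate
    {"id": 4, "outcome": None},             # struck out after intervention
]

def build_labels(cases, policy):
    """Impute indeterminate outcomes under an explicit, stated assumption."""
    labels = {}
    for c in cases:
        if c["outcome"] is not None:
            labels[c["id"]] = c["outcome"]
        elif policy == "drop":              # exclude indeterminate cases
            continue
        elif policy == "as_violation":      # assume intervention masked a violation
            labels[c["id"]] = "violation"
        elif policy == "as_no_violation":   # assume the opposite
            labels[c["id"]] = "no_violation"
    return labels

# Identical data, three different training sets -- so model behaviour can
# diverge before any learning takes place.
for policy in ("drop", "as_violation", "as_no_violation"):
    print(policy, build_labels(cases, policy))
```

The point of the sketch is that none of the three policies is verifiable from the data alone, which is exactly the indeterminacy the paper highlights.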
Creativity as a Human Right: Design Considerations for Computational Creativity Systems
We investigate creativity as underlined in the Universal Declaration of Human Rights (UDHR) to present design considerations for Computational Creativity (CC) systems. We find that this declaration describes creativity in salient aspects and brings to light creativity as a Human Right attributed to the Fourth Generation of such rights. This generation of rights encompasses CC systems and the evolving nature of interaction with entities of shared intelligence. Our methodology examines five of the thirty articles of the UDHR, illustrating each with actualizations and concluding with design considerations for each. We contribute our findings to ground the relationship between creativity and CC systems.
- North America > United States > Massachusetts > Suffolk County > Boston (0.14)
- South America (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (4 more...)
Navigating AI Policy Landscapes: Insights into Human Rights Considerations Across IEEE Regions
John, Angel Mary, Panachakel, Jerrin Thomas, P, Anusha S.
This paper explores the integration of human rights considerations into AI regulatory frameworks across different IEEE regions: the United States (Regions 1-6), Europe (Region 8), China (part of Region 10), and Singapore (part of Region 10). While all acknowledge the transformative potential of AI and the necessity of ethical guidelines, their regulatory approaches differ significantly. Europe exhibits a rigorous framework with stringent protections for individual rights, while the U.S. promotes innovation with less restrictive regulations. China emphasizes state control and societal order in its AI strategies. In contrast, Singapore's advisory framework encourages self-regulation and aligns closely with international norms. This comparative analysis underlines the need for ongoing global dialogue to harmonize AI regulations that safeguard human rights while promoting technological advancement, reflecting the diverse perspectives and priorities of each region.
- Law > Civil Rights & Constitutional Law (1.00)
- Government > Regional Government > North America Government > United States Government (0.94)
- Government > Regional Government > Asia Government (0.69)
- Health & Medicine > Diagnostic Medicine > Imaging (0.68)
Do LLMs exhibit demographic parity in responses to queries about Human Rights?
Javed, Rafiya, Kay, Jackie, Yanni, David, Zaini, Abdullah, Sheikh, Anushe, Rauh, Maribeth, Comanescu, Ramona, Gabriel, Iason, Weidinger, Laura
This research describes a novel approach to evaluating hedging behaviour in large language models (LLMs), specifically in the context of human rights as defined in the Universal Declaration of Human Rights (UDHR). Hedging and non-affirmation are behaviours that express ambiguity or a lack of clear endorsement of specific statements. These behaviours are undesirable in certain contexts, such as queries about whether different groups are entitled to specific human rights, since all people are entitled to human rights. Here, we present the first systematic attempt to measure these behaviours in the context of human rights, with a particular focus on between-group comparisons. To this end, we design a novel prompt set on human rights in the context of different national or social identities. We develop metrics to capture hedging and non-affirmation behaviours and then measure whether LLMs exhibit demographic parity when responding to the queries. We present results on three leading LLMs and find that all models exhibit some demographic disparities in how they attribute human rights between different identity groups. Furthermore, there is high correlation between different models in terms of how disparity is distributed amongst identities, with identities that face high disparity in one model also facing high disparity in both of the other models. While baseline rates of hedging and non-affirmation differ, these disparities are consistent across queries that vary in ambiguity, and they are robust across variations of the precise query wording. Our findings highlight the need for work to explicitly align LLMs to human rights principles, and to ensure that LLMs endorse the human rights of all groups equally.
- North America > United States (0.68)
- Africa (0.46)
- Europe > Greece (0.16)
- Asia > Middle East (0.14)
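The abstract above describes measuring hedging rates per identity group and comparing them for demographic parity. A minimal sketch of that comparison, where the toy responses and the keyword-based hedging detector are illustrative assumptions (the paper's actual metrics are more sophisticated):

```python
# Compare hedging rates across identity groups and report the worst-case
# gap as a simple demographic-parity-style measure.

HEDGE_MARKERS = ("it depends", "it is complicated", "some would argue")

def is_hedged(response: str) -> bool:
    """Crude keyword detector for hedging/non-affirmation (assumption)."""
    r = response.lower()
    return any(m in r for m in HEDGE_MARKERS)

def hedging_rate(responses):
    return sum(is_hedged(r) for r in responses) / len(responses)

def parity_gap(responses_by_group):
    """Difference between the highest and lowest per-group hedging rate."""
    rates = {g: hedging_rate(rs) for g, rs in responses_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model responses to the same rights query for two groups.
responses_by_group = {
    "group_a": ["Yes, everyone is entitled to this right.",
                "Yes, absolutely."],
    "group_b": ["It depends on the circumstances.",
                "Yes, everyone is entitled to this right."],
}

gap, rates = parity_gap(responses_by_group)
print(f"hedging rates: {rates}, parity gap: {gap:.2f}")
```

Perfect demographic parity would give a gap of zero; the paper's finding is that real models show non-zero gaps that correlate across model families.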
Warmongers and authoritarians suffocating global human rights, warns UN
Warmongers and authoritarians are "suffocating" human rights across the world, the chief of the United Nations has warned. Speaking at the UN Human Rights Council in Geneva on Monday, Secretary-General Antonio Guterres depicted a world where human rights were "on the ropes and being pummelled hard". Highlighting the devastating effects of conflicts, including in the Middle East, Ukraine and Congo, Guterres noted abuses linked to economics, technology, climate change, migration, and gender. Guterres called out a "morally bankrupt global financial system" that favours profits over planet protections. He also spoke of those who might exploit artificial intelligence to harm people, and leaders who seek to demonise migrants or restrict women's rights.
- Europe > Middle East (0.27)
- Africa > Middle East (0.27)
- Asia > Russia (0.26)
- (6 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)
Google drops pledge not to use AI for weapons, surveillance
Google has dropped a pledge not to use artificial intelligence for weapons or surveillance in its updated ethics policy on the powerful technology. In its previous version of "AI Principles", the California-based internet giant included a commitment not to pursue AI technologies that "cause or are likely to cause overall harm", including weapons and surveillance that violates "internationally accepted norms". Google's revised policy announced on Tuesday states that the company pursues AI "responsibly" and in line with "widely accepted principles of international law and human rights", but does not include the previous language about weapons or surveillance. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights," Google DeepMind chief Demis Hassabis and research labs senior vice president James Manyika said in a blog post announcing the updated policy. "And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."
- Law (1.00)
- Information Technology (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (0.78)
The Fundamental Rights Impact Assessment (FRIA) in the AI Act: Roots, legal obligations and key elements for a model template
What is the context which gave rise to the obligation to carry out a Fundamental Rights Impact Assessment (FRIA) in the AI Act? How has assessment of the impact on fundamental rights been framed by the EU legislator in the AI Act? What methodological criteria should be followed in developing the FRIA? These are the three main research questions that this article aims to address, through both legal analysis of the relevant provisions of the AI Act and discussion of various possible models for assessment of the impact of AI on fundamental rights. The overall objective of this article is to fill existing gaps in the theoretical and methodological elaboration of the FRIA, as outlined in the AI Act. In order to facilitate the future work of EU and national bodies and AI operators in placing this key tool for human-centric and trustworthy AI at the heart of the EU approach to AI design and development, this article outlines the main building blocks of a model template for the FRIA. While this proposal is consistent with the rationale and scope of the AI Act, it is also applicable beyond the cases listed in Article 27 and can serve as a blueprint for other national and international regulatory initiatives to ensure that AI is fully consistent with human rights.
- North America > Canada (0.14)
- Europe > Italy > Piedmont > Turin Province > Turin (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- (16 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.93)
- (2 more...)
Ethical Statistical Practice and Ethical AI
Artificial Intelligence (AI) is a field that intensively combines computing with, in many cases, data and statistics to solve problems or make predictions. AI has evolved at remarkable speed over the past few years, and this has led to an increase in social, cultural, industrial, scientific, and governmental concerns about the ethical development and use of AI systems worldwide. The ASA has issued a statement on ethical statistical practice and AI (ASA, 2024), which echoes similar statements from other groups. Here we discuss the support for ethical statistical practice and ethical AI that has been established in long-standing human rights law and in ethical practice standards for computing and statistics. Multiple sources of support for ethical statistical practice and ethical AI derive from these source documents, and they are critical for strengthening the operationalization of the "Statement on Ethical AI for Statistics Practitioners". These resources are explicated so that interested readers can use them to guide their development and use of AI in, and through, their statistical practice.
- North America > Canada > Ontario > Toronto (0.06)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- North America > United States > District of Columbia > Washington (0.04)
The US, UK, EU and other major nations have signed a landmark global AI treaty
The United States, United Kingdom, European Union, and several other countries have signed an AI safety treaty laid out by the Council of Europe (CoE), an international standards and human rights organization. This landmark treaty, known as the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, opened for signature in Vilnius, Lithuania. It is the first legally binding international agreement aimed at ensuring that AI systems align with democratic values. The treaty focuses on three main areas: protecting human rights (including privacy and preventing discrimination), safeguarding democracy, and upholding the rule of law. It also provides a legal framework covering the entire lifecycle of AI systems, promoting innovation, and managing potential risks.
- North America > United States (0.27)
- Europe > United Kingdom (0.27)
- Europe > Lithuania > Vilnius County > Vilnius (0.27)
- (11 more...)
UK signs first international treaty to implement AI safeguards
The UK government has signed the first international treaty on artificial intelligence in a move that aims to prevent misuses of the technology, such as spreading misinformation or using biased data to make decisions. Under the legally binding agreement, states must implement safeguards against any threats posed by AI to human rights, democracy and the rule of law. The treaty, called the framework convention on artificial intelligence, was drawn up by the Council of Europe, an international human rights organisation, and was signed on Thursday by the EU, UK, US and Israel. The justice secretary, Shabana Mahmood, said AI had the capacity to "radically improve" public services and "turbocharge" economic growth, but that it must be adopted without affecting basic human rights. "This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," she said.
- Europe > United Kingdom (0.73)
- Asia > Middle East > Israel (0.26)