AI Regulation in Europe: From the AI Act to Future Regulatory Challenges
This chapter provides a comprehensive discussion of AI regulation in the European Union, contrasting it with the more sectoral and self-regulatory approach in the UK. It argues for a hybrid regulatory strategy that combines elements from both philosophies, emphasizing the need for agility and safe harbors to ease compliance. The chapter examines the AI Act as a pioneering legislative effort to address the multifaceted challenges posed by AI, asserting that, while the Act is a step in the right direction, it has shortcomings that could hinder the advancement of AI technologies. It also anticipates upcoming regulatory challenges, such as the management of toxic content, environmental concerns, and hybrid threats, and advocates for immediate action to create protocols for regulated access to high-performance, potentially open-source AI systems. Although the AI Act is a significant legislative milestone, it needs additional refinement and global collaboration for the effective governance of rapidly evolving AI technologies.
- Asia > China (0.04)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- South America > Brazil (0.04)
- (5 more...)
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.97)
Computer says no. Will fairness survive in the AI age?
Hollywood has colourful notions about artificial intelligence (AI). The popular image is a future where robot armies spontaneously turn to malevolence, pitting humanity in a battle against extinction. In reality, the risks posed by AI today are more insidious and harder to unpick. They are often a by-product of the technology's seemingly endless application in modern society and its increasing role in everyday life, perhaps best highlighted by Microsoft's latest multi-billion-dollar investment in ChatGPT-maker OpenAI. Either way, it's unsurprising that AI generates so much debate, not least over how we can build regulatory safeguards to ensure we master the technology, rather than surrender control to the machines. Right now, we tackle AI using a patchwork of laws and regulations, as well as guidance that doesn't have the force of law. Against this backdrop, it's clear that current frameworks are likely to change – perhaps significantly.
- Europe > United Kingdom (0.49)
- North America > United States (0.15)
- Law (1.00)
- Information Technology > Security & Privacy (0.98)
- Government > Regional Government (0.71)
Artificial Intelligence and Automated Systems Legal Update (3Q22)
This quarter marked demonstrable progress toward sector-specific approaches to the regulation of artificial intelligence and machine learning ("AI"). As the EU continues to inch toward finalizing its draft Artificial Intelligence Act--the landmark, cross-sector regulatory framework for AI/ML technologies--the White House published a "Blueprint for an AI Bill of Rights," a non-binding set of principles memorializing the Biden administration's approach to algorithmic regulation. The AI Bill of Rights joins a number of recent U.S. legislative proposals, both at the federal and state levels,[1] and the Federal Trade Commission's ("FTC") Advanced Notice of Proposed Rulemaking to solicit input on questions related to potentially harmful data privacy and security practices, including automated decision-making systems. Our 3Q22 Artificial Intelligence and Automated Systems Legal Update focuses on these regulatory efforts and also examines other policy developments within the U.S. and Europe. The past several years have seen a number of new algorithmic governance initiatives take shape at the federal level, building on the December 2020 Trustworthy AI Executive Order that outlined nine distinct principles to ensure agencies "design, develop, acquire and use AI in a manner that fosters public trust and confidence while protecting privacy."[2]
- Europe > United Kingdom (0.69)
- North America > United States > New York (0.06)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- (6 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Regulating the future: A look at the EU's plan to reboot product liability rules for AI
A recently presented European Union plan to update long-standing product liability rules for the digital age -- including addressing the rising use of artificial intelligence (AI) and automation -- took some instant flak from the European consumer organization BEUC, which framed the update as something of a downgrade, arguing EU consumers will be left less well protected from harms caused by AI services than from other types of products. For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections, only last month the UK's data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform 'emotional analysis' -- urging that such tech should not be used for anything other than pure entertainment. On the public sector side, back in 2020, a Dutch court found an algorithmic welfare risk assessment for social security claimants breached human rights law. And, in recent years, the UN has also warned over the human rights risks of automating public service delivery. Additionally, US courts' use of blackbox AI systems to make sentencing decisions -- opaquely baking in bias and discrimination -- has been a tech-enabled crime against humanity for years. BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and ensure consumer protection laws are not outpaced.
- Law > Torts Law (1.00)
- Government > Regional Government > Europe Government (1.00)
- Law > Litigation (0.88)
Who is liable for my racist robot? - Innovation Origins
Manufacturers of products that make use of artificial intelligence are liable at all times for any damage those products cause. To better protect users' rights, the European Commission is tightening liability rules through the AI Liability Directive. This summer, the new Meta chatbot became the target of scorn. Just days after Blenderbot 3, from Facebook's parent company, launched online in the United States, the self-learning program had degenerated into a racist spreader of fake news. The same thing happened in 2016 with the Tay chatbot developed by Microsoft, which was designed to engage in conversations with real people on Twitter.
- Europe (0.44)
- North America > United States (0.25)
- Law > Civil Rights & Constitutional Law (0.62)
- Information Technology > Services (0.56)
- Government > Regional Government > Europe Government (0.44)
EU proposes rules making it easier to sue drone makers, AI systems
BRUSSELS, Sept 28 (Reuters) - The European Commission on Wednesday proposed rules making it easier for individuals and companies to sue makers of drones, robots and other products equipped with artificial intelligence software for compensation for harm caused by them. The AI Liability Directive aims to address the increasing use of AI-enabled products and services and the patchwork of national rules across the 27-country European Union. Under the draft rules, victims can seek compensation for harm to their life, property, health and privacy due to the fault or omission of a provider, developer or user of AI technology, or for discrimination in a recruitment process using AI. "We want the same level of protection for victims of damage caused by AI as for victims of old technologies," Justice Commissioner Didier Reynders told a news conference. The rules lighten the burden of proof on victims with a "presumption of causality", which means victims only need to show that a manufacturer or user's failure to comply with certain requirements caused the harm and then link this to the AI technology in their lawsuit.
EU Draft Rules Would Make It Easier to Sue Drone Makers, AI Systems
Individuals and companies that suffer harm from drones, robots and other products or services equipped with artificial intelligence software will find it easier to sue for compensation under EU draft rules seen by Reuters. The AI Liability Directive, which the European Commission will announce on Wednesday, aims to address the increasing proliferation of AI-enabled products and services and the patchwork of national rules across the 27-country European Union. Victims can sue for compensation for harm to their life, property, health and privacy due to the fault or omission of a provider, developer or user of AI technology, or for discrimination in a recruitment process using AI, the draft rules said. The rules seek to lighten the burden of proof on victims by introducing a "presumption of causality," which means victims only need to show that a manufacturer or user's failure to comply with certain requirements caused the harm and then link this to the AI technology in their lawsuit. Under a "right of access to evidence," victims can ask a court to order companies and suppliers to provide information about high-risk AI systems so that they can identify the liable person and find out what went wrong.