Robustness and Cybersecurity in the EU Artificial Intelligence Act
Nolte, Henrik, Rateike, Miriam, Finck, Michèle
The EU Artificial Intelligence Act (AIA) establishes different legal principles for different types of AI systems. While prior work has sought to clarify some of these principles, little attention has been paid to robustness and cybersecurity. This paper aims to fill this gap. We identify legal challenges and shortcomings in provisions related to robustness and cybersecurity for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). We show that robustness and cybersecurity demand resilience against performance disruptions. Furthermore, we assess potential challenges in implementing these provisions in light of recent advancements in the machine learning (ML) literature. Our analysis informs efforts to develop harmonized standards, guidelines by the European Commission, as well as benchmarks and measurement methodologies under Art. 15(2) AIA. With this, we seek to bridge the gap between legal terminology and ML research, fostering a better alignment between research and implementation efforts.
- Africa > Kenya (0.28)
- Europe > Germany > Baden-Württemberg > Tübingen Region > Tübingen (0.14)
- North America > United States (0.14)
- (2 more...)
- Research Report (1.00)
- Overview (0.67)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (1.00)
Balancing Innovation and Integrity: AI Integration in Liberal Arts College Administration
This paper explores the intersection of artificial intelligence and higher education administration, focusing on liberal arts colleges (LACs). It examines AI's opportunities and challenges in academic and student affairs, legal compliance, and accreditation processes, while also addressing the ethical considerations of AI deployment in mission-driven institutions. Considering AI's value pluralism and the potential allocative or representational harms caused by algorithmic bias, LACs must ensure AI aligns with their missions and principles. The study highlights strategies for responsible AI integration that balance innovation with institutional values.
- North America > United States > California (0.14)
- Asia > India (0.14)
- Europe > Switzerland (0.14)
- Instructional Material (0.67)
- Overview (0.67)
- Research Report (0.64)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (2 more...)
Sovereign Large Language Models: Advantages, Strategy and Regulations
Bondarenko, Mykhailo, Lushnei, Sviatoslav, Paniv, Yurii, Molchanovsky, Oleksii, Romanyshyn, Mariana, Filipchuk, Yurii, Kiulian, Artur
This report analyzes key trends, challenges, risks, and opportunities associated with the development of Large Language Models (LLMs) globally. It examines national experiences in developing LLMs and assesses the feasibility of investment in this sector. Additionally, the report explores strategies for implementing, regulating, and financing AI projects at the state level. International experiences indicate that LLMs significantly enhance administrative efficiency. In regulatory processes, they streamline the management of legal documents (Albania, Serbia), facilitate communication between government authorities and citizens (Netherlands), and support public procurement and legal translations (Albania).
- North America > United States (1.00)
- Asia > Middle East > Saudi Arabia (0.93)
- Europe > Netherlands (0.67)
- (16 more...)
- Law > Statutes (1.00)
- Information Technology > Services (1.00)
- Information Technology > Security & Privacy (1.00)
- (9 more...)
Without AI, We Won't Meet ESG Goals And Address Climate Change - Liwaiwai
The current state of ESG programmes is not making an adequate difference for climate change fast enough. AI can help provide comprehensive ESG management solutions, reporting capabilities and actionable emissions insights. AI can ingest huge amounts of data, pull signal from noise and give companies a roadmap to meet ESG goals that make a real difference. The world is in a precarious condition due to climate change. Not surprisingly, companies are facing immense pressure from investors and customers to improve their transparency and performance on ESG issues, and many are getting positive feedback for their success. But the current state…
Proposal for EU Artificial Intelligence Act Passes Next Level – Where Do We Stand and What's Next?
Following multiple amendments and discussions, the EU Member States – the Council of the EU – approved a compromise version of the proposed Artificial Intelligence Regulation (AI Act) on December 6, 2022. Once adopted, the AI Act will be the first horizontal legislation in the EU to regulate AI systems, introducing rules for the safe and trustworthy placing on the EU market of products with an AI component. The Regulation's extraterritorial scope (i.e., application to providers and users outside the EU when the output produced by the system is used in the EU) and its exceptionally high fines – up to €30 million or up to 6% of the company's total worldwide annual turnover for the preceding financial year, whichever is higher – are expected to shape regulatory requirements outside the EU's borders, as has been the case with the European General Data Protection Regulation (GDPR). The first proposal for an AI Act was published by the European Commission (Commission) in April 2021. The current version of the AI Act will next have to be adopted by the European Parliament (Parliament).
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
Africa prepares for age of robots - The Mail & Guardian
The adoption of robotics and artificial intelligence (AI) in Africa received a major boost after Uniccon Group, an Abuja-based tech startup, unveiled the continent's first humanoid robot. Omeife, the 1.8m female human-like robot, is African by design and has Igbo-like physical attributes. The battery-powered robot can speak Igbo, Yoruba, English, French, Swahili, Wazobia, Pidgin, Afrikaans and Arabic with native accents. Uniccon Group chief executive Chuks Ekwueme said: "Omeife also identifies objects and calculates positions and distances of objects." The launch of Omeife comes a few months after Abdul Malik Tejan-Sie, a South African-based Sierra Leonean innovator, presented a prototype of South Africa's first humanoid robot.
- Africa > Nigeria > Federal Capital Territory > Abuja (0.26)
- Africa > Kenya (0.12)
- Africa > Mauritius (0.07)
- (3 more...)
The European legal approach to artificial intelligence: what will it mean for businesses?
The European Union (hereinafter "the EU") often leads the way in establishing comprehensive legal frameworks for novel issues. As a reminder, it was a pioneer in the area of data protection through its adoption of the EU Data Protection Directive as early as 1995, and more recently through its enactment of the General Data Protection Regulation (GDPR) in 2016, the most stringent data protection law internationally. Similarly, the EU is currently pushing for the adoption of a detailed regulation for artificial intelligence (hereinafter "AI") systems, the Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts (hereinafter "the EU AI Act draft"). First presented in April 2021 by the European Commission, this law is a breakthrough endeavor that will surely have many repercussions, at the EU level but also internationally. In the AI sector, the EU AI Act is currently a flagship initiative that seeks to ensure the safety and trustworthiness of high-risk AI systems developed and used in the EU. It is the first law to solely address AI, and it is expected to become a "GDPR for AI".
- Law (1.00)
- Government > Regional Government > Europe Government (1.00)
Artificial Intelligence and Automated Systems Legal Update (3Q22)
This quarter marked demonstrable progress toward sector-specific approaches to the regulation of artificial intelligence and machine learning ("AI"). As the EU continues to inch toward finalizing its draft Artificial Intelligence Act – the landmark, cross-sector regulatory framework for AI/ML technologies – the White House published a "Blueprint for an AI Bill of Rights," a non-binding set of principles memorializing the Biden administration's approach to algorithmic regulation. The AI Bill of Rights joins a number of recent U.S. legislative proposals, at both the federal and state levels,[1] and the Federal Trade Commission's ("FTC") Advance Notice of Proposed Rulemaking soliciting input on questions related to potentially harmful data privacy and security practices, including automated decision-making systems. Our 3Q22 Artificial Intelligence and Automated Systems Legal Update focuses on these regulatory efforts and also examines other policy developments within the U.S. and Europe. The past several years have seen a number of new algorithmic governance initiatives take shape at the federal level, building on the December 2020 Trustworthy AI Executive Order that outlined nine distinct principles to ensure agencies "design, develop, acquire and use AI in a manner that fosters public trust and confidence while protecting privacy."[2]
- Europe > United Kingdom (0.69)
- North America > United States > New York (0.06)
- North America > United States > California > Santa Clara County > Palo Alto (0.05)
- (6 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
Regulating the future: A look at the EU's plan to reboot product liability rules for AI
A recently presented European Union plan to update long-standing product liability rules for the digital age -- including addressing the rising use of artificial intelligence (AI) and automation -- took some instant flak from the European consumer organization BEUC, which framed the update as something of a downgrade, arguing EU consumers will be left less well protected from harms caused by AI services than from other types of products. For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections, only last month the UK's data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform 'emotional analysis' -- urging that such tech should not be used for anything other than pure entertainment. On the public sector side, back in 2020, a Dutch court found that an algorithmic welfare risk assessment for social security claimants breached human rights law. And, in recent years, the UN has also warned over the human rights risks of automating public service delivery. Additionally, US courts' use of blackbox AI systems to make sentencing decisions -- opaquely baking in bias and discrimination -- has been a tech-enabled injustice for years. BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and ensure consumer protection laws are not outpaced.
- Law > Torts Law (1.00)
- Government > Regional Government > Europe Government (1.00)
- Law > Litigation (0.88)
Who Owns Copyright of AI-Generated Art?
AI image generators like DALL-E are rightfully raising concerns among artists about creative ownership. These generators are trained on millions of images and learn to identify things from actual existing photos. Such training images are more likely to be in the public domain and not owned by anyone, and therefore free to use. Read the small print, because the laws are changing fast.
- North America > United States > Virginia (0.05)
- North America > United States > New York (0.05)