General-purpose AI model
The EU publishes the first draft of regulatory guidance for general purpose AI models
On Thursday, the European Union published its first draft of a Code of Practice for general purpose AI (GPAI) models. The document, which won't be finalized until May, lays out guidelines for managing risks -- and gives companies a blueprint for complying and avoiding hefty penalties. The EU's AI Act came into force on August 1, but it left the specifics of GPAI regulation to be nailed down later. This draft (via TechCrunch) is the first attempt to clarify what's expected of those more advanced models, giving stakeholders time to submit feedback and refine the rules before they kick in. GPAI models are those trained with a total computing power of more than 10²⁵ FLOPs. Companies expected to fall under the EU's guidelines include OpenAI, Google, Meta, Anthropic and Mistral.
- Government (0.72)
- Information Technology > Security & Privacy (0.40)
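The 10²⁵ FLOPs threshold above is a simple numeric test. As a rough illustration, here is a minimal sketch of how one might check a model against it, assuming (this approximation is not from the article) the commonly used estimate of ~6 FLOPs per parameter per training token; the model sizes used are hypothetical.

```python
# Back-of-the-envelope check against the EU AI Act's GPAI compute threshold.
# Assumption (not from the article): training compute is approximated with
# the common 6 * N * D rule, where N = parameter count and D = training tokens.

GPAI_THRESHOLD_FLOPS = 1e25  # threshold cited in the draft Code of Practice


def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * params * tokens


def exceeds_gpai_threshold(params: float, tokens: float) -> bool:
    """True if the estimated training compute is above 10^25 FLOPs."""
    return estimated_training_flops(params, tokens) > GPAI_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> over threshold: {exceeds_gpai_threshold(70e9, 15e12)}")
```

Under this approximation, a 70B-parameter model trained on 15T tokens lands at roughly 6.3 × 10²⁴ FLOPs, just under the threshold, which is why the rule mainly captures the largest frontier-scale training runs.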
The Artificial Intelligence Act: critical overview
This article provides a critical overview of the recently approved Artificial Intelligence Act. It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689. A definition of key concepts follows, and then the material and territorial scope, as well as the timing of application, are analyzed. Although the Regulation does not explicitly set out principles, the main ideas of fairness, accountability, transparency, and equity in AI underlie a set of its rules. This is discussed before looking at the ill-defined set of forbidden AI practices (manipulation and exploitation of vulnerabilities, social scoring, biometric identification and classification, and predictive policing). It is highlighted that those rules deal with behaviors rather than AI systems. The qualification and regulation of high-risk AI systems are tackled, alongside the obligation of transparency for certain systems, the regulation of general-purpose models, and the rules on certification, supervision, and sanctions. The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose of promoting responsible innovation within the European Union and beyond its borders.
- Asia > India (0.04)
- North America > United States > Wisconsin (0.04)
- Asia > China (0.04)
- (18 more...)
- Law > Statutes (1.00)
- Law > Government & the Courts (1.00)
- Law > Criminal Law (1.00)
- (6 more...)
- Europe (1.00)
- North America > United States (0.49)
- Asia > China (0.30)
- (2 more...)
The E.U. Has Passed the World's First Comprehensive AI Law
AI-generated deepfake pictures, video or audio of existing people, places or events must be labeled as artificially manipulated. There's extra scrutiny for the biggest and most powerful AI models that pose "systemic risks," which include OpenAI's GPT-4 -- its most advanced system -- and Google's Gemini. The EU says it's worried that these powerful AI systems could "cause serious accidents or be misused for far-reaching cyberattacks." Officials also fear generative AI could spread "harmful biases" across many applications, affecting many people. Companies that provide these systems will have to assess and mitigate the risks; report any serious incidents, such as malfunctions that cause someone's death or serious harm to health or property; put cybersecurity measures in place; and disclose how much energy their models use. Brussels first proposed AI regulations in 2019, taking a familiar global role in ratcheting up scrutiny of emerging industries while other governments scramble to keep up. In the U.S., President Joe Biden signed a sweeping executive order on AI in October that's expected to be backed up by legislation and global agreements. In the meantime, lawmakers in at least seven U.S. states are working on their own AI legislation.
- North America > United States (1.00)
- Europe (1.00)
- Asia > China (0.31)
- (2 more...)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (1.00)
- Government > Regional Government > North America Government > United States Government (0.90)
- Government > Military > Cyberwarfare (0.90)
How no-code, reusable AI will bridge the AI divide
In 1960, J.C.R. Licklider, an MIT professor and an early pioneer of artificial intelligence, already envisioned our future world in his seminal article, "Man-Computer Symbiosis": "In the anticipated symbiotic partnership, men will set the goals, formulate the hypotheses, determine the criteria, and perform the evaluations. Computing machines will do the routinizable work that must be done to prepare the way for insights and decisions in technical and scientific thinking." In today's world, such "computing machines" are known as AI assistants. However, developing AI assistants is a complex, time-consuming process, requiring deep AI expertise and sophisticated programming skills, not to mention the effort of collecting, cleaning, and annotating the large amounts of data needed to train them. It is thus highly desirable to reuse the whole or parts of an AI assistant across different applications and domains.
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.70)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.47)