Upstream and Downstream AI Safety: Both on the Same River?
McDermid, John, Jia, Yan, Habli, Ibrahim
Traditional safety engineering assesses systems in their context of use, e.g. the operational design domain (road layout, speed limits, weather, etc.) for self-driving vehicles (including those using AI). We refer to this as downstream safety. In contrast, work on the safety of frontier AI, e.g. large language models, which can be further trained for downstream tasks, typically considers factors that are beyond specific application contexts, such as the ability of the model to evade human control, or to produce harmful content, e.g. how to make bombs. We refer to this as upstream safety. We outline the characteristics of both upstream and downstream safety frameworks, then explore the extent to which the broad AI safety community can benefit from synergies between these frameworks. For example, can concepts such as common mode failures from downstream safety be used to help assess the strength of AI guardrails? Further, can the understanding of the capabilities and limitations of frontier AI be used to inform downstream safety analysis, e.g. where LLMs are fine-tuned to calculate voyage plans for autonomous vessels? The paper identifies some promising avenues to explore and outlines some challenges in achieving synergy, or a confluence, between upstream and downstream safety frameworks.
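The common-mode-failure question the abstract raises can be made concrete with a small illustrative sketch (not taken from the paper; all failure probabilities and the correlation model are hypothetical assumptions): stacking guardrails looks very strong under an independence assumption, but a shared failure mode erodes most of that benefit.

```python
# Illustrative sketch only: how a common-mode failure assumption changes
# the estimated strength of layered AI guardrails. The probabilities and
# the linear-interpolation correlation model are hypothetical.

def stacked_failure_prob(p_each: float, n_layers: int, rho: float) -> float:
    """Probability that all n_layers guardrails are bypassed together.

    rho = 0 -> layers fail independently (the optimistic estimate).
    rho = 1 -> layers share one common failure mode (they fail as one).
    Crude model: linear interpolation between the two extremes.
    """
    independent = p_each ** n_layers
    common_mode = p_each
    return (1 - rho) * independent + rho * common_mode

# Three guardrails, each hypothetically bypassed 1% of the time.
optimistic = stacked_failure_prob(0.01, 3, rho=0.0)  # approx. 1e-6
correlated = stacked_failure_prob(0.01, 3, rho=0.1)  # approx. 1e-3
```

Even a modest common-mode component (rho = 0.1) makes the stack roughly a thousand times weaker than the independence assumption suggests, which is exactly the kind of insight downstream safety analysis could bring to upstream guardrail assessment.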
- North America > United States (0.46)
- Europe > France (0.04)
- Europe > Germany (0.04)
- (3 more...)
- Transportation (1.00)
- Health & Medicine (1.00)
- Government (1.00)
- (2 more...)
The Dilemma of Uncertainty Estimation for General Purpose AI in the EU AI Act
Valdenegro-Toro, Matias, Stoykova, Radina
The AI Act is the European Union-wide regulation of AI systems. It includes specific provisions for general-purpose AI models, which however need to be further interpreted in terms of technical standards and state-of-the-art studies to ensure practical compliance solutions. This paper examines the AI Act requirements for providers and deployers of general-purpose AI and further proposes uncertainty estimation as a suitable measure for legal compliance and quality assurance in the training of such models. We argue that uncertainty estimation should be a required component for deploying models in the real world, and under the EU AI Act, it could fulfill several requirements for transparency, accuracy, and trustworthiness. However, uncertainty estimation methods generally increase the amount of computation, producing a dilemma, as the compute used might exceed the threshold ($10^{25}$ FLOPS) above which the model is classified as posing systemic risk and bears a heavier regulatory burden.
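The dilemma can be sketched numerically. The sketch below uses the common C ≈ 6·N·D rule of thumb for dense-transformer training compute (N parameters, D training tokens); the model size, token count, and the ensemble factor K are all hypothetical assumptions, not figures from the paper.

```python
# Hedged sketch: how uncertainty estimation could push training compute
# over the EU AI Act's 10^25 FLOP systemic-risk presumption threshold.
# Model size, token count, and ensemble size K are assumed values.

THRESHOLD_FLOPS = 1e25  # EU AI Act presumption threshold for systemic risk

def training_flops(n_params: float, n_tokens: float) -> float:
    """Dense-transformer training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens

# Hypothetical model: 70e9 parameters trained on 15e12 tokens.
base = training_flops(70e9, 15e12)  # approx. 6.3e24 FLOPs, under threshold

# Deep-ensemble uncertainty estimation trains K independent copies,
# multiplying training compute by K (here K = 5, an assumed choice).
with_uncertainty = base * 5         # approx. 3.15e25 FLOPs, over threshold
```

Under these assumptions the base model sits comfortably below the threshold, while the same model trained as a five-member deep ensemble crosses it, which is precisely the regulatory dilemma the paper describes.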
- North America > Canada (0.28)
- Europe > Austria > Vienna (0.14)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
ChatGPT And More: Large Scale AI Models Entrench Big Tech Power - AI Now Institute
Large-scale AI models like Large Language Models (LLMs) have received the most hype, and fear-mongering, over the past year. These narratives distract from what we call the "pathologies of scale" that become more entrenched every day: large-scale AI models are still largely controlled by Big Tech firms because of the enormous computing and data resources they require, and they also present well-documented concerns around discrimination, privacy and security vulnerabilities, and negative environmental impacts. Sources cited: "Opinion | You Can Have the Blue Pill or the Red Pill, and We're Out of Blue Pills"; Greg Noone, "'Foundation models' may be the future of AI. They're also deeply flawed," Tech Monitor, November 11, 2021 (updated February 9, 2023); Dan McQuillan, "We Come to Bury ChatGPT, Not to Praise It," danmcquillan.org.
- Europe (0.69)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Media (0.94)
- Government > Regional Government > Europe Government (0.47)
India selects Gnani.ai CEO as representative at global AI partnership
Ganesh Gopalan, chief executive officer of Indian voice biometrics firm Gnani.ai, will be the country's representative at the Global Partnership on Artificial Intelligence (GPAI). In an announcement, Gnani.ai said its CEO will be part of the multistakeholder collaboration working to bridge the gap between theory and practice in the field of AI by enhancing innovative research and carrying out applied initiatives on key AI-related priorities. "It is a privilege to be invited to join GPAI and participate in the creation of ethical AI solutions that can positively impact society. At Gnani.ai, our goal is to create AI that is transparent, ethical, and inclusive, and I am excited to exchange knowledge with other experts in the field and contribute our own insights," says Gopalan. The Gnani.ai executive also expressed gratitude to India's Minister of State for Electronics and Information Technology Shri Rajeev Chandrasekhar for his motivation.
India to assume the Chair of Global Partnership on Artificial Intelligence
India will take over the chair of the Global Partnership on Artificial Intelligence (GPAI) from France, the outgoing Council Chair, on November 21, 2022, at a meeting to be held in Tokyo. The Minister of State for Electronics & Information Technology and Skill Development & Entrepreneurship, Rajeev Chandrasekhar, will represent India at the GPAI meeting. GPAI is an international initiative to support responsible and human-centric development and use of Artificial Intelligence (AI). This development comes on the heels of India assuming the presidency of the G20, a league of the world's largest economies. GPAI is a congregation of 25 member countries, including the US, the UK, the EU, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, the Republic of Korea, and Singapore.
- Asia > India (1.00)
- Europe > France (0.54)
- North America > Canada (0.34)
- (8 more...)
Why the EU's Artificial Intelligence Act could harm innovation
The EU's proposed Artificial Intelligence Act plans to restrict open-source AI. The proposed, and still debated, Artificial Intelligence Act (AIA) from the EU touches upon the regulation of open-source AI. But enforcing strict restrictions on the sharing and distribution of open-source general-purpose AI (GPAI) is a completely retrograde step. It is like rewinding the world back 30 years. Open-source culture is the only reason mankind was able to advance technology at such speed. Only recently have AI researchers embraced sharing their code for greater transparency and verification, and putting constraints on this movement will damage the cultural progress the scientific community has made.
The Four Steps to Combating Climate Change With AI (Part I) - UX Connections
Unprecedented heat waves, long droughts, intense floods, and biodiversity losses--the signs are all around us: humanity has locked itself in a long, drawn-out war against climate change. The vital signs of the planet are fluctuating, and only timely climate action involving governments, corporations and individuals can keep our ecosystems from incurring irreversible damage. Fortunately, the tools powered by artificial intelligence can play an instrumental role in preserving our planet and its biodiversity for posterity. Artificial intelligence technology has been a subject of interest in the international discourse regarding sustainable development for years. The Global Partnership on Artificial Intelligence (GPAI) is a multi-stakeholder forum for world governments and experts leading the effort to devise strategies to bring AI solutions to aid climate action.
Israel joins international artificial intelligence group
Israel joined the Global Partnership on Artificial Intelligence today, Nov. 11, becoming the 20th member of the organization created two years ago under French and Canadian leadership. Four other countries were also accepted into the organization, and four countries saw their candidacies rejected, at least for the moment, at the GPAI's annual event. The GPAI headquarters are located within the OECD in Paris, with another hub in Canada. He explained to Al-Monitor that the organization is made up of countries with advanced artificial intelligence technologies that believe in the values of equality and democracy promoted by the OECD. "Artificial intelligence has been a much-debated topic worldwide, also generating fears. These technologies bring about very positive impacts and possibilities, but are also quite complex and could be sensitive to society. And so the states wanted to have a multi-stakeholder initiative that could advise them and make recommendations," he said.
- Asia > Middle East > Israel (0.81)
- North America > Canada (0.36)
- Europe > France (0.07)
- Government > Foreign Policy (0.31)
- Government > Commerce (0.31)
Regulation of AI Remains Elusive
Despite the wave of national strategies on artificial intelligence that has washed over the world, none besides the European Union have yet proposed or published specific ethical or legal frameworks for the technology. Over the past several years, a wave of national strategies on artificial intelligence (AI) has washed over the world, with many jurisdictions introducing policies for its regulation. With the exception of the European Union (EU), none have yet proposed or published specific ethical or legal frameworks for AI. Canada led the way, announcing national AI policies in 2017, and has since been followed by many other jurisdictions. The Organization for Economic Co-operation and Development (OECD) AI Policy Observatory early last year released a continuously updated database of over 600 AI policy initiatives from 60 countries, territories, and the EU. Of course, not all are the same, but some are noteworthy.
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.35)
- Government > Regional Government > North America Government > United States Government (0.30)