AI threats to national security can be countered through an incident regime
Recent progress in AI capabilities has heightened concerns that AI systems could pose a threat to national security, for example, by making it easier for malicious actors to perform cyberattacks on critical national infrastructure, or through loss of control of autonomous AI systems. In parallel, federal legislators in the US have proposed nascent 'AI incident regimes' to identify and counter similar threats. In this paper, we consolidate these two trends and present a timely proposal for a legally mandated post-deployment AI incident regime that aims to counter potential national security threats from AI systems. We start the paper by introducing the concept of 'security-critical' to describe sectors that pose extreme risks to national security, before arguing that 'security-critical' describes civilian nuclear power, aviation, life science dual-use research of concern, and frontier AI development. We then present in detail our AI incident regime proposal, justifying each component of the proposal by demonstrating its similarity to US domestic incident regimes in other 'security-critical' sectors. Finally, we sketch a hypothetical scenario where our proposed AI incident regime deals with an AI cyber incident. Our proposed AI incident regime is split into three phases. The first phase revolves around a novel operationalization of what counts as an 'AI incident' and we suggest that AI providers must create a 'national security case' before deploying a frontier AI system. The second and third phases spell out that AI providers should notify a government agency about incidents, and that the government agency should be involved in amending AI providers' security and safety procedures, in order to counter future threats to national security.
- Europe > United Kingdom (0.14)
- Europe > Russia (0.14)
- Europe > Belarus (0.14)
- (11 more...)
AICat: An AI Cataloguing Approach to Support the EU AI Act
Golpayegani, Delaram, Pandit, Harshvardhan J., Lewis, Dave
The European Union's Artificial Intelligence Act (AI Act) requires providers and deployers of high-risk AI applications to register their systems into the EU database, wherein the information should be represented and maintained in an easily-navigable and machine-readable manner. Given the uptake of open data and Semantic Web-based approaches for other EU repositories, in particular the use of the Data Catalogue vocabulary Application Profile (DCAT-AP), a similar solution for managing the EU database of high-risk AI systems is needed. This paper introduces AICat - an extension of DCAT for representing catalogues of AI systems that provides consistency, machine-readability, searchability, and interoperability in managing open metadata regarding AI systems. This open approach to cataloguing ensures transparency, traceability, and accountability in AI application markets beyond the immediate needs of high-risk AI compliance in the EU. AICat is available online at https://w3id.org/aicat under the CC-BY-4.0 license.
- Europe > Ireland > Leinster > County Dublin > Dublin (0.14)
- Europe > Switzerland (0.04)
- Europe > Italy (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.36)
- Information Technology > Artificial Intelligence > Machine Learning (0.68)
- Information Technology > Communications > Web > Semantic Web (0.67)
An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards
Hernandez, Julio, Golpayegani, Delaram, Lewis, Dave
The many initiatives on trustworthy AI result in a confusing and multipolar landscape that organizations operating within the fluid and complex international value chains must navigate in pursuing trustworthy AI. The EU's AI Act will now shift the focus of such organizations toward conformance with the technical requirements for regulatory compliance, for which the Act relies on Harmonized Standards. Though a high-level mapping to the Act's requirements will be part of such harmonization, determining the degree to which standards conformity delivers regulatory compliance with the AI Act remains a complex challenge. Variance and gaps in the definitions of concepts and how they are used in requirements between the Act and harmonized standards may impact the consistency of compliance claims across organizations, sectors, and applications. This may present regulatory uncertainty, especially for SMEs and public sector bodies relying on standards conformance rather than proprietary equivalents for developing and deploying compliant high-risk AI systems. To address this challenge, this paper offers a simple and repeatable mechanism for mapping the terms and requirements relevant to normative statements in regulatory and standards texts, e.g., the AI Act and ISO management system standards, into open knowledge graphs. This representation is used to assess the adequacy of standards conformance to regulatory compliance and thereby provide a basis for identifying areas where further technical consensus development in trustworthy AI value chains is required to achieve regulatory compliance.
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Law > Statutes (0.68)
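The mapping approach described in the paper above can be illustrated with a minimal sketch: normative statements represented as subject-predicate-object triples, the building blocks of an open knowledge graph. The identifiers below (aiact:Art9, iso42001:Clause6) and the predicate names are illustrative assumptions, not taken from the paper's actual schema.

```python
# Toy triple store: each normative statement or mapping is one
# (subject, predicate, object) triple, as in an RDF knowledge graph.
triples = set()

def add(subject: str, predicate: str, obj: str) -> None:
    triples.add((subject, predicate, obj))

# A requirement from the AI Act and a candidate mapping to a clause
# in a harmonized standard (identifiers are hypothetical).
add("aiact:Art9", "states", "Establish a risk management system")
add("aiact:Art9", "mapsTo", "iso42001:Clause6")
add("iso42001:Clause6", "definedIn", "ISO/IEC 42001")

def mapped_clauses(requirement: str) -> list:
    """Query the graph: which standard clauses does a requirement map to?"""
    return sorted(o for s, p, o in triples if s == requirement and p == "mapsTo")
```

In practice such triples would be serialized in a Semantic Web format (e.g., Turtle) so that gaps between regulatory requirements and standard clauses can be queried and audited mechanically.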
Germany and the EU Artificial Intelligence Act – AICGS
Dr. Axel Spies is a German attorney (Rechtsanwalt) in Washington, DC, and co-publisher of the German journals Multi-Media-Recht (MMR) and Zeitschrift für Datenschutz (ZD). The impact of the Artificial Intelligence Act (AIA) proposed by the European Commission, and currently debated at the European Parliament (EP), has been underestimated in the United States. With approximately 3,000 amendments that must be reconciled, the AIA represents the first attempt to regulate artificial intelligence (AI) by a uniform law from cradle to grave. The AIA focuses on the providers of AI services that place them on the market or use them for their own purposes. Germany is actively contributing to the debate: AI is mentioned in the federal government's Coalition Treaty as a "digital key technology" and a European AIA is generally supported.
- Europe > Germany (0.87)
- North America > United States > District of Columbia > Washington (0.25)
- North America > United States > California (0.05)
- (4 more...)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.92)
Leading lawmakers pitch extending scope of AI rulebook to the metaverse
The European Parliament's co-rapporteurs Dragoş Tudorache and Brando Benifei circulated two new batches of compromise amendments, seen by EURACTIV, on Wednesday (28 September), ahead of the technical discussion with the other political groups on Friday. These latest batches introduce significant changes to the regulation's scope, subject matter and obligations for high-risk AI systems concerning risk management, data governance and technical documentation. A new article has been added to extend the regulation's scope to AI system operators in specific metaverse environments that meet several cumulative conditions. These criteria are that the metaverse requires an authenticated avatar, is built for interaction on a large scale, allows social interactions similar to the real world, engages in real-world financial transactions and entails health or fundamental rights risks. The scope has been expanded from AI providers to any economic operators placing an AI system on the market or putting it into service.
- Government (1.00)
- Information Technology > Security & Privacy (0.75)
Mobotix Acquires AI Provider, Vaxtor Group
"This acquisition is a significant step in our strategy of strengthening our artificial intelligence and deep learning capabilities and, whilst Vaxtor will continue to operate as a standalone company, is based on compelling commercial and development synergies and clear strategic benefits for both parties," said Mobotix CEO Thomas Lausten. Based in Spain, Vaxtor has a global customer base across 50 countries, delivering camera-agnostic, edge-based video analytics in various sectors. Its video analytics technology allows the automated capture of letters, numbers, or other machine- and human-readable data, and enables such information to be recorded and processed cost-effectively at high speed so as to trigger related processes. According to Mobotix, this is a facilitator and accelerator for Mobotix's vertical market strategy, as these technologies can be applied in government, retail, and transportation sectors, for example tracking containers, vehicles, and aircraft, as well as in logistics and manufacturing applications. In addition, Vaxtor's products are tailor-made for the Mobotix 7 high-performance camera platform, meaning its analytics apps can run decentralized onboard the camera, removing the need for peripheral hardware.
Increased Data Security Using 'EzPC' In The Machine Learning Model Validation Process
Artificial intelligence (AI) has revolutionized various industries in the last decade, from manufacturing and logistics to agriculture and transportation--examples include improving predictive analytics on the manufacturing floor and making microclimate predictions that help farmers respond in time to save their crops. AI adoption is projected to accelerate in the coming years, emphasizing the importance of an efficient adoption process that protects data privacy. Firms that want to incorporate AI into their workflow undergo a model validation process. They test or verify AI models from different suppliers before choosing the one that best matches their needs. This is typically done with a test dataset provided by the organization.
Can conversational AI make your customers happier? - Tech Wire Asia
Conversational AI (artificial intelligence) is now becoming a sought-after technology as businesses look to improve their response speed and provide better services to customers. However, despite the advancements in technology to perfect conversational AI, businesses in Southeast Asia are still struggling to implement the software. The biggest problem for conversational AI in APAC is the language itself. Conversational AI refers to the use of artificial intelligence in chatbots or voice assistants. Using large volumes of data, machine learning, and natural language processing, the AI imitates human interaction by recognizing speech or text inputs and responding to them with a set of predetermined replies.
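The "predetermined replies" idea described above can be sketched in a few lines. This is a deliberately minimal keyword matcher, not how production conversational AI works (real systems use trained NLP models for intent recognition); all keywords and responses here are illustrative.

```python
# Toy intent matcher: map a recognized keyword in the user's text input
# to a canned response, falling back when nothing matches.
PREDETERMINED_REPLIES = {
    "hours": "We are open 9am-6pm, Monday to Friday.",
    "price": "Our plans start at $10/month.",
    "refund": "Refunds are processed within 5 business days.",
}

FALLBACK = "Sorry, I didn't understand. Could you rephrase?"

def reply(user_input: str) -> str:
    """Return the first predetermined reply whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, response in PREDETERMINED_REPLIES.items():
        if keyword in text:
            return response
    return FALLBACK
```

The language problem the article raises is visible even in this sketch: the keyword table and any upstream speech or text recognition must exist separately for every language a Southeast Asian deployment needs to support.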
Artificial intelligence: UK and EU take legislative steps - convergence or divergence?
In March this year, the UK government announced an assertive agenda on artificial intelligence (AI) by launching a UK Cyber Security Council and revealing plans to publish a National Artificial Intelligence Strategy (the UK Strategy). The details of the UK Strategy will be released later this year, but at this point we understand that it will focus in particular on promoting economic growth through widespread use of AI while emphasizing ethical, safe, and trustworthy development of AI--including through the development of a legislative framework for AI which will promote public trust and a level playing field. Shortly after the UK government's announcement, the EU Commission published a proposed EU-wide AI legislative framework (the EU Regulation) which is part of the Commission's overall "AI package". The EU Regulation is focused on ensuring the safety of individuals and the protection of fundamental human rights, and categorises AI into unacceptable, high- or low-risk use cases. The EU Regulation proposes to protect users "where the risks that the AI systems pose are particularly high". The definition and categories of high-risk use cases of AI are broad, and capture many if not most use cases that relate to individuals, including AI use in the context of biometric identification and categorisation of natural persons, management of critical infrastructure, and employment and worker management.
European Union: New Draft Rules on the Use of Artificial Intelligence
On 21 April 2021, the European Commission published draft regulations ("AI Regulations") governing the use of artificial intelligence (AI). The European Parliament and the member states have not yet adopted these proposed AI Regulations. The European Commission's proposed AI Regulations are the first attempt the world has seen at creating a uniform legal framework governing the use, development and marketing of AI. They will likely have a resounding impact on all businesses that use AI for years to come. The AI Regulations will become effective 20 days after publication in the Official Journal.
- Law > Statutes (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.60)