Robustness and Cybersecurity in the EU Artificial Intelligence Act

Nolte, Henrik, Rateike, Miriam, Finck, Michèle

arXiv.org Artificial Intelligence

The EU Artificial Intelligence Act (AIA) establishes different legal principles for different types of AI systems. While prior work has sought to clarify some of these principles, little attention has been paid to robustness and cybersecurity. This paper aims to fill this gap. We identify legal challenges and shortcomings in provisions related to robustness and cybersecurity for high-risk AI systems (Art. 15 AIA) and general-purpose AI models (Art. 55 AIA). We show that robustness and cybersecurity demand resilience against performance disruptions. Furthermore, we assess potential challenges in implementing these provisions in light of recent advancements in the machine learning (ML) literature. Our analysis informs efforts to develop harmonized standards, guidelines by the European Commission, as well as benchmarks and measurement methodologies under Art. 15(2) AIA. With this, we seek to bridge the gap between legal terminology and ML research, fostering a better alignment between research and implementation efforts.


Balancing Innovation and Integrity: AI Integration in Liberal Arts College Administration

Read, Ian Olivo

arXiv.org Artificial Intelligence

This paper explores the intersection of artificial intelligence and higher education administration, focusing on liberal arts colleges (LACs). It examines AI's opportunities and challenges in academic and student affairs, legal compliance, and accreditation processes, while also addressing the ethical considerations of AI deployment in mission-driven institutions. Given AI's value pluralism and the potential allocative or representational harms caused by algorithmic bias, LACs must ensure that AI aligns with their missions and principles. The study highlights strategies for responsible AI integration that balance innovation with institutional values.


Sovereign Large Language Models: Advantages, Strategy and Regulations

Bondarenko, Mykhailo, Lushnei, Sviatoslav, Paniv, Yurii, Molchanovsky, Oleksii, Romanyshyn, Mariana, Filipchuk, Yurii, Kiulian, Artur

arXiv.org Artificial Intelligence

This report analyzes key trends, challenges, risks, and opportunities associated with the development of Large Language Models (LLMs) globally. It examines national experiences in developing LLMs and assesses the feasibility of investment in this sector. Additionally, the report explores strategies for implementing, regulating, and financing AI projects at the state level. International experiences indicate that LLMs significantly enhance administrative efficiency. In regulatory processes, they streamline the management of legal documents (Albania, Serbia), facilitate communication between government authorities and citizens (Netherlands), and support public procurement and legal translations (Albania).


Biden admin warns AI in schools may exhibit racial bias, anti-trans discrimination and trigger investigations

FOX News

Many people in Nashville say they don't trust artificial intelligence chatbots to give them unbiased information amid the backlash Google faces over its Gemini program. On Tuesday, the Department of Education's Office for Civil Rights (OCR) released presidentially-mandated guidance that lays out how schools' use of artificial intelligence (AI) can be discriminatory toward minority and transgender students, "likely" opening them up to federal investigations. President Biden signed Executive Order 14110 last year mandating that the Education Department develop resources, policies and guidance regarding AI in schools to help ensure responsible and non-discriminatory use, "including the impact AI systems have on vulnerable and underserved communities." "The growing use of AI in schools, including for instructional and school safety purposes, and AI's ability to operate on a mass scale can create or contribute to discrimination," the Education Department's guidance states. "This resource provides information regarding federal civil rights laws in OCR's jurisdiction and includes examples of types of incidents that could, depending on the facts and circumstances, present OCR with sufficient reason to open an investigation."


Unfair Automated Hiring Systems Are Everywhere

WIRED

Earlier this month, Lina Khan, chair of the US Federal Trade Commission (FTC), wrote an essay in The New York Times affirming the agency's commitment to regulating AI. But there was one AI application Khan didn't mention that the FTC urgently needs to regulate: automated hiring systems. These range in complexity from tools that merely parse resumes and rank them to systems that green-light candidates and trash applicants deemed unfit. Increasingly, working Americans are obligated to use them if they want to get hired.


EU approves Microsoft's takeover of Activision Blizzard

The Guardian

The EU has approved Microsoft's $69bn (£55bn) acquisition of the Call of Duty creator Activision Blizzard, in a move that puts Brussels at loggerheads with its UK counterpart over the gaming mega-deal. The EU accepted Microsoft's concessions on cloud gaming, the same problem that led the Competition and Markets Authority to block the transaction last month. The proposed deal would bring together Microsoft, the maker of the Xbox console, with the video game developer behind titles including World of Warcraft, Hearthstone, Candy Crush Saga and Overwatch. The move by the European Commission, the bloc's executive arm, will revive Microsoft's hopes for the deal as it prepares to appeal against the CMA's decision. The Federal Trade Commission in the US has also come out against the takeover and is suing to block it.


Biden administration is giving away America's AI dominance

FOX News

We're now seeing the Democrats say the quiet part out loud when it comes to artificial intelligence: they want to control AI development for political purposes. These efforts are not just going to further divide the country, but they will kneecap America's next decade of innovation. This assault on innovation is occurring at both the executive and legislative levels, where Democrats are using the novelty of AI to seize control over speech. Earlier this year, President Biden signed an executive order for agencies to "root out bias" by requiring diversity, equity and inclusion training for AI – ensuring any results are woke approved. This month, Vice President Kamala Harris is also getting in on the AI action by meeting with developers to ensure "equity" in AI.


Proposal for EU Artificial Intelligence Act Passes Next Level – Where Do We Stand and What's Next?

#artificialintelligence

Following multiple amendments and discussions, the EU Member States – the Council of the EU – approved a compromise version of the proposed Artificial Intelligence Regulation (AI Act) on December 6, 2022. Once adopted, the AI Act will be the first horizontal legislation in the EU to regulate AI systems, introducing rules for the safe and trustworthy placing on the EU market of products with an AI component. The Regulation's extraterritorial scope (i.e., its application to providers and users outside the EU when the output produced by the system is used in the EU) and its exceptionally high fines – up to €30 million or 6% of the company's total worldwide annual turnover for the preceding financial year, whichever is higher – are expected to shape regulatory requirements beyond the EU's borders, as has been the case with the European General Data Protection Regulation (GDPR). The first proposal for an AI Act was published by the European Commission (Commission) in April 2021. The current version of the AI Act will next have to be adopted by the European Parliament (Parliament).


Council Post: AI And Machine Learning In The Workplace: Preparing For 2023

#artificialintelligence

President & CEO of BBB National Programs, a non-profit organization dedicated to fostering a more accountable, trustworthy marketplace. In recent years, government scrutiny over the use of artificial intelligence (AI) tools in the recruiting and hiring process has risen. Since I wrote about this topic last year, there has been significant activity within several federal government agencies regarding the use of AI and machine learning in the employment context. A better understanding of these actions can help business leaders reduce their risk of legal liability and better understand how to use AI and machine learning responsibly in their organizations. The Equal Employment Opportunity Commission (EEOC) has been particularly active through its EEOC initiative on AI and algorithmic fairness and its joint HIRE initiative with the U.S. Department of Labor.


Africa prepares for age of robots - The Mail & Guardian

#artificialintelligence

The adoption of robotics and artificial intelligence (AI) in Africa received a major boost after Uniccon Group, an Abuja-based tech startup, unveiled the continent's first humanoid robot. Omeife, the 1.8m female human-like robot, is African by design and has Igbo-like physical attributes. The battery-powered robot can speak Igbo, Yoruba, English, French, Swahili, Wazobia, Pidgin, Afrikaans and Arabic with native accents. Uniccon Group chief executive Chuks Ekwueme said: "Omeife also identifies objects and calculates positions and distances of objects." The launch of Omeife comes a few months after Abdul Malik Tejan-Sie, a South African-based Sierra Leonean innovator, presented a prototype of South Africa's first humanoid robot.