AI capability
Pentagon says US military to be an 'AI-first' fighting force
The US military plans to further increase its use of artificial intelligence (AI) after the Pentagon agreed new and expanded contracts with some of the biggest names in technology. Under eight agreements with Google, OpenAI, Amazon, Microsoft, SpaceX, Oracle, Nvidia and the start-up Reflection, the Pentagon said AI technology would now be used for "any lawful operational use". The agreements "accelerate the transformation of the US military as an AI-first fighting force", the Pentagon said. Conspicuous by its absence is Anthropic, which has said it is concerned about how the Pentagon could use its tools in warfare and domestically. The firm is now suing the government over the alleged retaliation it faced after refusing to accept "any lawful use" language in its own contract.
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
2026 AI Index Report released
The ninth edition of the Artificial Intelligence Index Report was published on 13 April 2026. Published annually, the report aims to provide readers with accurate, rigorously validated, and globally sourced data offering insights into the progress of AI and its potential impact on society. The 2026 AI Index Report comprises nine chapters, covering: research and development, technical performance, responsible AI, economy, science, medicine, education, policy and governance, and public opinion. AI capability is accelerating and reaching more people than ever. Model performance continues to improve against benchmarks, and 80% of university students now use generative AI.
- North America > United States (0.12)
- Asia > China (0.06)
- Asia > South Korea (0.05)
- Information Technology > Artificial Intelligence > Natural Language (0.72)
- Information Technology > Artificial Intelligence > Machine Learning (0.71)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (0.62)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (0.52)
How to quickly create professional presentations with AI
Communication is a central part of any business or creative endeavour. Whether it's sharing information between colleagues or highlighting the advantages of new products and services to customers, getting the messaging right is essential to success. Traditionally, this could involve hours of painstaking work, preparing documents and then replicating their data in slides for presentations.
- Information Technology > Security & Privacy (0.78)
- Leisure & Entertainment > Games > Computer Games (0.59)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Hardware (0.93)
The Loss of Control Playbook: Degrees, Dynamics, and Preparedness
Stix, Charlotte, Hallensleben, Annika, Ortega, Alejandro, Pistillo, Matteo
This research report addresses the absence of an actionable definition for Loss of Control (LoC) in AI systems by developing a novel taxonomy and preparedness framework. Despite increasing policy and research attention, existing LoC definitions vary significantly in scope and timeline, hindering effective LoC assessment and mitigation. To address this issue, we draw from an extensive literature review and propose a graded LoC taxonomy, based on the metrics of severity and persistence, that distinguishes between Deviation, Bounded LoC, and Strict LoC. We model pathways toward a societal state of vulnerability in which sufficiently advanced AI systems have acquired or could acquire the means to cause Bounded or Strict LoC once a catalyst, either misalignment or pure malfunction, materializes. We argue that this state becomes increasingly likely over time, absent strategic intervention, and propose a strategy to avoid reaching a state of vulnerability. Rather than focusing solely on intervening on AI capabilities and propensities potentially relevant for LoC or on preventing potential catalysts, we introduce a complementary framework that emphasizes three extrinsic factors: Deployment context, Affordances, and Permissions (the DAP framework). Compared to work on intrinsic factors and catalysts, this framework has the unfair advantage of being actionable today. Finally, we put forward a plan to maintain preparedness and prevent the occurrence of LoC outcomes should a state of societal vulnerability be reached, focusing on governance measures (threat modeling, deployment policies, emergency response) and technical controls (pre-deployment testing, control measures, monitoring) that could maintain a condition of perennial suspension.
- Asia (1.00)
- Europe > United Kingdom (0.67)
- North America > United States > California (0.46)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.94)
- Information Technology > Artificial Intelligence > Applied AI (0.92)
Delivering securely on data and AI strategy
Most organizations feel the imperative to keep pace with continuing advances in AI capabilities, as highlighted in a recent MIT Technology Review Insights report. That clearly has security implications, particularly as organizations navigate a surge in the volume, velocity, and variety of security data. This explosion of data, coupled with fragmented toolchains, is making it increasingly difficult for security and data teams to maintain a proactive and unified security posture. Data and AI teams must move rapidly to deliver the desired business results, but they must do so without compromising security and governance. As they deploy more intelligent and powerful AI capabilities, proactive threat detection and response against the expanded attack surface, insider threats, and supply chain vulnerabilities must remain paramount. "I'm passionate about cybersecurity not slowing us down," says Melody Hildebrandt, chief technology officer at Fox Corporation, "but I also own cybersecurity strategy."
- North America > United States > Massachusetts (0.05)
- Asia > China (0.05)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.35)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.33)
An AI Capability Threshold for Rent-Funded Universal Basic Income in an AI-Automated Economy
We derive the first closed-form condition under which artificial intelligence (AI) capital profits could sustainably finance a universal basic income (UBI) without relying on new taxation or the creation of new jobs. In a Solow-Zeira task-automation economy with a CES aggregator $\sigma < 1$, we introduce an AI capability parameter that scales the productivity of automatable tasks and obtain a tractable expression for the AI capability threshold -- the minimum productivity of AI relative to pre-AI automation required for a balanced transfer. Using current U.S. economic parameters, we find that even in the conservative scenario where no new tasks or jobs emerge, AI systems would need to reach only 5-7 times today's automation productivity to fund an 11%-of-GDP UBI. Our analysis also reveals specific policy levers: raising the public revenue share (e.g. profit taxation) of AI capital from the current 15% to about 33% halves the required AI capability threshold to about 3 times existing automation productivity, but gains diminish beyond a 50% public revenue share, especially if regulatory costs increase. Market structure also strongly affects outcomes: monopolistic or concentrated oligopolistic markets reduce the threshold by increasing economic rents, whereas heightened competition significantly raises it. These results therefore offer a rigorous benchmark for assessing when advancing AI capabilities might sustainably finance social transfers in an increasingly automated economy.
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Economy (1.00)
- Law > Taxation Law (0.68)
- Government > Tax (0.68)
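The abstract's balanced-transfer arithmetic can be sketched directly: a UBI of fraction u of GDP funded entirely from the public revenue share tau of AI capital profits requires profits of at least (u / tau) of GDP. This is a minimal illustration of the funding identity only, not the paper's closed-form capability threshold; the function name and parameter values are assumptions for illustration.

```python
def required_ai_profit_share(ubi_share_of_gdp: float, public_revenue_share: float) -> float:
    """Minimum AI capital profits (as a share of GDP) for a balanced UBI transfer.

    Balanced transfer: public_revenue_share * profits = ubi_share_of_gdp * GDP,
    so profits / GDP = ubi_share_of_gdp / public_revenue_share.
    """
    return ubi_share_of_gdp / public_revenue_share

# An 11%-of-GDP UBI at the current ~15% public revenue share vs. a ~33% share:
base = required_ai_profit_share(0.11, 0.15)    # ~0.73 of GDP in AI profits
higher = required_ai_profit_share(0.11, 0.33)  # ~0.33 of GDP in AI profits

print(f"profits needed at 15% share: {base:.2f} of GDP")
print(f"profits needed at 33% share: {higher:.2f} of GDP")
```

Roughly halving the required profits in this toy identity mirrors the abstract's finding that raising the public revenue share from 15% to 33% halves the required AI capability threshold.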
The AI Risk Spectrum: From Dangerous Capabilities to Existential Threats
Grey, Markov, Segerie, Charbel-Raphaël
As AI systems become more capable, integrated, and widespread, understanding the associated risks becomes increasingly important. This paper maps the full spectrum of AI risks, from current harms affecting individual users to existential threats that could endanger humanity's survival. We organize these risks into three main causal categories. Misuse risks occur when people deliberately use AI for harmful purposes - creating bioweapons, launching cyberattacks, mounting adversarial attacks on AI systems, or deploying lethal autonomous weapons. Misalignment risks arise when AI systems pursue outcomes that conflict with human values, irrespective of developer intentions; these include risks arising through specification gaming (reward hacking), scheming, and power-seeking tendencies in pursuit of long-term strategic goals. Systemic risks arise when AI integrates into complex social systems in ways that gradually undermine human agency - concentrating power, accelerating political and economic disempowerment, creating overdependence that leads to human enfeeblement, or irreversibly locking in current values and curtailing future moral progress. Beyond these core categories, we identify risk amplifiers - competitive pressures, accidents, corporate indifference, and coordination failures - that make all risks more likely and severe. Throughout, we connect today's existing risks and empirically observable AI behaviors to plausible future outcomes, demonstrating how existing trends could escalate to catastrophic outcomes. Our goal is to help readers understand the complete landscape of AI risks. Good futures are possible, but they don't happen by default. Navigating these challenges will require unprecedented coordination, but an extraordinary future awaits if we do.
- North America > United States (1.00)
- Asia > China (0.14)
- Europe > Ukraine (0.14)
- (11 more...)
- Media (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- (12 more...)
The power of AI at your fingertips
The demands of modern life can make it hard to stay on top of things. Just when you've made time to work on that creative project, suddenly there are emails that need dealing with, tasks to manage, or scheduling that requires immediate attention, all of which makes it hard to remain productive and inspired. With the latest Intel Core Ultra powered Windows PCs with AI capabilities, there is a wealth of features purpose-built to streamline your workload and free up time for the things that are most important to you. AI might feel like a buzzword that's plastered over everything at the moment, and in some cases it does seem to offer little more than basic party tricks. But Microsoft's Copilot, powered by Intel Core Ultra processors, is an exception: it offers plenty of very helpful tools and features that can speed up your workflow, maximise your time and help you stay focussed.
Technical Requirements for Halting Dangerous AI Activities
Barnett, Peter, Scher, Aaron, Abecassis, David
The rapid development of AI systems poses unprecedented risks, including loss of control, misuse, geopolitical instability, and concentration of power. To navigate these risks and avoid worst-case outcomes, governments may proactively establish the capability for a coordinated halt on dangerous AI development and deployment. In this paper, we outline key technical interventions that could allow for a coordinated halt on dangerous AI activities. We discuss how these interventions may contribute to restricting various dangerous AI activities, and show how these interventions can form the technical foundation for potential AI governance plans.
- North America > United States > California > Los Angeles County > Santa Monica (0.04)
- North America > United States > California > Santa Clara County > Stanford (0.04)
- North America > Canada (0.04)
- Information Technology > Security & Privacy (1.00)
- Government (0.94)
Anchoring AI Capabilities in Market Valuations: The Capability Realization Rate Model and Valuation Misalignment Risk
Fang, Xinmin, Tao, Lingfeng, Li, Zhengxiong
Recent breakthroughs in artificial intelligence (AI) have triggered surges in market valuations for AI-related companies, often outpacing the realization of underlying capabilities. We examine the anchoring effect of AI capabilities on equity valuations and propose a Capability Realization Rate (CRR) model to quantify the gap between AI potential and realized performance. Using data from the 2023-2025 generative AI boom, we analyze sector-level sensitivity and conduct case studies (OpenAI, Adobe, NVIDIA, Meta, Microsoft, Goldman Sachs) to illustrate patterns of valuation premium and misalignment. Our findings indicate that AI-native firms commanded outsized valuation premiums anchored to future potential, while traditional companies integrating AI experienced re-ratings subject to proof of tangible returns. We argue that CRR can help identify valuation misalignment risk, where market prices diverge from realized AI-driven value. We conclude with policy recommendations to improve transparency, mitigate speculative bubbles, and align AI innovation with sustainable market value.
- Financial News (0.93)
- Research Report (0.70)
- Information Technology > Services (1.00)
- Banking & Finance > Trading (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.59)
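The CRR idea in the abstract above reduces to a simple ratio: the fraction of the AI potential priced into a valuation that has actually materialized, with misalignment risk proportional to the unrealized remainder. The sketch below is a hypothetical illustration of that arithmetic only; the function names and dollar figures are assumptions, not the paper's estimation procedure.

```python
def capability_realization_rate(realized_value: float, anchored_potential: float) -> float:
    """Fraction of the AI potential priced into a valuation that has materialized."""
    if anchored_potential <= 0:
        raise ValueError("anchored potential must be positive")
    return realized_value / anchored_potential

def unrealized_premium(ai_premium: float, crr: float) -> float:
    """Portion of the AI valuation premium not yet backed by realized value."""
    return ai_premium * (1.0 - crr)

# A firm pricing in $50B of AI-driven value, of which $15B has materialized:
crr = capability_realization_rate(15e9, 50e9)
at_risk = unrealized_premium(50e9, crr)
print(f"CRR = {crr:.2f}, unrealized premium = ${at_risk / 1e9:.0f}B")
```

In this toy reading, a low CRR flags a valuation anchored largely to unproven potential, which is the misalignment pattern the abstract attributes to AI-native firms relative to traditional companies re-rated on tangible returns.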