Large Language Model
Justice Department Says Anthropic Can't Be Trusted With Warfighting Systems
In response to Anthropic's lawsuit, the government said it lawfully penalized the company for trying to limit how its Claude AI models could be used by the military. The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic's First Amendment rights by designating the AI developer a supply-chain risk and predicted that the company's lawsuit against the government will fail. "The First Amendment is not a license to unilaterally impose contract terms on the government, and Anthropic cites nothing to support such a radical conclusion," US Department of Justice attorneys wrote. The response was filed in a federal court in San Francisco, one of two venues where Anthropic is challenging the Pentagon's decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic argues the Trump administration overstepped its authority in applying the label and preventing the company's technologies from being used inside the department.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Asia > Middle East > Iran (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- (3 more...)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.50)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.31)
The Pentagon is planning for AI companies to train on classified data, defense official says
The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background.
- Asia > Middle East > Iran (0.25)
- North America > United States > Massachusetts (0.05)
- Information Technology (1.00)
- Government > Military (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.58)
GPT-5.4 mini brings some of the smarts of OpenAI's latest model to ChatGPT Free and Go users
The new model offers performance improvements in reasoning, multimodal understanding and more. When OpenAI released GPT-5.4 at the start of March, the company said the new model was designed primarily for professional work like programming and data analysis. Now OpenAI is launching GPT-5.4 mini and nano, and while it is once again highlighting the usefulness of these new systems for tasks like coding, one of the new models is available to Free and Go users. What's more, that model, GPT-5.4 mini, even offers performance that approaches GPT-5.4 in a handful of areas.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.69)
The Human Skill That Eludes AI
Why can't language models write well? In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. You could give it a prompt like "Continue this story" and its response would surprise you, Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore." AI leaders boast about their models' superhuman technical abilities.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.56)
The Download: OpenAI's US military deal, and Grok's CSAM lawsuit
Plus: China has approved the world's first commercial brain chip. Where OpenAI's technology could show up in Iran OpenAI has controversially agreed to give the Pentagon access to its AI. But where exactly could its tech show up, and which applications will its customers and employees tolerate? There's pressure to integrate it quickly with existing military tools. One defense official revealed it could even assist in selecting strike targets. OpenAI's partnership with Anduril, which makes drones and counter-drone technologies, adds another hint at what is to come.
- Asia > Middle East > Iran (0.26)
- Asia > China (0.26)
- South America > Brazil (0.05)
- (5 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.97)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.82)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.81)
AI Confessions: A Chatbot Ended My Marriage
Your stories about how AI is impacting your mental health, decision-making, and relationships.
- Marketing (0.82)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.37)
UK must learn lessons from AI race and retain its quantum computing talent, says minister
In quantum computers, the information is contained in qubits that can work through vast numbers of different outcomes, which is not possible with classical computers. The UK will not let quantum computing talent slip through its fingers and must learn lessons from US dominance of the AI race, the technology secretary has said, as the government announced a £1bn quantum funding pledge. Liz Kendall said the government hoped to retain homegrown quantum startups, engineers and researchers rather than lose them to competing countries, with the US stealing a march on its western rivals in AI. "I do look at what's happened on AI," said Kendall. "I do think we need to learn the lessons and make sure we give our brilliant scientists, spinouts and startups the ability to stay here and make it happen. And that requires a government that is bold and ambitious and confident in these technologies of the future."
- Europe > United Kingdom (0.16)
- Europe > Ukraine (0.07)
- Europe > Spain (0.06)
- (2 more...)
- Leisure & Entertainment > Sports (0.72)
- Government > Regional Government (0.52)
- Information Technology > Hardware (1.00)
- Information Technology > Communications > Social Media (0.75)
- Information Technology > Communications > Mobile (0.50)
- (2 more...)
U.S. court rules against South Korean gaming firm over AI-hatched takeover plan
WILMINGTON, DELAWARE - A Delaware judge on Monday ordered that South Korean game developer Krafton reinstate the head of one of its video game studios, ruling he had been improperly removed as part of a takeover plan hatched by ChatGPT. Krafton CEO Changhan Kim had largely followed the advice of the artificial intelligence tool ChatGPT during a $250 million dispute with the leaders of the Subnautica game maker Unknown Worlds Entertainment, which Krafton had acquired, according to the ruling by Vice Chancellor Lori Will of the Court of Chancery in Delaware. Businesses and governments are scrambling for new ways to use AI, and the technology has been blamed for mass layoffs, fears of autonomous weapons and concerns about civil rights. Companies caught in takeover-related legal battles often spend millions of dollars on teams of attorneys and advisers from top-flight Wall Street firms.
- Asia > South Korea (0.94)
- Asia > Middle East > Iran (0.53)
- Asia > Taiwan (0.42)
- (7 more...)
- Leisure & Entertainment > Games > Computer Games (1.00)
- Law (1.00)
- Information Technology > Communications > Social Media (0.78)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.68)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.68)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.68)
Zero-Shot Transfer with Deictic Object-Oriented Representation in Reinforcement Learning
Object-oriented representations in reinforcement learning have shown promise in transfer learning, with previous research introducing a propositional object-oriented framework that has provably efficient learning bounds with respect to sample complexity. However, this framework has limitations in terms of the classes of tasks it can efficiently learn. In this paper we introduce a novel deictic object-oriented framework that has provably efficient learning bounds and can solve a broader range of tasks. Additionally, we show that this framework is capable of zero-shot transfer of transition dynamics across tasks and demonstrate this empirically for the Taxi and Sokoban domains.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.33)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.33)
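The abstract above contrasts propositional object-oriented representations (facts about named, grounded objects) with deictic ones (facts expressed relative to the agent). The paper's actual framework is not reproduced here, but the core intuition can be sketched in a toy Taxi-like gridworld; all names, coordinates, and predicate choices below are illustrative assumptions, not the authors' definitions.

```python
# Toy sketch: propositional vs. deictic facts in a Taxi-like gridworld.
# Deictic facts refer to objects by their relation to the agent
# ("the passenger at my location"), not by absolute position.
from dataclasses import dataclass

@dataclass(frozen=True)
class Obj:
    kind: str  # e.g. "taxi", "passenger", "wall"
    x: int
    y: int

def propositional(state):
    # Propositional OO view: grounded facts over pairs of named objects.
    return {(a.kind, b.kind, "same_cell")
            for a in state for b in state
            if a is not b and (a.x, a.y) == (b.x, b.y)}

def deictic(state):
    # Deictic view: every fact is anchored to the agent (the taxi).
    taxi = next(o for o in state if o.kind == "taxi")
    facts = set()
    for o in state:
        if o is taxi:
            continue
        if (o.x, o.y) == (taxi.x, taxi.y):
            facts.add(("at_my_location", o.kind))
        if (o.x, o.y) == (taxi.x, taxi.y - 1):  # one cell "north"
            facts.add(("north_of_me", o.kind))
    return facts

state = [Obj("taxi", 2, 2), Obj("passenger", 2, 2), Obj("wall", 2, 1)]
print(sorted(deictic(state)))
# [('at_my_location', 'passenger'), ('north_of_me', 'wall')]
```

Because the deictic facts mention no absolute coordinates, a transition rule learned over them (e.g. "moving north fails when a wall is north of me") applies unchanged in a larger grid, which is the intuition behind the zero-shot transfer of transition dynamics the abstract claims.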
AI firm Anthropic seeks weapons expert to stop users from 'misuse'
The US artificial intelligence (AI) firm Anthropic is looking to hire a chemical weapons and high-yield explosives expert to try to prevent catastrophic misuse of its software. In other words, it fears that its AI tools might tell someone how to make chemical or radioactive weapons, and wants an expert to ensure its guardrails are sufficiently robust. In the LinkedIn recruitment post, the firm says applicants should have a minimum of five years' experience in chemical weapons and/or explosives defence, as well as knowledge of radiological dispersal devices - also known as dirty bombs. The firm told the BBC the role was similar to jobs in other sensitive areas that it has already created. Anthropic is not the only AI firm adopting this strategy.
- North America > United States (1.00)
- North America > Central America (0.15)
- Oceania > Australia (0.06)
- (16 more...)
- Leisure & Entertainment (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)