Generative AI
Why Walmart and OpenAI Are Shaking Up Their Agentic Shopping Deal
After OpenAI's Instant Checkout feature fell short, Walmart is instead embedding its Sparky chatbot directly into ChatGPT and Google Gemini. Since November, Walmart has let some ChatGPT users order a limited selection of products without ever leaving OpenAI's chatbot interface. Sales have been disappointing, a Walmart executive vice president exclusively tells WIRED. The results suggest that a future where chatbots and AI agents take over ecommerce is still a way off, if it ever materializes. Last year, OpenAI made a bet that it could boost revenue by charging a commission on purchases made through ChatGPT.
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Retail (1.00)
- Information Technology > Security & Privacy (0.47)
- Information Technology > Services > e-Commerce Services (0.35)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
The Download: The Pentagon's new AI plans, and next-gen nuclear reactors
Plus: The OpenClaw frenzy has led to a new Nvidia product. The Pentagon plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings, including for analyzing targets in Iran. But allowing them to train on and learn from classified data is a major new development that presents unique security risks. It would also bring AI firms closer to classified data than ever before. What do new nuclear reactors mean for waste?
- Asia > Middle East > Iran (0.26)
- Asia > China (0.06)
- South America > Colombia (0.05)
- (2 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.92)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.36)
The Pentagon is planning for AI companies to train on classified data, defense official says
The generative AI models used in classified environments can answer questions but don't currently learn from the data they see. The Pentagon is discussing plans to set up secure environments for generative AI companies to train military-specific versions of their models on classified data, MIT Technology Review has learned. AI models like Anthropic's Claude are already used to answer questions in classified settings; applications include analyzing targets in Iran. But allowing models to train on and learn from classified data would be a new development that presents unique security risks. It would mean sensitive intelligence like surveillance reports or battlefield assessments could become embedded in the models themselves, and it would bring AI firms into closer contact with classified data than before. Training versions of AI models on classified data is expected to make them more accurate and effective at certain tasks, according to a US defense official who spoke on background with MIT Technology Review.
- Asia > Middle East > Iran (0.25)
- North America > United States > Massachusetts (0.05)
- Information Technology (1.00)
- Government > Military (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.96)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.58)
DLSS 5 backlash: Nvidia's CEO says gamers are 'completely wrong'
Nvidia CEO Jensen Huang defends DLSS 5 against user backlash, calling critics "completely wrong" about the generative AI graphics technology's function. PCWorld notes the controversy stems from concerns that DLSS 5 applies an "AI skin" over game models rather than true enhancement. Huang clarifies that DLSS 5 offers developers controllability at the geometry level, describing it as real-time neural rendering that infuses photorealism into pixels. In just a day, Nvidia's DLSS 5 technology has become the hot-button issue for much of the PC and gaming world. Now Nvidia's chief executive has weighed in, claiming that everyone is "completely wrong" about the technology. At a question-and-answer session at Nvidia's own Game Technology Conference, Huang said: "As I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI." Of the controversy, he added: "They're completely wrong." Nvidia's DLSS 5 has sparked controversy because it essentially applies a generative AI filter to computer graphics. Nvidia describes DLSS 5 as a "real-time neural rendering model that infuses pixels with photoreal lighting and materials," and a "GPT moment for graphics -- blending hand-crafted rendering with generative AI".
- Leisure & Entertainment > Games > Computer Games (1.00)
- Information Technology > Hardware (1.00)
GPT-5.4 mini brings some of the smarts of OpenAI's latest model to ChatGPT Free and Go users
The new model offers performance improvements in reasoning, multimodal understanding and more. When OpenAI released GPT-5.4 at the start of March, the company said the new model was designed primarily for professional work like programming and data analysis. Now OpenAI is launching GPT-5.4 mini and nano, and while it is once again highlighting the usefulness of these new systems for tasks like coding, one of the new models is available to Free and Go users. What's more, that model, GPT-5.4 mini, even offers performance that approaches GPT-5.4 in a handful of areas.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.69)
The Human Skill That Eludes AI
Why can't language models write well? In a certain, strange way, generative AI peaked with OpenAI's GPT-2 seven years ago. Little known to anyone outside of tech circles, GPT-2 excelled at producing unexpected answers. "You could be like, 'Continue this story,' and GPT-2 would be like, '…,'" Katy Gero, a poet and computer scientist who has been experimenting with language models since 2017, told me. "The models won't do that anymore." AI leaders boast about their models' superhuman technical abilities.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.56)
The Download: OpenAI's US military deal, and Grok's CSAM lawsuit
Plus: China has approved the world's first commercial brain chip. Where OpenAI's technology could show up in Iran OpenAI has controversially agreed to give the Pentagon access to its AI. But where exactly could its tech show up, and which applications will its customers and employees tolerate? There's pressure to integrate it quickly with existing military tools. One defense official revealed it could even assist in selecting strike targets. OpenAI's partnership with Anduril, which makes drones and counter-drone technologies, adds another hint at what is to come.
- Asia > Middle East > Iran (0.26)
- Asia > China (0.26)
- South America > Brazil (0.05)
- (5 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.97)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.82)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.81)
Deep Generative Models with Learnable Knowledge Constraints
The broad set of deep generative models (DGMs) has achieved remarkable advances. However, it is often difficult to incorporate rich structured domain knowledge into end-to-end DGMs. Posterior regularization (PR) offers a principled framework for imposing structured constraints on probabilistic models, but it has limited applicability to the diverse DGMs, which can lack a Bayesian formulation or even explicit density evaluation. PR also requires constraints to be fully specified a priori, which is impractical or suboptimal for complex knowledge with learnable uncertain parts. In this paper, we establish a mathematical correspondence between PR and reinforcement learning (RL) and, based on this connection, extend PR to learn constraints as the extrinsic reward in RL. The resulting algorithm is model-agnostic, applying to any DGM, and flexibly adapts arbitrary constraints jointly with the model. Experiments on human image generation and templated sentence generation show that models with knowledge constraints learned by our algorithm improve substantially over base generative models.
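The core move in the abstract, treating a constraint as an extrinsic reward and learning its parameters alongside the generator, can be illustrated with a toy sketch. Everything here is an assumption for illustration, not the paper's actual algorithm: the "generator" is a categorical distribution trained with REINFORCE, and the constraint's learnable weights are fit with an inverse-RL-style update that scores real data above model samples.

```python
# Hypothetical sketch of "constraint as learnable reward" (all names and the
# toy setup are illustrative, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)
V = 8                         # toy "vocabulary" of discrete outputs

theta = np.zeros(V)           # generator logits
phi = rng.normal(size=V) * 0.01   # learnable constraint weights f(x; phi)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# "Real" data concentrated on symbols 2 and 3 (what the constraint should learn).
data = rng.choice(V, size=512, p=[0, 0, .5, .5, 0, 0, 0, 0])

lr = 0.1
for step in range(300):
    p = softmax(theta)
    x = rng.choice(V, size=64, p=p)      # sample from the generator
    reward = phi[x]                       # constraint score acts as reward
    baseline = reward.mean()              # variance-reduction baseline

    # REINFORCE update: grad log p(x) = onehot(x) - p for a categorical.
    grad_theta = np.zeros(V)
    for xi, r in zip(x, reward - baseline):
        grad_theta += (np.eye(V)[xi] - p) * r
    theta += lr * grad_theta / len(x)

    # Learn the constraint: raise scores on real data, lower them on samples.
    grad_phi = np.bincount(data, minlength=V) / len(data) \
             - np.bincount(x, minlength=V) / len(x)
    phi += lr * grad_phi

p = softmax(theta)
# The generator should now concentrate most of its mass on the data modes {2, 3}.
```

The constraint update vanishes once the sample distribution matches the data, so the learned reward stabilizes as the generator converges; this is the flavor of jointly adapting constraint and model that the abstract describes.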
Learning semantic similarity in a continuous space
We address the problem of learning semantic representations of questions to measure similarity between pairs as a continuous distance metric. Our work naturally extends Word Mover's Distance (WMD) [1] by representing text documents as normal distributions instead of bags of embedded words. Our learned metric measures the dissimilarity between two questions as the minimum amount of distance the intent (hidden representation) of one question needs to travel to match the intent of another question. We first learn to repeat and reformulate questions to infer intents as normal distributions with a deep generative model [2] (a variational autoencoder). Semantic similarity between pairs is then learned discriminatively as an optimal transport distance metric (Wasserstein-2) with our novel variational siamese framework. Among known models that read sentences individually, our proposed framework achieves competitive results on the Quora duplicate questions dataset. Our work sheds light on how deep generative models can approximate distributions (semantic representations) to effectively measure semantic similarity with meaningful distance metrics from information theory.
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.51)
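The Wasserstein-2 distance mentioned in the abstract has a closed form between Gaussians, which is what makes it attractive when intents are encoded as normal distributions. A minimal sketch, assuming diagonal-covariance Gaussians (the function name and toy vectors are illustrative; the paper learns the representations with a VAE, which this sketch omits): for diagonal Gaussians, W2² = ||μ₁ − μ₂||² + ||σ₁ − σ₂||².

```python
import numpy as np

def w2_diag_gaussian(mu1, sigma1, mu2, sigma2):
    """Closed-form 2-Wasserstein distance between diagonal Gaussians:
    W2^2 = ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2."""
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    return float(np.sqrt(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2)))

# Identical "intents" -> zero distance.
print(w2_diag_gaussian([0, 0], [1, 1], [0, 0], [1, 1]))          # 0.0
# Distance grows with both mean shift and spread mismatch: sqrt(1 + 1).
print(round(w2_diag_gaussian([1, 0], [1, 1], [0, 0], [2, 1]), 3))  # 1.414
```

Because the spread term penalizes mismatched uncertainty as well as mismatched means, two questions with the same average intent but very different ambiguity still register as dissimilar, which a plain embedding distance would miss.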
Bias and Generalization in Deep Generative Models: An Empirical Study
In high dimensional settings, density estimation algorithms rely crucially on their inductive bias. Despite recent empirical success, the inductive bias of deep generative models is not well understood. In this paper we propose a framework to systematically investigate bias and generalization in deep generative models of images by probing the learning algorithm with carefully designed training datasets. By measuring properties of the learned distribution, we are able to find interesting patterns of generalization. We verify that these patterns are consistent across datasets, common models and architectures.
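The probing protocol the abstract describes, designing a training set where a measurable attribute takes only known values and then inspecting that attribute in model samples, can be sketched as a simple measurement. This is a hedged illustration only: the attribute ("object count"), the mock sampler, and all names are stand-ins, since the paper's probes run on real image models.

```python
import numpy as np

rng = np.random.default_rng(1)

def probe_generalization(train_attr, sample_attr):
    """Compare an attribute's distribution in training data vs. model samples,
    reporting how much sample mass falls on values never seen in training --
    one simple signature of generalization (or inductive bias)."""
    train_attr = np.asarray(train_attr)
    sample_attr = np.asarray(sample_attr)
    seen = set(np.unique(train_attr).tolist())
    novel_mass = float(np.mean([v not in seen for v in sample_attr]))
    return {"train_mean": float(train_attr.mean()),
            "sample_mean": float(sample_attr.mean()),
            "novel_mass": novel_mass}

# Toy probe: training images contain exactly 2 or 4 objects...
train_counts = rng.choice([2, 4], size=1000)
# ...while a mock "trained model" emits integer counts spread around the mean.
model_counts = np.clip(np.round(rng.normal(3.0, 0.8, size=1000)), 0, None).astype(int)

report = probe_generalization(train_counts, model_counts)
```

Reading off `novel_mass` here shows how much probability the mock sampler assigns to counts absent from training; on a real model, the same measurement distinguishes memorizing the training modes from interpolating between them.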