Why Walmart and OpenAI Are Shaking Up Their Agentic Shopping Deal
After OpenAI's Instant Checkout feature fell short, Walmart is instead embedding its Sparky chatbot directly into ChatGPT and Google Gemini. Since November, Walmart has let some ChatGPT users order a limited selection of products without ever leaving OpenAI's chatbot interface. Sales have been disappointing, a Walmart executive vice president exclusively tells WIRED. The results suggest that a future where chatbots and AI agents take over ecommerce is still a way off, if it ever materializes. Last year, OpenAI made a bet that it could boost revenue by charging a commission on purchases made through ChatGPT.
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Europe > Slovakia (0.04)
- Europe > Czechia (0.04)
- Retail (1.00)
- Information Technology > Security & Privacy (0.47)
- Information Technology > Services > e-Commerce Services (0.35)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
How the (Tensor-) Brain uses Embeddings and Embodiment to Encode Senses and Decode Symbols
The tensor brain has been introduced as a computational model for perception and memory. We provide an overview of the tensor brain model, including recent developments. The tensor brain has two major layers: the representation layer and the index layer. The representation layer is a model for the subsymbolic global workspace from consciousness research. The state of the representation layer is the cognitive brain state. The index layer contains symbols for concepts, time instances, and predicates. In a bottom-up operation, the cognitive brain state is encoded by the index layer as symbolic labels. In a top-down operation, symbols are decoded and written to the representation layer. This feeds back to earlier processing layers as embodiment. The top-down operation became the basis for semantic memory. The embedding vector of a concept forms the connection weights between its index and the representation layer. The embedding is the signature or "DNA" of a concept, which is decoded by the brain when its index is activated. It integrates all that is known about a concept from different experiences, modalities, and symbolic decodings. Although the model is computational, it has been suggested that the tensor brain might be related to the actual operation of the brain. The sequential nature of symbol generation might have been a prerequisite for the generation of natural language. We describe an attention mechanism and discuss multitasking by multiplexing. We emphasize the inherent multimodality of the tensor brain. Finally, we discuss embedded and symbolic reasoning.
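The bottom-up/top-down loop described above can be given a toy numerical reading: the embedding matrix doubles as the connection weights between the index layer and the representation layer, encoding reads symbols off the brain state, and decoding writes an embedding back into it. This is a minimal sketch, not the paper's exact formulation; the matrix `A`, the softmax readout, and the additive write-back are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_concepts = 8, 4                  # toy sizes (assumed)

# Rows of A are concept embeddings; A also serves as the
# index-layer <-> representation-layer connection weights.
A = rng.normal(size=(n_concepts, dim))

def encode(q):
    """Bottom-up: soft activation of each concept index given brain state q."""
    scores = A @ q
    p = np.exp(scores - scores.max())
    return p / p.sum()

def decode(k, q, gain=1.0):
    """Top-down: write concept k's embedding back into the representation layer."""
    return q + gain * A[k]

q = rng.normal(size=dim)                # a cognitive brain state
p = encode(q)                           # distribution over symbolic labels
k = int(np.argmax(p))                   # generated symbol
q2 = decode(k, q)                       # decoding feeds back into the state
```

The sequential symbol generation the abstract mentions would correspond to iterating this encode-decode loop, emitting one symbol per step.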
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States (0.04)
- (2 more...)
- Research Report (0.64)
- Overview (0.54)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Generation (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding
Tresp, Volker, Sharifzadeh, Sahand, Li, Hang, Konopatzki, Dario, Ma, Yunpu
We present a unified computational theory of perception and memory. In our model, perception, episodic memory, and semantic memory are realized by different functional and operational modes of the oscillating interactions between an index layer and a representation layer in a bilayer tensor network (BTN). The memoryless semantic representation layer broadcasts information. In cognitive neuroscience, it would be the "mental canvas" or the "global workspace" and reflects the cognitive brain state. The symbolic index layer represents concepts and past episodes, whose semantic embeddings are implemented in the connection weights between both layers. In addition, we propose a working memory layer as a processing center and information buffer. Episodic and semantic memory realize memory-based reasoning, i.e., the recall of relevant past information to enrich perception, and are personalized to an agent's current state, as well as to an agent's unique memories. Episodic memory stores and retrieves past observations and provides provenance and context. Recent episodic memory enriches perception by the retrieval of perceptual experiences, which provide the agent with a sense of the here and now: to understand its own state, and the world's semantic state in general, the agent needs to know what happened recently, in recent scenes, and on recently perceived entities. Remote episodic memory retrieves relevant past experiences, contributes to our conscious self, and, together with semantic memory, to a large degree defines who we are as individuals.
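The memory-based reasoning described above — recalling relevant past states to enrich current perception — can be sketched minimally if we assume episodes are stored brain-state vectors retrieved by cosine similarity and blended back into the current state. The similarity measure and the blending coefficient `alpha` are illustrative assumptions, not the BTN's actual retrieval mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 8

# Episodic memory: each stored episode is a past cognitive brain state
# tagged with a time index (providing provenance and context).
episodes = [(t, rng.normal(size=dim)) for t in range(5)]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recall(q, top_k=2):
    """Retrieve the stored episodes most similar to the current brain state q."""
    ranked = sorted(episodes, key=lambda e: cosine(q, e[1]), reverse=True)
    return ranked[:top_k]

def enrich(q, alpha=0.5):
    """Memory-based reasoning: blend recalled past states into perception."""
    recalled = recall(q)
    mem = np.mean([state for _, state in recalled], axis=0)
    return (1 - alpha) * q + alpha * mem

q = rng.normal(size=dim)        # current perceptual brain state
q_enriched = enrich(q)          # state enriched by episodic recall
```

Restricting `recall` to recent time indices would model the "recent episodic memory" mode; searching all of `episodes` corresponds to remote episodic memory.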
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.05)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York (0.04)
- (4 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Health & Medicine > Consumer Health (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Text Processing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- (3 more...)
The Casual Marvel Fan's Guide to WandaVision Episode 5
This article contains spoilers for the first five episodes of WandaVision. Let's start with the biggest question. What was the deal with "Pietro" at the end of the episode? That was Evan Peters reprising his role as the late Pietro Maximoff, Wanda's brother, but--and here's the twist--it's not the Pietro Maximoff we've seen in the Marvel Cinematic Universe. The MCU's Pietro, played by Aaron Taylor-Johnson, died in Avengers: Age of Ultron.
- Europe (0.05)
- Africa > Nigeria > Lagos State > Lagos (0.05)
- Media > Television (0.85)
- Media > Film (0.69)
- Leisure & Entertainment > Sports > Golf (0.30)
Hey, Sparky: Confused by data science governance and security in the cloud? Databricks promises to ease machine learning pipelines
Databricks, the company behind analytics tool Apache Spark, is introducing new features to ease the management of security, governance and administration of its machine learning platform. Security and data access rights have been fragmented between on-premises data, cloud instances and data platforms, Databricks told us. And the new approach allows tech teams to manage policies from a single environment and have them replicated in the cloud, it added. "Cloud companies have inherent native security controls, but it can be a very confusing journey for these customers moving from an on-premise[s] world where they have their own governance in place, controlling who has access to what, and then they move this up to the cloud and suddenly all the rules are different." The idea behind the new features is to allow users to employ the controls they are familiar with, for example, Active Directory to control data policies in Databricks.
- Information Technology > Security & Privacy (0.95)
- Information Technology > Services (0.72)
- Information Technology > Data Science (1.00)
- Information Technology > Cloud Computing (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (0.98)
Facial recognition experts perform the best with an AI sidekick
Scientists are working on a kickass new twist to the classic buddy cop movie genre. Get this: cyberterrorist Marcus Hurricane is going to walk free unless police detective Rick Danger can place him at the scene of the crime. But all he has to go on are some grainy security camera images, and he can't quite make out Hurricane's signature badass face scars. Enter: detective Danger's trusty AI cyborg sidekick, Sparky. Together, they have what it takes to save the day. Researchers really did recently determine that, on difficult facial recognition tasks, a trained professional teamed up with an AI sidekick outperforms a team of two human pros, or even an AI algorithm working on its own.
Individual and Domain Adaptation in Sentence Planning for Dialogue
Walker, M. A., Stent, A., Mairesse, F., Prasad, R.
One of the biggest challenges in the development and deployment of spoken dialogue systems is the design of the spoken language generation module. This challenge arises from the need for the generator to adapt to many features of the dialogue domain, user population, and dialogue context. A promising approach is trainable generation, which uses general-purpose linguistic knowledge that is automatically adapted to the features of interest, such as the application domain, individual user, or user group. In this paper we present and evaluate a trainable sentence planner for providing restaurant information in the MATCH dialogue system. We show that trainable sentence planning can produce complex information presentations whose quality is comparable to the output of a template-based generator tuned to this domain. We also show that our method easily supports adapting the sentence planner to individuals, and that the individualized sentence planners generally perform better than models trained and tested on a population of individuals. Previous work has documented and utilized individual preferences for content selection, but to our knowledge, these results provide the first demonstration of individual preferences for sentence planning operations, affecting the content order, discourse structure and sentence structure of system responses. Finally, we evaluate the contribution of different feature sets, and show that, in our application, n-gram features often do as well as features based on higher-level linguistic representations.
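The trainable ranking idea can be sketched with word-bigram features and a perceptron-style update toward a preferred plan. This is a toy stand-in: the paper's planner operates on sentence-plan trees with a learned ranker rather than on raw bigrams, and the restaurant facts, feature set, and update rule here are invented for illustration.

```python
from collections import Counter
from itertools import permutations

# Toy content items for a restaurant recommendation (invented example data).
facts = ["Chanpen Thai has good service", "it has good food", "it is inexpensive"]

def bigram_features(plan):
    """N-gram features: word bigrams over the realized text of a candidate plan."""
    words = " . ".join(plan).split()
    return Counter(zip(words, words[1:]))

def score(plan, weights):
    """Linear ranking score of a plan under the current feature weights."""
    feats = bigram_features(plan)
    return sum(weights.get(bg, 0.0) * c for bg, c in feats.items())

def train(preferred, dispreferred, weights, lr=1.0):
    """Perceptron-style update: move weights toward the plan a user preferred."""
    for bg, c in bigram_features(preferred).items():
        weights[bg] = weights.get(bg, 0.0) + lr * c
    for bg, c in bigram_features(dispreferred).items():
        weights[bg] = weights.get(bg, 0.0) - lr * c
    return weights

# Candidate plans = different content orderings; the trained ranker picks one.
candidates = [list(p) for p in permutations(facts)]
weights = train(candidates[0], candidates[-1], {})
best = max(candidates, key=lambda p: score(p, weights))
```

Adapting the planner to an individual, in this sketch, just means running `train` on that individual's preference judgments, yielding a personal weight vector — the same mechanism the abstract describes at the level of sentence-planning operations.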
- North America > United States > Pennsylvania > Philadelphia County > Philadelphia (0.14)
- Asia > India > Karnataka > Bengaluru (0.04)
- North America > United States > New York > Suffolk County > Stony Brook (0.04)
- (4 more...)
- Consumer Products & Services > Restaurants (1.00)
- Health & Medicine (0.92)