- Asia > Middle East > Jordan (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- (2 more...)
Fast Task Planning with Neuro-Symbolic Relaxation
Du, Qiwei, Li, Bowen, Du, Yi, Su, Shaoshu, Fu, Taimeng, Zhan, Zitong, Zhao, Zhipeng, Wang, Chen
Real-world task planning requires long-horizon reasoning over large sets of entities with complex relationships and attributes, leading to a combinatorial explosion for classical symbolic planners. To prune the search space, recent methods prioritize searching on a simplified task containing only a few "important" entities predicted by a neural network. However, such a simple neuro-symbolic (NeSy) integration risks omitting critical entities and wasting resources on unsolvable simplified tasks. To enable Fast and reliable planning, we introduce a NeSy relaxation strategy (Flax), combining neural importance prediction with symbolic expansion. Specifically, we first learn a graph neural network to predict entity importance to create a simplified task and solve it with a symbolic planner. Then, we solve a rule-relaxed task to obtain a quick rough plan, and reintegrate all referenced entities into the simplified task to recover any overlooked but essential elements. Finally, we apply complementary rules to refine the updated task, keeping it both reliable and compact. Extensive experiments are conducted on both synthetic and real-world maze navigation benchmarks where a robot must traverse a maze and interact with movable objects. The results show that Flax boosts the average success rate by 20.82% and cuts mean wall-clock planning time by 17.65% compared with the state-of-the-art NeSy baseline. We expect that Flax offers a practical path toward fast, scalable, long-horizon task planning in complex environments.
- North America > United States > New York > Erie County > Buffalo (0.04)
- North America > United States > Massachusetts > Middlesex County > Lowell (0.04)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Information Technology > Artificial Intelligence > Robots > Robot Planning & Action (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Search (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Planning & Scheduling (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
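The abstract describes a three-stage loop: neural importance pruning, a rule-relaxed symbolic pass to recover overlooked entities, and refinement of the updated task. A toy, self-contained sketch of that control flow is below; the dict-based "task", the 0.5 threshold, and the stand-in solver are all illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of a Flax-style NeSy relaxation loop. All structures here
# (the dict task, the fixed scores, the set-based "planner") are
# hypothetical stand-ins for the GNN and symbolic planner in the paper.

def neural_importance(task):
    """Stand-in for the GNN: returns a precomputed score per entity."""
    return task["scores"]

def symbolic_solve(entities, goal_entities):
    """Stand-in symbolic planner: succeeds only when every entity the
    goal needs is present, returning the entities the 'plan' touches."""
    if goal_entities <= entities:
        return sorted(goal_entities)
    return None

def flax_plan(task, threshold=0.5):
    # 1. Neural step: keep only entities the network scores as important.
    scores = neural_importance(task)
    important = {e for e, s in scores.items() if s >= threshold}
    plan = symbolic_solve(important, task["goal"])
    if plan is not None:
        return plan

    # 2. Symbolic expansion: a rule-relaxed pass yields a quick rough plan;
    #    here we model it as simply referencing every goal entity, and we
    #    reintegrate those references into the simplified task.
    rough_refs = task["goal"]
    updated = important | rough_refs

    # 3. Refine the updated task (nothing to prune in this toy) and re-plan.
    return symbolic_solve(updated, task["goal"])

task = {
    "scores": {"key": 0.9, "door": 0.8, "crate": 0.1},  # GNN misses the crate
    "goal": {"key", "door", "crate"},
}
print(flax_plan(task))  # relaxation recovers the overlooked crate
```

The point of the sketch is the fallback structure: the simplified task fails because the low-scored crate was pruned, and the relaxed pass reintroduces it before re-planning.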
AI-generated voice of former narrator Jim Fagan to be featured next NBA season, NBC Sports says
NBA fans' viewing experience will look different later this year, but there will also be a touch of nostalgia. Last summer, Comcast/NBC Universal closed an 11-year agreement for the rights to regular and postseason NBA and WNBA games. Those games will be presented across the network's linear and streaming platforms beginning with the 2025-26 season.
- North America > United States > California > Los Angeles County > Los Angeles (0.25)
- North America > United States > West Virginia (0.05)
- North America > United States > Nevada > Clark County > Las Vegas (0.05)
Reviews: Correlation in Extensive-Form Games: Saddle-Point Formulation and Benchmarks
This was an overall nice and fun paper to read. The problem was well motivated and the background well covered. Some important definitions were possibly skipped (such as sequence form), but given the space constraints I feel the authors chose the right level of abstraction for the presentation (leaving the right amount of detail for the appendix). One (fairly minor) concern is that the scope is a bit narrow. There is a small community working on these notions of equilibria, and addressing only the 2-player case without chance feels a bit restrictive, given previous simple algorithms that solve the n-player setting, such as Dudik & Gordon.
Vision Language Models See What You Want but not What You See
Gao, Qingying, Li, Yijiang, Lyu, Haiyun, Sun, Haoran, Luo, Dezhi, Deng, Hokin
Knowing others' intentions and taking others' perspectives are two core components of human intelligence, typically considered instantiations of theory of mind. Endowing machines with these abilities is an important step towards building human-level artificial intelligence. Here we investigate intentionality understanding and perspective-taking in Vision Language Models and, to that end, have created the IntentBench and PerspectBench datasets, which together contain over 400 cognitive experiments grounded in real-world scenarios and classic cognitive tasks. Surprisingly, we find that VLMs achieve high performance on intentionality understanding but lower performance on perspective-taking across our two datasets. This challenges the common belief in the cognitive science literature that perspective-taking in the corresponding modality is necessary for intentionality understanding.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > North Carolina (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (4 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine > Therapeutic Area > Neurology (0.47)
- Education (0.46)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.73)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.73)
This Popular Theory About Why Democrats Lost Has Some Glaring Holes
What's wrong with these darn institutions, and why does nobody trust them? That's the question lurking behind every postmortem about why Democrats lost the 2024 presidential election and what they could do to start winning future ones. The thinking goes like this: Donald Trump, as a political figure, represents blowing up the status quo; Trump won and the incumbent vice president lost; ergo, a majority of voters are unhappy with the people and groups responsible for the status quo. But the evidence that residents of the United States don't trust their institutions goes beyond election results.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > California (0.04)
- Europe > Russia (0.04)
- Asia > Russia (0.04)
Why do dogs lick humans? It could be a sign of affection.
Between humans, a kiss on the mouth or cheek is a clear signal of warm feelings. There's no single definitive answer, though canine cognition experts have theories. "If we want to distill it down to one thing, it's communication," says Ellen Furlong, an associate professor of psychology and neuroscience at Transylvania University in Kentucky, where she studies dog behavior. Dogs are highly social and well-attuned to humans. If a pup is interacting with you, it's often with purpose.
Interpreting the Learned Model in MuZero Planning
Guei, Hung, Ju, Yan-Ru, Chen, Wei-Yu, Wu, Ti-Rong
MuZero has achieved superhuman performance in various games by using a dynamics network to predict environment dynamics for planning, without relying on simulators. However, the latent states learned by the dynamics network make its planning process opaque. This paper aims to demystify MuZero's model by interpreting the learned latent states. We incorporate observation reconstruction and state consistency into MuZero training and conduct an in-depth analysis to evaluate latent states across two board games: 9x9 Go and Outer-Open Gomoku, and three Atari games: Breakout, Ms. Pacman, and Pong. Our findings reveal that while the dynamics network becomes less accurate over longer simulations, MuZero still performs effectively by using planning to correct errors. Our experiments also show that the dynamics network learns better latent states in board games than in Atari games. These insights contribute to a better understanding of MuZero and offer directions for future research to improve the playing performance, robustness, and interpretability of the MuZero algorithm.
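The abstract's two training additions, observation reconstruction and state consistency, can be expressed as auxiliary losses on MuZero's latent states. A minimal 1-D sketch is below; the scalar "networks" and their fixed weights are placeholders chosen so the two losses are well defined, not the paper's architecture.

```python
# Hypothetical 1-D sketch of the two auxiliary objectives described in the
# abstract. encode/dynamics/decode stand in for MuZero's representation,
# dynamics, and (added) decoder networks; the weights are arbitrary.

def encode(obs, w=0.5):
    """Representation network h: observation -> latent state."""
    return w * obs

def dynamics(latent, action, w=1.0):
    """Dynamics network g: (latent, action) -> predicted next latent."""
    return w * (latent + action)

def decode(latent, w=2.0):
    """Added decoder: latent -> reconstructed observation."""
    return w * latent

def aux_losses(obs, action, next_obs):
    s = encode(obs)
    # Observation reconstruction: the latent must retain enough
    # information to rebuild the observation it came from.
    recon_loss = (decode(s) - obs) ** 2
    # State consistency: the latent rolled forward by the dynamics
    # network should match the latent encoded from the real next
    # observation, keeping long rollouts grounded.
    s_pred = dynamics(s, action)
    s_target = encode(next_obs)
    consistency_loss = (s_pred - s_target) ** 2
    return recon_loss, consistency_loss

print(aux_losses(1.0, 0.5, 2.0))  # → (0.0, 0.0) for this consistent toy transition
```

With these weights the toy transition obs=1.0, action=0.5, next_obs=2.0 is self-consistent, so both losses vanish; during training the losses would instead supply gradients that shape the latent space, which is what makes the learned states interpretable in the first place.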