
Man's parents helped him attack his ex and pry their grandson out of her arms, officials say

Los Angeles Times

The 1-year-old boy who allegedly was taken from his mother at knifepoint in City of Industry on Sunday was found in Arizona. A 20-year-old man and his parents allegedly attacked his ex-partner outside a Target store, forcibly taking their baby from her arms.



Are Large Language Models Capable of Deep Relational Reasoning? Insights from DeepSeek-R1 and Benchmark Comparisons

So, Chi Chiu, Sun, Yueyue, Wang, Jun-Min, Yung, Siu Pang, Loh, Anthony Wai Keung, Chau, Chun Pong

arXiv.org Artificial Intelligence

How far can Large Language Models (LLMs) go in performing deep relational reasoning? In this paper, we evaluate and compare the reasoning capabilities of three cutting-edge LLMs, namely, DeepSeek-R1, DeepSeek-V3 and GPT-4o, through a suite of carefully designed benchmark tasks in family tree and general graph reasoning. Our experiments reveal that DeepSeek-R1 consistently achieves the highest F1-scores across multiple tasks and problem sizes, demonstrating strong aptitude in logical deduction and relational inference. However, all evaluated models, including DeepSeek-R1, struggle significantly as problem complexity increases, largely due to token length limitations and incomplete output structures. A detailed analysis of DeepSeek-R1's long Chain-of-Thought responses uncovers its unique planning and verification strategies, but also highlights instances of incoherent or incomplete reasoning, calling attention to the need for deeper scrutiny into LLMs' internal inference dynamics. We further discuss key directions for future work, including the role of multimodal reasoning and the systematic examination of reasoning failures. Our findings provide both empirical insights and theoretical implications for advancing LLMs' reasoning abilities, particularly in tasks that demand structured, multi-step logical inference. Our code repository will be publicly available at https://github.com/kelvinhkcs/Deep-Relational-Reasoning.
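As a concrete illustration of how such benchmarks are typically scored, the sketch below computes micro F1 over predicted (head, relation, tail) triples for a small family-tree instance. The triple format and scoring granularity are assumptions for illustration; the paper's exact evaluation protocol may differ.

```python
# Minimal sketch of triple-level F1 scoring for a family-tree reasoning task.
# The (head, relation, tail) format is an illustrative assumption, not
# necessarily the benchmark's exact output structure.

def f1_score(predicted: set, gold: set) -> float:
    """Micro F1 over predicted vs. gold relation triples."""
    if not predicted or not gold:
        return 0.0
    true_positives = len(predicted & gold)
    precision = true_positives / len(predicted)
    recall = true_positives / len(gold)
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The model recovers the stated facts but misses the inferred grandparent edge.
gold = {("alice", "parent_of", "bob"),
        ("bob", "parent_of", "carol"),
        ("alice", "grandparent_of", "carol")}
predicted = {("alice", "parent_of", "bob"),
             ("bob", "parent_of", "carol")}
print(f"F1 = {f1_score(predicted, gold):.3f}")  # F1 = 0.800
```

Incomplete outputs of the kind the paper reports (truncated answers that omit inferred relations) depress recall in exactly this way, which is consistent with scores degrading as problem size grows.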


We need to prepare for 'addictive intelligence'

MIT Technology Review

Will it be easier to retreat to a replicant of a deceased partner than to navigate the confusing and painful realities of human relationships? Indeed, the AI companionship provider Replika was born from an attempt to resurrect a deceased best friend and now provides companions to millions of users. Even the CTO of OpenAI warns that AI has the potential to be "extremely addictive." We're seeing a giant, real-world experiment unfold, uncertain what impact these AI companions will have either on us individually or on society as a whole. Will Grandma spend her final neglected days chatting with her grandson's digital double, while her real grandson is mentored by an edgy simulated elder?


Improved Logical Reasoning of Language Models via Differentiable Symbolic Programming

Zhang, Hanlin, Huang, Jiani, Li, Ziyang, Naik, Mayur, Xing, Eric

arXiv.org Artificial Intelligence

Pre-trained large language models (LMs) struggle to perform logical reasoning reliably despite advances in scale and compositionality. In this work, we tackle this challenge through the lens of symbolic programming. We propose DSR-LM, a Differentiable Symbolic Reasoning framework where pre-trained LMs govern the perception of factual knowledge, and a symbolic module performs deductive reasoning. In contrast to works that rely on hand-crafted logic rules, our differentiable symbolic reasoning framework efficiently learns weighted rules and applies semantic loss to further improve LMs. DSR-LM is scalable, interpretable, and allows easy integration of prior knowledge, thereby supporting extensive symbolic programming to robustly derive a logical conclusion. The results of our experiments suggest that DSR-LM improves the logical reasoning abilities of pre-trained language models, resulting in a significant increase in accuracy of over 20% on deductive reasoning benchmarks. Furthermore, DSR-LM outperforms a variety of competitive baselines when faced with systematic changes in sequence length.
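The core mechanism described here, where an LM supplies soft fact scores while a symbolic layer applies weighted rules under a semantic loss, can be sketched in plain PyTorch. This toy is not the authors' implementation: the fact probabilities an LM would normally produce are hard-coded, and a single hand-written grandparent rule stands in for a learned rule set.

```python
# Toy sketch of differentiable weighted-rule reasoning in the spirit of
# DSR-LM. Not the authors' code: the LM's fact probabilities are hard-coded
# and one hand-written rule stands in for a learned rule set.

import torch

entities = ["ann", "bob", "cat"]
idx = {e: i for i, e in enumerate(entities)}
n = len(entities)

# "Perceived" facts: parent(x, y) scores an LM might assign to a passage.
parent = torch.zeros(n, n)
parent[idx["ann"], idx["bob"]] = 0.9
parent[idx["bob"], idx["cat"]] = 0.8

# Learnable weight for: grandparent(X, Z) <- parent(X, Y), parent(Y, Z)
rule_logit = torch.nn.Parameter(torch.zeros(1))
optimizer = torch.optim.Adam([rule_logit], lr=0.1)

# Supervision on the derived predicate only: grandparent(ann, cat) holds.
gold = torch.zeros(n, n)
gold[idx["ann"], idx["cat"]] = 1.0

for _ in range(200):
    w = torch.sigmoid(rule_logit)
    # Soft deduction: product t-norm over the rule body, max over the
    # intermediate entity Y, scaled by the rule weight.
    body = parent.unsqueeze(2) * parent.unsqueeze(0)   # [x, y, z]
    grandparent = (w * body.amax(dim=1)).clamp(max=1.0)
    # Semantic loss: make derived facts match the gold facts.
    loss = torch.nn.functional.binary_cross_entropy(grandparent, gold)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"learned rule weight: {torch.sigmoid(rule_logit).item():.2f}")  # ~1.0
```

In this sketch the gradient from the semantic loss reaches only the rule weight; in the full framework it would also flow back into the LM that scored the facts, which is what lets the symbolic module improve the LM itself.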


AI is making a long-running scam even more effective

#artificialintelligence

You've no doubt heard of the scam where the perpetrator calls up an elderly person and pretends to be their grandchild or some other close relative. The usual routine is to act in a distressed state, pretend they're in a sticky situation, and ask for an urgent cash transfer to resolve the situation. While many grandparents will realize the voice isn't that of their grandchild and hang up, others won't notice and, only too keen to help their anxious relative, go ahead and send money to the caller's account. A Washington Post report on Sunday reveals that some scammers have taken the con to a whole new level by deploying AI technology capable of cloning voices, making it even more likely that the target will fall for the ruse. To launch this more sophisticated version of the scam, criminals require "an audio sample with just a few sentences," according to the Post.


The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning

Zhang, Hanlin, Zhang, Yi-Fan, Li, Li Erran, Xing, Eric

arXiv.org Artificial Intelligence

Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations (or "chain-of-thought" (CoT)) for in-context learning. On the other hand, these reasoning tasks are usually presumed to be more approachable for symbolic programming. To make progress towards understanding in-context learning, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where symbolic examples contain first-order logic rules and predicates from knowledge bases (KBs). Then we revisit neuro-symbolic approaches and use Language Models as Logic Programmer (LMLP) that learns from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog's backward chaining algorithm. Comprehensive experiments are included to systematically compare LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more than 25% higher accuracy than CoT on length generalization benchmarks even with fewer parameters.
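Since LMLP is described as recovering Prolog's backward chaining over a KB, a minimal backward chainer makes the target algorithm concrete. The KB, rule encoding, and uppercase-variable convention below are illustrative; LMLP drives this style of deduction through LM demonstrations rather than explicit code.

```python
# Minimal backward chainer of the kind LMLP is described as recovering.
# The KB and encoding (variables as uppercase strings) are illustrative,
# not the paper's prompt format.

facts = {("parent", "ann", "bob"), ("parent", "bob", "cat")}

# Each rule: (head, body). The head holds whenever all body atoms hold.
rules = [
    (("ancestor", "X", "Y"), [("parent", "X", "Y")]),
    (("ancestor", "X", "Z"), [("parent", "X", "Y"), ("ancestor", "Y", "Z")]),
]

def is_var(term):
    return term[:1].isupper()

def walk(term, binding):
    """Follow variable bindings to their current value."""
    while is_var(term) and term in binding:
        term = binding[term]
    return term

def unify(atom_a, atom_b, binding):
    """Return an extended binding unifying two atoms, or None."""
    if len(atom_a) != len(atom_b):
        return None
    binding = dict(binding)
    for a, b in zip(atom_a, atom_b):
        a, b = walk(a, binding), walk(b, binding)
        if a == b:
            continue
        if is_var(a):
            binding[a] = b
        elif is_var(b):
            binding[b] = a
        else:
            return None
    return binding

def prove(goals, binding, depth=0):
    """Yield every binding under which all goals are derivable."""
    if not goals:
        yield binding
        return
    goal, rest = goals[0], list(goals[1:])
    for fact in facts:                      # try ground facts first
        b = unify(goal, fact, binding)
        if b is not None:
            yield from prove(rest, b, depth + 1)
    for head, body in rules:                # then rules, Prolog-style
        fresh = lambda atom: tuple(t + f"_{depth}" if is_var(t) else t
                                   for t in atom)
        b = unify(goal, fresh(head), binding)
        if b is not None:
            yield from prove([fresh(a) for a in body] + rest, b, depth + 1)

for b in prove([("ancestor", "ann", "Who")], {}):
    print(walk("Who", b))                   # -> bob, then cat
```

Renaming rule variables with a per-step suffix keeps recursive uses of the same rule from clashing, which is the standardization-apart step a Prolog engine performs implicitly.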


Listen - Science Friday

#artificialintelligence

Listen to Science Friday live on Fridays from 2-4 p.m. ET. The most famous patient in neuroscience is the subject of a new book by the grandson of the man who changed his brain forever. Plus, a tour of the particles that could lie outside the Standard Model, and a look at automation in the workforce. City officials plan to repurpose Olympic structures as schools, dormitories, and community parks. What could sterile neutrinos, gravitons, and axions tell us about the Standard Model? A group proposes 20 science-based policy questions for the presidential candidates to address in the months ahead.


Android version of literary giant Natsume Soseki to return to alma mater to lecture

The Japan Times

Nishogakusha University is getting a new professor: an android version of literary giant Natsume Soseki that will teach classes to commemorate the 140th anniversary of the institution's founding next year. This year also marks the centennial of the death of the novelist, who studied Chinese literature at the private university in Tokyo in 1881. In cooperation with Hiroshi Ishiguro, a robotics researcher at Osaka University who is famous for creating an android of himself, the university plans to have the Soseki robot recite the author's own works, as well as some Chinese poems, from next April. "It's often said that high school students today don't read books," said Kaori Echigoya, a spokeswoman for the university, which also runs junior and senior high schools. "We value Japanese language education. By recreating Soseki through the help of professor Ishiguro, we would like to nurture interest in reading and literature among students."