handshake
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.46)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Robots (0.93)
- Information Technology > Artificial Intelligence > Natural Language (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.46)
Key Decision-Makers in Multi-Agent Debates: Who Holds the Power?
Zhang, Qian, Zheng, Yan, Liu, Jinyi, Liang, Hebin, Wang, Lanjun
Recent studies on LLM agent scaling have highlighted the potential of Multi-Agent Debate (MAD) to enhance reasoning abilities. However, the critical aspect of role allocation strategies remains underexplored. In this study, we demonstrate that allocating roles with differing viewpoints to specific positions significantly impacts MAD's performance on reasoning tasks. Specifically, we identify a novel role allocation strategy, "Truth Last", which improves MAD performance by up to 22% on reasoning tasks. Because the truth is unknown in practical applications, we propose the Multi-Agent Debate Consistency (MADC) strategy, which systematically simulates and optimizes this core mechanism. MADC incorporates path consistency to assess agreement among independent roles, treating the role with the highest consistency score as the truth. We validated MADC across nine LLMs, including the DeepSeek-R1 Distilled Models, on challenging reasoning tasks. MADC consistently demonstrated advanced performance, effectively overcoming MAD's performance bottlenecks and providing a crucial pathway for further improvements in LLM agent scaling.
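The abstract's "path consistency" idea, scoring each independently sampled role's answer by how often the other roles agree with it, then treating the top-scoring answer as the pseudo-truth, can be sketched as simple majority agreement. This is a minimal illustration of the consistency-scoring concept only; the function names are ours, not the paper's.

```python
from collections import Counter

def consistency_scores(answers):
    """Score each sampled answer by the fraction of independent roles
    that produced the same answer."""
    counts = Counter(answers)
    total = len(answers)
    return {ans: counts[ans] / total for ans in counts}

def pick_pseudo_truth(answers):
    """Treat the answer with the highest consistency score as the 'truth'
    (e.g. to place last in the debate order, per 'Truth Last')."""
    scores = consistency_scores(answers)
    return max(scores, key=scores.get)

# Five independent roles answer the same reasoning question.
answers = ["42", "42", "41", "42", "17"]
best = pick_pseudo_truth(answers)
print(best)                               # 42
print(consistency_scores(answers)[best])  # 0.6
```

In this sketch the pseudo-truth is simply the modal answer; the paper's actual scoring over reasoning paths is more involved.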
- Europe > Austria > Vienna (0.14)
- Asia > Thailand > Bangkok > Bangkok (0.04)
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.49)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.46)
Diverse Conventions for Human-AI Collaboration
Players have to manage the ingredients, use the stove, and deliver meals. As the team works together, they decide how tasks should be allocated among themselves so resources are used effectively. For example, player 1 could notice that player 2 tends to stay near the stove, so they instead spend more time preparing ingredients and delivering food, allowing player 2 to continue working at the stove. Through these interactions, the team creates a "convention" in the
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.46)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Robots (0.93)
- Information Technology > Artificial Intelligence > Natural Language (0.68)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents > Agent Societies (0.46)
SBSC: Step-By-Step Coding for Improving Mathematical Olympiad Performance
Singh, Kunal, Biswas, Ankan, Bhowmick, Sayandeep, Moturi, Pradeep, Gollapalli, Siva Kishore
We propose Step-by-Step Coding (SBSC): a multi-turn math reasoning framework that enables Large Language Models (LLMs) to generate a sequence of programs for solving Olympiad-level math problems. At each step/turn, by leveraging the code execution outputs and programs of previous steps, the model generates the next sub-task and the corresponding program to solve it. In this way, SBSC sequentially navigates to the final answer. SBSC allows a more granular, flexible, and precise approach to problem-solving than existing methods. Extensive experiments highlight the effectiveness of SBSC in tackling competition- and Olympiad-level math problems. For Claude-3.5-Sonnet, we observe that SBSC (greedy decoding) surpasses existing state-of-the-art (SOTA) program-generation-based reasoning strategies by an absolute 10.7% on AMC12, 8% on AIME, and 12.6% on MathOdyssey. Given that SBSC is multi-turn in nature, we also benchmark SBSC's greedy decoding against self-consistency decoding results of existing SOTA math reasoning strategies and observe performance gains of an absolute 6.2% on AMC, 6.7% on AIME, and 7.4% on MathOdyssey.
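The multi-turn loop the abstract describes, where each turn yields a sub-task plus a program, the program is executed, and its output is fed back into the next turn's context, can be sketched as follows. The model is stubbed with a scripted function here, since the real framework calls an LLM; all names are illustrative, not the paper's code.

```python
import io
from contextlib import redirect_stdout

def run_program(program: str) -> str:
    """Execute one step's program and capture its printed output."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(program, {})
    return buf.getvalue().strip()

def sbsc_solve(problem: str, next_step, max_turns: int = 10) -> str:
    """Multi-turn loop: each turn, the model proposes the next sub-task and
    program, conditioned on all previous programs and their outputs."""
    history = []
    for _ in range(max_turns):
        subtask, program = next_step(problem, history)
        output = run_program(program)
        history.append((subtask, program, output))
        if subtask == "FINAL":
            return output
    return history[-1][2]

# Scripted stand-in for the LLM, "solving" the sum of the first 100 squares.
def scripted_model(problem, history):
    if not history:
        return ("compute the sum", "print(sum(k*k for k in range(1, 101)))")
    return ("FINAL", f"print({history[-1][2]})")

print(sbsc_solve("sum of squares 1..100", scripted_model))  # 338350
```

The key design point the abstract emphasizes is that execution feedback from earlier programs (here, `history`) conditions each subsequent turn, rather than generating one monolithic program.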
- Workflow (1.00)
- Research Report > New Finding (0.92)
Touch in Human Social Robot Interaction: Systematic Literature Review with PRISMA Method
Tsirka, Christiana, Velentza, Anna-Maria, Fachantidis, Nikolaos
In the past two decades, there has been a continuous rise in the deployment of robots fulfilling social roles across various industries, such as guides, service providers, and educators. To establish robots as integral allies in daily life, it is essential for them to deliver positive and trustworthy experiences, achieved through seamless and satisfying interactions across diverse modalities and communication channels. In human-robot interaction, touch plays a pivotal role in facilitating meaningful connections and communication. To delve into the significance of haptic technologies and their impact on interactions between humans and social robots, an exploration of the existing literature is essential, since research on touch is the most underrepresented among the communication channels (facial expressions, movements, vocals, etc.). A systematic literature review was carried out using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method, identifying 42 articles related to touch, haptic technologies, and interaction between humans and social robots over twenty years (2001-2023). The results show the main differences, pros, and cons of the materials and technologies that have been primarily used so far; the qualitative and quantitative research that links HRI touch studies with human emotion; and the types of touch and the repeatability of those methods. The study identifies research gaps and outlines future directions, and serves as a guide for anyone interested in conducting HRI touch research or building a haptic system for a social robot.
- North America > United States > New York > New York County > New York City (0.05)
- Europe > North Macedonia (0.04)
- Europe > Switzerland > Basel-City > Basel (0.04)
- (2 more...)
- Overview (1.00)
- Research Report > New Finding (0.48)
Learning Human-Robot Handshaking Preferences for Quadruped Robots
Chappuis, Alessandra, Bellegarda, Guillaume, Ijspeert, Auke
Quadruped robots are showing impressive abilities to navigate the real world. If they are to become more integrated into society, social trust in interactions with humans will become increasingly important. Additionally, robots will need to be adaptable to different humans based on individual preferences. In this work, we study the social interaction task of learning optimal handshakes for quadruped robots based on user preferences. While maintaining balance on three legs, we parameterize handshakes with a Central Pattern Generator consisting of an amplitude, frequency, stiffness, and duration. Through 10 binary choices between handshakes, we learn a belief model to fit individual preferences for 25 different subjects. Our results show that this is an effective strategy, with 76% of users feeling happy with their identified optimal handshake parameters, and 20% feeling neutral. Moreover, compared with random and test handshakes, the optimized handshakes have significantly decreased errors in amplitude and frequency, lower Dynamic Time Warping scores, and improved energy efficiency, all of which indicate robot synchronization to the user's preferences. Video results can be found at https://youtu.be/elvPv8mq1KM .
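The abstract parameterizes handshakes by amplitude, frequency, stiffness, and duration via a Central Pattern Generator. A minimal sketch of how those four parameters could shape a wrist trajectory and an impedance-style command is shown below; the paper's actual CPG formulation is more involved, and these functions are our simplified stand-ins, not the authors' code.

```python
import math

def handshake_trajectory(amplitude, frequency, duration, dt=0.01):
    """Vertical wrist offset (meters) over time for a sinusoidal handshake
    oscillation: a simplified stand-in for a CPG output."""
    n = int(duration / dt)
    return [amplitude * math.sin(2 * math.pi * frequency * i * dt)
            for i in range(n)]

def joint_torque(stiffness, target, actual):
    """Impedance-style command: stiffness maps position error to torque,
    so a stiffer handshake resists deviation more strongly."""
    return stiffness * (target - actual)

traj = handshake_trajectory(amplitude=0.05, frequency=1.5, duration=2.0)
print(len(traj))          # 200 samples at dt = 0.01 s
print(max(traj) <= 0.05)  # True: the amplitude bound holds
```

Under this parameterization, the 10 binary user choices described in the abstract would steer a belief over the (amplitude, frequency, stiffness, duration) tuple toward the individual's preferred handshake.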
AI Learns to Predict Human Behavior from Videos
New York, NY--June 28, 2021--Predicting what someone is about to do next based on their body language comes naturally to humans but not so for computers. When we meet another person, they might greet us with a hello, handshake, or even a fist bump. We may not know which gesture will be used, but we can read the situation and respond appropriately. In a new study, Columbia Engineering researchers unveil a computer vision technique for giving machines a more intuitive sense for what will happen next by leveraging higher-level associations between people, animals, and objects. "Our algorithm is a step toward machines being able to make better predictions about human behavior, and thus better coordinate their actions with ours," said Carl Vondrick, assistant professor of computer science at Columbia, who directed the study, which was presented at the International Conference on Computer Vision and Pattern Recognition on June 24, 2021. "Our results open a number of possibilities for human-robot collaboration, autonomous vehicles, and assistive technology."
The End of Handshakes--for Humans and for Robots
Elenoide the android was made to shake your hand. She looks like a Madame Tussaud's rendition of a prim fifth-grade teacher. She's dressed in a salmon cardigan with scalloped edges, a knee-length striped skirt, and a wig made of ashy blonde human hair. Her hands are warmed by heating pads hidden beneath the palms. During experiments, she wears white butler gloves.
Zoom Can't Give You the Comfort of a Hug, but Other Technologies Can
Armed with a bottle of Lysol and rolls of paper towels, Anya Fetcher packed up her car with enough food to get her through a road trip, and clothes to last several weeks, and headed to a friend's home. The first thing she did when she arrived was ask for a hug. "He started to pull away and I was like, 'Wait, can we just stay here for another second? It's been four weeks since [I've had] any kind of human contact,' " she told me. Thanks to the pandemic, a month of no physical interaction with another human--no hugs, no handshakes, no high-fives or fist bumps--had taken a toll on her mental health.
- North America > United States > Maine (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- (2 more...)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.49)
- Transportation > Ground > Road (0.35)