

Bias in Motion: Theoretical Insights into the Dynamics of Bias in SGD Training

Neural Information Processing Systems

Machine learning systems often acquire biases by leveraging undesired features in the data, affecting accuracy unevenly across different sub-populations of the data. However, our current understanding of bias formation mostly focuses on the initial and final stages of learning, leaving a gap in knowledge regarding the transient dynamics. To address this gap, this paper explores the evolution of bias in a teacher-student setup that models different data sub-populations with a Gaussian-mixture model. We provide an analytical description of the stochastic gradient descent dynamics of a linear classifier in this setup, which we prove to be exact in high dimensions. Notably, our analysis identifies different properties of the sub-populations that drive bias at different timescales, revealing a shifting preference of our classifier during training. By applying our general solution to fairness and robustness, we delineate how and when heterogeneous data and spurious features can generate and amplify bias.
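The setup the abstract describes, a linear classifier trained online by SGD on a mixture of sub-populations, is easy to simulate numerically. The sketch below is a toy illustration in that spirit, not the paper's analytical solution: the two sub-populations, their noise levels, the squared loss, and all hyperparameters are illustrative assumptions. It shows the kind of accuracy gap between sub-populations that such training can produce.

```python
import numpy as np

rng = np.random.default_rng(0)

d = 100                                # input dimension
mu = rng.standard_normal(d)
mu /= np.linalg.norm(mu)               # shared unit class-mean direction

# Two sub-populations that differ only in their noise level
sigmas = {"low_noise": 0.4, "high_noise": 1.2}

def sample(n, sigma):
    """Balanced labels y = +/-1; inputs x = y*mu + isotropic Gaussian noise."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu[None, :] + sigma * rng.standard_normal((n, d))
    return x, y

w = np.zeros(d)
lr = 0.005
for step in range(2000):
    # Online SGD on the squared loss, alternating between sub-populations
    name = "low_noise" if step % 2 == 0 else "high_noise"
    x, y = sample(1, sigmas[name])
    resid = x @ w - y                       # shape (1,)
    w -= lr * (resid[:, None] * x)[0]       # single-sample gradient step

# Per-sub-population test accuracy after training
accs = {}
for name, s in sigmas.items():
    xt, yt = sample(5000, s)
    accs[name] = float(np.mean(np.sign(xt @ w) == yt))
    print(name, round(accs[name], 3))
```

Even though both sub-populations share the same class means and are sampled equally often, the learned classifier ends up far more accurate on the low-noise group, a simple instance of bias emerging from data heterogeneity alone.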


Learning Physics-Based Full-Body Human Reaching and Grasping from Brief Walking References

Li, Yitang, Lin, Mingxian, Lin, Zhuo, Deng, Yipeng, Cao, Yue, Yi, Li

arXiv.org Artificial Intelligence

Existing motion generation methods based on mocap data are often limited by data quality and coverage. In this work, we propose a framework that generates diverse, physically feasible full-body human reaching and grasping motions using only brief walking mocap data. The framework rests on two observations: walking data captures valuable movement patterns that transfer across tasks, and advanced kinematic methods can generate diverse grasping poses, which can then be interpolated into motions that serve as task-specific guidance. Our approach incorporates an active data generation strategy to maximize the utility of the generated motions, along with a local feature alignment mechanism that transfers natural movement patterns from the walking data to improve both the success rate and naturalness of the synthesized motions. By combining the fidelity and stability of natural walking with the flexibility and generalizability of task-specific generated data, our method demonstrates strong performance and robust adaptability in diverse scenes and with unseen objects.


Improving Language Models for Emotion Analysis: Insights from Cognitive Science

Bonard, Constant, Cortal, Gustave

arXiv.org Artificial Intelligence

We propose leveraging cognitive science research on emotions and communication to improve language models for emotion analysis. First, we present the main emotion theories in psychology and cognitive science. Then, we introduce the main methods of emotion annotation in natural language processing and their connections to psychological theories. We also present the two main types of analyses of emotional communication in cognitive pragmatics. Finally, based on the cognitive science research presented, we propose directions for improving language models for emotion analysis. We suggest that these research efforts pave the way for constructing new annotation schemes and a possible benchmark for emotional understanding, considering different facets of human emotion and communication.


Robotics: Nicolas Mansard, coordinator of the MEMMO project, winner of the Stars of Europe - Actu IA

#artificialintelligence

Created in 2013, the Stars of Europe awards recognize the coordinators of European collaborative research projects. On December 6, Sylvie Retailleau, Minister of Higher Education and Research, presented trophies to twelve winners at a ceremony at the Quai Branly Museum. Among them was Nicolas Mansard, CNRS researcher in robotics at LAAS-CNRS and holder of the ANITI chair "Artificial and natural movement", recognized for coordinating the MEMMO (Memory of Motion) project. Funded by the Horizon 2020 program over a four-year period, MEMMO is a collaborative project initiated in 2018 that brought together a consortium of European partners with a budget of €4 million: LAAS-CNRS (France), IDIAP (Switzerland), University of Edinburgh (UK), Max Planck Institute (Germany), Oxford University (UK), Trento University (Italy), PAL-Robotics (Spain), Wandercraft (France), Airbus (France), Costain (UK) and APAJH (France). "I would like to thank the people who helped me coordinate this project. It is a project put together by a consortium of young researchers. It was a great pride for me to be chosen to coordinate this project. We wanted to prove that it was possible to generate complex motions for arbitrary robots with arms and legs interacting with a dynamic environment in real time."


Motion Inspired Unsupervised Perception and Prediction in Autonomous Driving

Najibi, Mahyar, Ji, Jingwei, Zhou, Yin, Qi, Charles R., Yan, Xinchen, Ettinger, Scott, Anguelov, Dragomir

arXiv.org Artificial Intelligence

Learning-based perception and prediction modules in modern autonomous driving systems typically rely on expensive human annotation and are designed to perceive only a handful of predefined object categories. This closed-set paradigm is insufficient for the safety-critical autonomous driving task, where the autonomous vehicle needs to process arbitrarily many types of traffic participants and their motion behaviors in a highly dynamic world. To address this difficulty, this paper pioneers a novel and challenging direction, i.e., training perception and prediction models to understand open-set moving objects with no human supervision. Our proposed framework uses self-learned flow to trigger an automated meta-labeling pipeline that achieves automatic supervision. 3D detection experiments on the Waymo Open Dataset show that our method significantly outperforms classical unsupervised approaches and is even competitive with its counterpart that uses supervised scene flow. We further show that our approach generates highly promising results in open-set 3D detection and trajectory prediction, confirming its potential for closing the safety gap of fully supervised systems.
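The core idea of motion-triggered auto-labeling can be sketched in a few lines. The toy below is an illustrative assumption, not the paper's pipeline: it uses a synthetic 2-D point scene with hand-made flow vectors, a made-up speed threshold, and simple single-linkage clustering to turn "points that move together" into box proposals, which is the flavor of supervision the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene: static background points plus two moving "objects".
background = rng.uniform(-20, 20, size=(200, 2))
obj_a = rng.normal([5.0, 5.0], 0.5, size=(30, 2))
obj_b = rng.normal([-8.0, 2.0], 0.5, size=(25, 2))
points = np.vstack([background, obj_a, obj_b])

# In the real setting, flow would be self-learned from consecutive point
# clouds; here we fabricate it: background static, rigid object motion.
flow = np.zeros_like(points)
flow[200:230] = [1.0, 0.0]
flow[230:] = [0.0, -1.5]
flow += rng.normal(0, 0.02, size=flow.shape)   # estimation noise

def auto_label(points, flow, speed_thresh=0.5, link_radius=2.0):
    """Group moving points into object proposals: threshold on flow
    magnitude, then single-linkage clustering by spatial proximity."""
    idx = np.flatnonzero(np.linalg.norm(flow, axis=1) > speed_thresh)
    parent = {int(i): int(i) for i in idx}      # union-find over movers

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]       # path halving
            i = parent[i]
        return i

    for a in idx:
        for b in idx:
            if a < b and np.linalg.norm(points[a] - points[b]) < link_radius:
                parent[find(int(a))] = find(int(b))

    clusters = {}
    for i in idx:
        clusters.setdefault(find(int(i)), []).append(int(i))
    # Each proposal: axis-aligned box around one cluster of moving points
    return [(points[m].min(axis=0), points[m].max(axis=0))
            for m in clusters.values()]

boxes = auto_label(points, flow)
print(len(boxes), "proposals")
```

Only the two coherently moving groups survive the speed threshold, so the pipeline emits two box proposals without any category labels, which is what makes this style of supervision open-set by construction.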



Motion wants to automate task planning using AI

#artificialintelligence

Motion, a startup automating task planning with AI, today announced that it raised $13 million in a Series A round led by SignalFire, with participation from 468 Capital and notable angels, including OpenAI co-founder Sam Altman. Motion CEO Harry Qi says the new cash will be put toward product development and engineering, as well as overall hiring. Qi, who co-launched Motion in 2019 alongside Omid Rooholfada and Ethan Yu, estimates that knowledge workers spend 58% of their day, on average, coordinating work instead of actually accomplishing it. He believes that if this constant coordination can be minimized, four-hour workdays would become just as productive as standard eight-hour ones. "Omid and I were high school friends, and Ethan and I were college friends," Qi told TechCrunch via email.


[100%OFF] ROS For Beginners: Basics, Motion, And OpenCV

#artificialintelligence

This is the best-selling ROS course on Udemy. My course has been upgraded to the latest version of ROS, ROS Noetic, with several new videos explaining the fundamental concepts of ROS through hands-on illustrations. It will also give you the skills you need to later learn ROS2 and the navigation stack, as presented in my two other courses. Why am I teaching this course? New ROS users typically encounter many difficulties when they start programming with ROS.


A Simple Introduction to Complex Stochastic Processes

@machinelearnbot

Stochastic processes have many applications, including in finance and physics. They are an appealing model for representing many phenomena. Unfortunately, the theory behind them is very difficult, making the subject accessible only to a few 'elite' data scientists and unpopular in business contexts. One of the simplest examples is a random walk, which is easy to understand with no mathematical background. Time-continuous stochastic processes, however, are always defined and studied using advanced and abstract mathematical tools such as measure theory, martingales, and filtrations.
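The random walk mentioned above really does need no advanced machinery: a minimal simulation (walk counts and step counts are arbitrary choices here) is enough to see the one property that survives in the Brownian-motion limit, namely that the variance of the position after t steps grows linearly with t.

```python
import numpy as np

rng = np.random.default_rng(42)

# 10,000 independent simple random walks of 1,000 +/-1 steps each
n_walks, n_steps = 10_000, 1_000
steps = rng.choice([-1, 1], size=(n_walks, n_steps))
walks = steps.cumsum(axis=1)

# For a simple random walk, E[S_t] = 0 and Var[S_t] = t, so the typical
# displacement grows like sqrt(t) -- the same scaling that defines
# Brownian motion in the continuous-time limit.
for t in (100, 400, 900):
    print(t, round(float(walks[:, t - 1].var()), 1))
```

Across the ensemble, the empirical variance at times 100, 400, and 900 tracks t itself, which is the elementary route into the measure-theoretic constructions the abstract alludes to.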

