
Collaborating Authors

Marchand


Toward Scalable Visual Servoing Using Deep Reinforcement Learning and Optimal Control

Asayesh, Salar, Darani, Hossein Sheikhi, chen, Mo, Mehrandezh, Mehran, Gupta, Kamal

arXiv.org Artificial Intelligence

Classical pixel-based Visual Servoing (VS) approaches offer high accuracy but suffer from a limited convergence area due to the nonlinearity of the underlying optimization. Modern deep learning-based VS methods overcome traditional vision issues but lack scalability, as they must be trained on a limited set of scenes. This paper proposes a hybrid VS strategy that combines Deep Reinforcement Learning (DRL) and optimal control to enlarge the convergence area while remaining scalable. The DRL component separates representation learning from policy learning, which improves scalability, generalizability, and learning efficiency, and eases domain adaptation. The optimal control component ensures high end-point accuracy. Our method achieves high convergence rates and minimal end-positioning errors on a 7-DOF manipulator. Importantly, it scales across more than 1000 distinct scenes. Furthermore, we demonstrate generalization to previously unseen datasets. Lastly, we illustrate the real-world applicability of our approach, highlighting its adaptability through single-shot domain transfer learning in environments with noise and occlusions. Real-robot experiments can be found at \url{https://sites.google.com/view/vsls}.
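The classical pixel-based VS component the paper builds on typically drives image-feature error to zero with the control law v = -λ L⁺ e. A minimal sketch of that law (the function name, gain value, and test features are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Classical image-based visual servoing law: v = -lam * L^+ (s - s*).

    s      -- current image-feature vector
    s_star -- desired image-feature vector
    L      -- interaction (image Jacobian) matrix relating camera
              velocity to feature velocity
    lam    -- positive control gain
    """
    e = s - s_star                       # image-space error
    return -lam * np.linalg.pinv(L) @ e  # camera velocity command
```

With the identity interaction matrix the command is simply a proportional pull toward the desired features, which is the local behavior that limits the convergence area of the purely classical approach.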


Contributions à l'asservissement visuel et à l'imagerie en médecine (Contributions to Visual Servoing and Medical Imaging)

Tamadazte, Brahim

arXiv.org Artificial Intelligence

This manuscript gives an overview of my research work carried out within the FEMTO-ST institute in Besançon, more particularly in the Automatic and Micro-Mechatronic Systems (AS2M) department. It is above all the result of my (co-)supervision of interns, PhD students and postdocs. I would like to pay tribute to them, for their major contribution to scientific research, here and elsewhere.


Your voiceprint could be your new password as companies look to increase security for remote workers

#artificialintelligence

As working from home moves from a temporary solution to the new normal, companies need new ways to secure data and protect internal networks. Banks are the most likely to use voiceprints to authenticate users, but more companies are considering this approach. Nuance Communications uses a voiceprint algorithm powered by a deep neural network to analyze 1,000 parameters of an individual's voice, including tone, pitch, pacing and fluctuations in the sound. The engine determines which parameters are most relevant for each individual and weights the appropriate elements accordingly. Simon Marchand, chief fraud prevention officer at Nuance, worked in fraud prevention for 10 years in the financial and telecom industries.


Marchand: No need to send humans on pricey space trips

Boston Herald

It looks like mankind won't be going back to the moon … on schedule, at least. According to a recent report by the National Aeronautics and Space Administration's inspector general, astronaut suits have been delayed by two years due to an array of technical, funding and COVID-related challenges. But the report's unavoidable conclusion, that "a lunar landing in late 2024 as NASA currently plans is not feasible," is hardly surprising given NASA's string of failures in trying to take humanity back to the lunar surface. The failures also speak to a larger strategic mistake that places inordinate importance on planting flags on alien worlds despite the practical and scientific disadvantages of that approach. Humanity can venture to infinity and beyond while avoiding the black hole of wasteful spending.


Learning Stochastic Perceptrons Under k-Blocking Distributions

Marchand, Mario, Hadjifaradji, Saeed

Neural Information Processing Systems

We present a statistical method that PAC learns the class of stochastic perceptrons with arbitrary monotonic activation function and weights w_i ∈ {-1, 0, +1} when the probability distribution that generates the input examples is a member of a family that we call k-blocking distributions. Such distributions represent an important step beyond the case where each input variable is statistically independent, since the 2k-blocking family contains all the Markov distributions of order k. By stochastic perceptron we mean a perceptron which, upon presentation of input vector x, outputs 1 with probability f(∑_i w_i x_i − θ).
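The definition above can be sketched directly in code; a minimal illustration, assuming a sigmoid activation f (the abstract only requires f to be monotonic, so the specific choice here is an assumption):

```python
import math
import random

def stochastic_perceptron(x, w, theta,
                          f=lambda z: 1.0 / (1.0 + math.exp(-z))):
    """Output 1 with probability f(sum_i w_i * x_i - theta), else 0.

    x     -- input vector
    w     -- weights, each in {-1, 0, +1} per the paper's setting
    theta -- threshold
    f     -- monotonic activation mapping reals into [0, 1]
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) - theta
    return 1 if random.random() < f(z) else 0
```

Unlike a deterministic perceptron, repeated presentations of the same x can yield different outputs; the learning problem is to recover w and theta from such noisy labeled examples.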