efference copy



Self-Supervised Learning Through Efference Copies

Neural Information Processing Systems

Self-supervised learning (SSL) methods aim to exploit the abundance of unlabelled data for machine learning (ML); however, the underlying principles are often method-specific. An SSL framework derived from biological first principles of embodied learning could unify the various SSL methods, help elucidate learning in the brain, and possibly improve ML. SSL commonly transforms each training datapoint into a pair of views and uses the knowledge of this pairing as a positive (i.e. non-contrastive) or negative (i.e. contrastive) self-supervision sign. Here, we show that this type of self-supervision is an incomplete implementation of a concept from neuroscience, the Efference Copy (EC). Specifically, the brain also transforms the environment through efference, i.e. motor commands; however, it sends itself an EC of the full commands, i.e. more than a mere SSL sign. In addition, its action representations are likely egocentric.
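The view-pairing the abstract describes can be illustrated with a minimal sketch: each datapoint is transformed into two views, and the only self-supervision signal is the binary knowledge that the pair belongs together (positive) while other samples in the batch do not (negative). The noise augmentation and InfoNCE-style contrastive loss below are generic illustrative choices, not the paper's proposed EC-based method.

```python
# Toy contrastive SSL: positive pairs come from two views of the same
# datapoint; all other batch entries serve as negatives.
import numpy as np

rng = np.random.default_rng(0)

def augment(x):
    # Stand-in "view" transform: additive Gaussian noise (illustrative only).
    return x + 0.1 * rng.normal(size=x.shape)

def info_nce(z1, z2, temp=0.5):
    # Each row of z1 should match the same-index row of z2 (the diagonal).
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temp                # pairwise cosine similarities
    idx = np.arange(len(z1))                 # positives sit on the diagonal
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -logp[idx, idx].mean()            # cross-entropy toward positives

x = rng.normal(size=(8, 16))                 # a toy batch of "datapoints"
loss = info_nce(augment(x), augment(x))      # two views of the same batch
print(float(loss))
```

The loss is driven down only by pulling paired views together relative to the rest of the batch, which is exactly the "mere SSL sign" the paper contrasts with a full efference copy.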


TTCDist: Fast Distance Estimation From an Active Monocular Camera Using Time-to-Contact

Burner, Levi, Sanket, Nitin J., Fermüller, Cornelia, Aloimonos, Yiannis

arXiv.org Artificial Intelligence

Distance estimation from vision is fundamental for a myriad of robotic applications such as navigation, manipulation, and planning. Inspired by the mammalian visual system, which gazes at specific objects, we develop two novel constraints relating time-to-contact, acceleration, and distance that we call the $\tau$-constraint and $\Phi$-constraint. They allow an active (moving) camera to estimate depth efficiently and accurately while using only a small portion of the image. The constraints are applicable to range sensing, sensor fusion, and visual servoing. We successfully validate the proposed constraints with two experiments. The first applies both constraints in a trajectory estimation task with a monocular camera and an Inertial Measurement Unit (IMU). Our methods achieve 30-70% less average trajectory error while running 25$\times$ and 6.2$\times$ faster than the popular Visual-Inertial Odometry methods VINS-Mono and ROVIO respectively. The second experiment demonstrates that when the constraints are used for feedback with efference copies, the resulting closed-loop system's eigenvalues are invariant to scaling of the applied control signal. We believe these results indicate the $\tau$- and $\Phi$-constraints' potential as the basis of robust and efficient algorithms for a multitude of robotic applications.
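The core idea of relating time-to-contact, acceleration, and distance can be sketched in a few lines. Assuming constant acceleration $a$ over a short window and defining the signed TTC as $\tau = Z/\dot{Z}$, substituting $Z(t) = Z_0 + v_0 t + \tfrac{1}{2}at^2$ and $\dot{Z}(t) = v_0 + at$ into $\tau\dot{Z} = Z$ yields $Z_0 + v_0(t - \tau) = a(\tau t - \tfrac{1}{2}t^2)$, which is linear in the unknowns $(Z_0, v_0)$. This is a simplified illustration of the flavor of the $\tau$-constraint, not the paper's implementation; names and the constant-acceleration assumption are ours.

```python
# Recover distance and velocity from TTC samples plus a known acceleration,
# under a constant-acceleration assumption (illustrative sketch).
import numpy as np

def estimate_distance(times, taus, accel):
    """Least-squares recovery of (Z0, v0) from signed TTC tau = Z / Zdot.

    Rearranging tau * Zdot = Z with Z(t) = Z0 + v0*t + 0.5*a*t^2 and
    Zdot(t) = v0 + a*t gives, for each sample:
        Z0 + v0*(t - tau) = a*(tau*t - 0.5*t^2)
    """
    A = np.column_stack([np.ones_like(times), times - taus])
    b = accel * (taus * times - 0.5 * times**2)
    (z0, v0), *_ = np.linalg.lstsq(A, b, rcond=None)
    return z0, v0

# Synthetic approach: start 10 m away, closing at 2 m/s, a = -0.5 m/s^2.
z0_true, v0_true, a = 10.0, -2.0, -0.5
t = np.linspace(0.0, 1.0, 20)
Z = z0_true + v0_true * t + 0.5 * a * t**2
Zdot = v0_true + a * t
tau = Z / Zdot                      # signed TTC; negative while approaching

z0_est, v0_est = estimate_distance(t, tau, a)
print(round(z0_est, 3), round(v0_est, 3))   # → 10.0 -2.0
```

Note that $\tau$ itself is measurable from image data alone (e.g. from optical-flow divergence), which is what makes metric distance recoverable from a monocular camera once acceleration is supplied by an IMU.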


Why is it almost impossible to tickle yourself?

Daily Mail - Science & tech

Some of us are more ticklish than others, but nearly everyone is unable to tickle themselves. The answer is tied to how we see and how we perceive movement. To get to the bottom of why we can't tickle ourselves, let's first examine another phenomenon. Close one eye, and then carefully push against the side of your other (open) eye, moving the eyeball from side to side in its socket. It should appear as if the world is moving, even though you know it isn't.


Who's Talking? — Efference Copy and a Robot's Sense of Agency

Brody, Justin (Goucher College) | Perlis, Don (University of Maryland, College Park) | Shamwell, Jared (University of Maryland, College Park)

AAAI Conferences

How can a robot tell when it — rather than another agent — is making an utterance or performing an action? This is rather tricky and also very important for human-robot (or even robot-robot) interaction. Here we outline our beginning attempt to deal with this issue.