Collaborating Authors: ctrl-lab




emg2pose: A Large and Diverse Benchmark for Surface Electromyographic Hand Pose Estimation

Salter, Sasha, Warren, Richard, Schlager, Collin, Spurr, Adrian, Han, Shangchen, Bhasin, Rohin, Cai, Yujun, Walkington, Peter, Bolarinwa, Anuoluwapo, Wang, Robert, Danielson, Nathan, Merel, Josh, Pnevmatikakis, Eftychios, Marshall, Jesse

arXiv.org Artificial Intelligence

Hands are the primary means through which humans interact with the world. Reliable and always-available hand pose inference could yield new and intuitive control schemes for human-computer interaction, particularly in virtual and augmented reality. Computer vision is effective but requires one or more cameras and can struggle with occlusions, limited field of view, and poor lighting. Wearable wrist-based surface electromyography (sEMG) presents a promising alternative as an always-available modality sensing the muscle activities that drive hand motion. However, sEMG signals are strongly dependent on user anatomy and sensor placement, and existing sEMG models have required hundreds of users and device placements to generalize effectively. To facilitate progress on sEMG pose inference, we introduce the emg2pose benchmark, the largest publicly available dataset of high-quality hand pose labels and wrist sEMG recordings. emg2pose contains 2 kHz, 16-channel sEMG and pose labels from a 26-camera motion capture rig for 193 users, 370 hours, and 29 stages with diverse gestures -- a scale comparable to vision-based hand pose datasets. We provide competitive baselines and challenging tasks evaluating real-world generalization scenarios: held-out users, sensor placements, and stages. emg2pose gives the machine learning community a platform for exploring complex generalization problems and holds the potential to significantly advance the development of sEMG-based human-computer interactions.


How the 'big 5' bolstered their AI through acquisitions in 2019

#artificialintelligence

The AI talent grab is real. This year alone, Pinterest CTO Vanja Josifovski jumped ship to Airbnb, while Pinterest hired Walmart CTO Jeremy King to head up its engineering team. Meanwhile, all the big tech companies, including Google and Apple, have for some time been vacuuming up AI talent through acquisitions -- a recent CB Insights report counted 635 AI acquisitions since 2010, topped by Apple with 20. Elsewhere, Microsoft turned to online education platforms to help train a new generation of AI students. But while the AI talent pool may be growing, a significant shortage remains.


Human-like machines and machine-like humans are the future of A.I.

#artificialintelligence

These days it seems that nearly every product and startup boasts some kind of A.I. capability, but when it comes to advancing the field beyond simplistic machine learning, technologists at MIT Technology Review's Future Compute conference say A.I. will need to become more human. When discussing A.I. on the conference's first day, December 2nd, speakers focused on two distinct paths for the technology: more human-like A.I. as well as more machine-like humans. This dual approach was presented as a potential future for human-machine symbiosis. But what exactly does that all mean, and is it even a good thing? Catherine Schuman, a research scientist at Oak Ridge National Laboratory, began the conversation by presenting her work on neuromorphic computing.


Ctrl-labs CEO: We'll have neural interfaces in less than 5 years

#artificialintelligence

It can be a bit difficult to wrap your brain around what exactly neural interface startup Ctrl-labs is doing with technology. That's ironic, given that Ctrl-labs wants to let your brain directly use technology by translating mental intent into action. We caught up with Ctrl-labs CEO Thomas Reardon at Web Summit 2019 earlier this month to understand exactly how the brain-machine interface works. Founded in 2015, Ctrl-labs is a New York-based startup developing a wristband that translates musculoneural signals into machine-interpretable commands. It won't remain independent for long, though: Facebook agreed to acquire Ctrl-labs in September 2019. The acquisition hasn't closed yet, so Reardon has not spoken to anyone at the social media giant since signing the agreement. He was, however, eager to tell us more about the neural interface technology so we could glean why Facebook (and the tech industry at large) is interested. In short, Ctrl-labs wants us to interact with technology not via a mouse, a keyboard, a touchscreen, our voice, or any other input we've adopted. Reardon and his team expect that in a few years we will be able to use individual neurons -- not thoughts -- to directly control technology. Reardon has said many times that his company is tackling the "mother of all machine learning problems."


How Facebook's brain-machine interface measures up

#artificialintelligence

Somewhat unceremoniously, Facebook this week provided an update on its brain-computer interface project, preliminary plans for which it unveiled at its F8 developer conference in 2017. In a paper published in the journal Nature Communications, a team of scientists at the University of California, San Francisco backed by Facebook Reality Labs -- Facebook's Pittsburgh-based division devoted to augmented reality and virtual reality R&D -- described a prototype system capable of reading and decoding study subjects' brain activity while they speak. It's impressive no matter how you slice it: the researchers managed to make out full, spoken words and phrases in real time. Study participants (who were preparing for epilepsy surgery) had a patch of electrodes placed on the surface of their brains, and the researchers used electrocorticography (ECoG) -- the direct recording of electrical potentials associated with activity from the cerebral cortex -- to derive rich insights. A set of machine learning algorithms equipped with phonological speech models learned to decode specific speech sounds from the data and to distinguish between questions and responses.


CTRL-Labs' EMG wristbands may spell the end for keyboards and mice

Engadget

From the earliest days of punch cards, interacting with computers has always been a pain. Whether it's a keyboard and mouse, joystick or controller, getting the thoughts out of our heads and into the machine requires numerous, unintuitive processes. But until we start implanting USB ports into our brains and downloading our thoughts directly, we'll have to make do with the neural signal-detecting wristbands being developed by CTRL-Labs. "When your brain wants to go and effect something in these virtual spaces, your brain has to send a signal to your muscle, which has to move your hand, which has to move the device, which has to get picked up by the system, and turned into some sort of action there," Mike Astolfi, head of interactive experiences at CTRL-Labs, explained to Engadget. "And we think we can remove not only the mouse or the controller from that equation, but also, almost your hand from the equation."


Mind Control Isn't Sci-Fi Anymore

WIRED

Thomas Reardon puts a terrycloth stretch band with microchips and electrodes woven into the fabric--a steampunk version of jewelry--on each of his forearms. "This demo is a mind fuck," says Reardon, who prefers to be called by his surname only. He sits down at a computer keyboard, fires up his monitor, and begins typing. After a few lines of text, he pushes the keyboard away, exposing the white surface of a conference table in the midtown Manhattan headquarters of his startup. Only this time he is typing on…nothing. Yet the result is the same: The words he taps out appear on the monitor.