Feel Me

The New Yorker

On a bitter, soul-shivering, damp, biting gray February day in Cleveland--that is to say, on a February day in Cleveland--a handless man is handling a nonexistent ball. Igor Spetic lost his right hand six years ago, when his forearm was pulped in an industrial accident and had to be amputated. In an operation four years ago, a team of surgeons implanted a set of small translucent "interfaces" into the neural circuits of his upper arm. This afternoon, in a basement lab at a Veterans Administration hospital, the wires are hooked up directly to a prosthetic hand--plastic, flesh-colored, five-fingered, and articulated--that is affixed to what remains of his arm. The hand has more than a dozen pressure sensors within it, and their signals can be transformed by a computer into electric waves like those natural to the nervous system. Since, from the brain's point of view, his hand is still there, it needs only to be recalled to life. With the "stimulation" turned on--the electronic feed coursing from the sensors--Spetic feels nineteen distinct sensations in his artificial hand. Above all, he can feel pressure as he would with a living hand. "We don't appreciate how much of our behavior is governed by our intense sensitivity to pressure," Dustin Tyler, the fresh-faced principal investigator on the Cleveland project, says, observing Spetic closely. "We think of hot and cold, or of textures, silk and cotton. But some of the most important sensing we do with our fingers is to register incredibly minute differences in pressure, of the kinds that are necessary to perform tasks, which we grasp in a microsecond from the feel of the outer shell of the thing. We know instantly, just by touching, whether to gently squeeze the toothpaste or crush the can." With the new prosthesis, Spetic can sense the surface of a cherry in a way that allows him to stem it effortlessly and precisely, guided by what he feels, rather than by what he sees.
Prosthetic hands like Spetic's tend to be super-strong, capable of forty pounds of pressure, so the risk of crushing an egg is real. The stimulation sensors make delicate tasks easy. Spetic comes into the lab every other week; the rest of the time he is busy pursuing a degree in engineering, which he has taken up while on disability.

Joint Embedding of Graphs

arXiv.org Machine Learning

Feature extraction and dimension reduction for networks are critical in a wide variety of domains. Efficiently and accurately learning features for multiple graphs has important applications in statistical inference on graphs. We propose a method to jointly embed multiple undirected graphs. Given a set of graphs, the joint embedding method identifies a linear subspace spanned by rank-one symmetric matrices and projects the adjacency matrices of the graphs into this subspace. The projection coefficients can be treated as features of the graphs. We also propose a random graph model which generalizes the classical random graph model and can be used to model multiple graphs. We show through theory and numerical experiments that, under this model, the joint embedding method produces estimates of the parameters with small errors. Via simulation experiments, we demonstrate that the joint embedding method produces features which lead to state-of-the-art performance in classifying graphs. Applying the joint embedding method to human brain graphs, we find that it extracts interpretable features that can be used to predict an individual's composite creativity index.
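The core idea--approximating each adjacency matrix A_i as a weighted sum of shared rank-one symmetric matrices h_k h_k^T, and using the weights as per-graph features--can be illustrated with a toy sketch. This is not the authors' estimator; it is a minimal greedy-deflation illustration in NumPy, where each shared component h_k is assumed (for simplicity) to be the leading eigenvector of the averaged residual, and each coefficient is the least-squares projection of a graph's residual onto h_k h_k^T.

```python
import numpy as np

def joint_embed(adjacencies, d):
    """Toy joint embedding sketch (not the paper's algorithm).

    Greedily finds d shared rank-one symmetric components h_k h_k^T
    and, for each graph, the least-squares coefficient of its residual
    on each component. The coefficient matrix serves as graph features.
    """
    residuals = [A.astype(float).copy() for A in adjacencies]
    components = []                                  # shared vectors h_k
    coeffs = np.zeros((len(adjacencies), d))         # per-graph features
    for k in range(d):
        # Assumed heuristic: take the leading (largest-magnitude)
        # eigenvector of the average residual as the shared component.
        avg = sum(residuals) / len(residuals)
        vals, vecs = np.linalg.eigh(avg)
        h = vecs[:, np.argmax(np.abs(vals))]         # unit-norm vector
        H = np.outer(h, h)                           # rank-one symmetric matrix
        for i, R in enumerate(residuals):
            # Least-squares coefficient: <R, H> / <H, H>; since ||h|| = 1,
            # <H, H> = 1, but we keep the general form for clarity.
            lam = np.sum(R * H) / np.sum(H * H)
            coeffs[i, k] = lam
            residuals[i] = R - lam * H               # deflate this graph
        components.append(h)
    return np.array(components), coeffs
```

Each row of the returned coefficient matrix is a d-dimensional feature vector for one graph, which could then feed a standard classifier or regressor, in the spirit of the graph-classification and creativity-prediction experiments described above.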



Recitations from Tel-Aviv University's introductory computer science course, assembled as IPython notebooks by Yoav Ram. Exploratory Computing with Python, a set of 15 notebooks covering exploratory computing, data analysis, and visualization, was developed by Mark Bakker for undergraduate engineering students at the Delft University of Technology. No prior programming knowledge is required, and each notebook includes a number of exercises (with answers) that should take less than four hours to complete.

Why AI is about to make some of the highest-paid doctors obsolete - TechRepublic


Radiologists bring home $395,000 each year, on average. In the near future, however, those numbers promise to drop to $0. Don't blame Obamacare, or even Trumpcare (whatever that turns out to be); blame instead the rise of machine learning and its applicability to radiology and pathology, two areas of medicine heavily focused on pattern matching, a job better done by a machine than by a human. This is the argument put forward by Dr. Ziad Obermeyer, of Harvard Medical School and Brigham and Women's Hospital, and Ezekiel Emanuel, PhD, of the University of Pennsylvania, in an article for the New England Journal of Medicine, one of the medical profession's most prestigious journals. Machine learning will produce big winners and losers in healthcare, the authors argue, with radiologists and pathologists among the biggest losers.