Estimate the Pulling Force of Boston Dynamics' Robo-Dog Army

WIRED

When Boston Dynamics shares a new robot video, my robophobia levels increase just a little bit. There is something about these robots that gets into the uncanny valley for me. This particular video is both fascinating and disturbing. It's fascinating because here are a bunch of robots pulling a truck (not a pickup truck--a real truck). It's disturbing because it shows a BUNCH of robots.


MIT Program in Digital Humanities launches with $1.3 million Mellon Foundation grant

MIT News

Before computers, no sane person would have set out to count gender pronouns in 4,000 novels, but the results can be revealing, as MIT's new digital humanities program recently discovered. Launched with a $1.3 million grant from the Andrew W. Mellon Foundation, the Program in Digital Humanities brings computation together with humanities research, with the goal of building a community "fluent in both languages," says Michael Scott Cuthbert, associate professor of music, Music21 inventor, and director of digital humanities at MIT. "In the past, it has been somewhat rare, and extremely rare beyond MIT, for humanists to be fully equipped to frame questions in ways that are easy to put in computer science terms, and equally rare for computer scientists to be deeply educated in humanities research. There has been a communications gap," Cuthbert says. While traditional digital humanities programs attempt to provide humanities scholars with some computational skills, the situation at MIT is different: Most MIT students already have or are learning basic programming skills, and all MIT undergraduates also take some humanities classes. Cuthbert believes this difference will make MIT's program a great success.
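As a toy illustration of the kind of computational counting mentioned above, here is a minimal Python sketch that tallies gendered pronouns in a plain-text novel. The file name and pronoun lists are illustrative assumptions, not the MIT program's actual methodology.

```python
import re
from collections import Counter

# Illustrative pronoun sets; a real study would also handle dialogue,
# possessives in context, and so on.
FEMININE = {"she", "her", "hers", "herself"}
MASCULINE = {"he", "him", "his", "himself"}

def pronoun_counts(path):
    """Count feminine and masculine pronouns in a plain-text file."""
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    counts = Counter(words)
    fem = sum(counts[w] for w in FEMININE)
    masc = sum(counts[w] for w in MASCULINE)
    return fem, masc

if __name__ == "__main__":
    fem, masc = pronoun_counts("novel.txt")  # hypothetical input file
    print(f"feminine: {fem}, masculine: {masc}, ratio: {fem / max(masc, 1):.2f}")
```

Run over a few thousand novels, counts like these are what turn an impractical hand-tallying task into a quick computation.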


Vivienne Sze wins Edgerton Faculty Award

MIT News

Vivienne Sze, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), has received the 2018-2019 Harold E. Edgerton Faculty Achievement Award. The award, announced at today's MIT faculty meeting, commends Sze for "her seminal and highly regarded contributions in the critical areas of deep learning and low-power video coding, and for her educational successes and passion in championing women and under-represented minorities in her field." Sze's research involves the co-design of energy-aware signal processing algorithms and low-power circuits, architectures, and systems for a broad set of applications, including machine learning, computer vision, robotics, image processing, and video coding. She is currently working on projects focusing on autonomous navigation and embedded artificial intelligence (AI) for health-monitoring applications. "In the domain of deep learning, [Sze] created the Eyeriss chip for accelerating deep learning algorithms, building a flexible architecture to handle different convolutional shapes," the Edgerton Faculty Award selection committee said in announcing its decision.


Can science writing be automated?

MIT News

The work of science writers, including this one, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.
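The paper describes the team's own neural network; as a rough illustration of the summarization task itself (not their model), here is a sketch using an off-the-shelf pretrained model from the Hugging Face Transformers library. The model name and placeholder abstract are assumptions for demonstration only.

```python
# Generic abstractive-summarization sketch using Hugging Face Transformers.
# This is NOT the MIT team's network -- just an off-the-shelf model used to
# illustrate condensing a paper abstract into a sentence or two.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

abstract = (
    "Paste a full paper abstract here; longer inputs give the model "
    "more to work with when producing a short plain-English summary."
)

summary = summarizer(abstract, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```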


Facebook is reportedly working on an Alexa rival

Mashable

Facebook is now working on its own digital assistant, according to a new report from CNBC. It's not clear exactly how the assistant will work or what it will be called, though CNBC reports it could be integrated with Facebook's Oculus virtual reality headsets or with the company's Portal speakers. Right now, Portal relies on Alexa for assistant functionality, though you can control speaker functions like volume by saying "hey Portal." Facebook doesn't yet have an AI assistant of its own, despite longstanding rumors about its ambitions in the space. The latest project is reportedly being led by Ira Snyder, who works in Facebook's Reality Labs.


PhD student in machine learning and computational biology, University of Helsinki

#artificialintelligence

The Institute for Molecular Medicine Finland (FIMM) is an international research unit focusing on human genomics and personalised medicine at the Helsinki Institute of Life Science (HiLIFE) of the University of Helsinki - a leading Nordic university with a strong commitment to life science research. FIMM is part of the Nordic EMBL Partnership for Molecular Medicine, composed of the European Molecular Biology Laboratory (EMBL) and the centres for molecular medicine in Norway, Sweden and Denmark, and of the EU-LIFE Community. A PhD student position is available in the research group of FIMM-EMBL Group Leader Dr. Esa Pitkänen at the Institute for Molecular Medicine Finland (FIMM), University of Helsinki. The research group will start at FIMM in July 2019 and will address data integration, analysis, and interpretation challenges stemming from massive-scale data generated in clinical and research settings. We will work closely with interdisciplinary collaborators at the University of Helsinki, Helsinki University Hospital, EMBL, and the German Cancer Research Center.


Learn about the Types of Machine Learning Algorithms

#artificialintelligence

We are living in a digitized world in which automation has eliminated a great deal of human work; Google's self-driving car is one defining example. And this period is far from its final stages: it keeps multiplying into many more impressive things that will surface in the near future. The concept at the heart of these transformations is machine learning, which is essentially allowing computers to learn on their own and arrive at useful insights. Supervised learning is like a teacher instructing students with worked examples; after sufficient practice, the teacher stops supervising and lets the students arrive at solutions on their own.
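To make the teacher-and-student analogy concrete, here is a minimal supervised-learning sketch using scikit-learn and its built-in Iris dataset; the model choice and dataset are illustrative assumptions, not taken from the article.

```python
# Minimal supervised-learning sketch: fit on labelled examples (the "teacher"
# phase), then predict on unseen data (the student working on its own).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                      # labelled examples
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                            # learn from the "teacher"
preds = model.predict(X_test)                          # unseen examples, no labels given
print(f"accuracy: {accuracy_score(y_test, preds):.2f}")
```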


A Deep Dive into Deep Learning

#artificialintelligence

On Wednesday, March 27, the 2018 Turing Award in computing was given to Yoshua Bengio, Geoffrey Hinton and Yann LeCun for their work on deep learning. Deep learning by complex neural networks lies behind the applications that are finally bringing artificial intelligence out of the realm of science fiction into reality. Voice recognition allows you to talk to your robot devices. Image recognition is the key to self-driving cars. But what, exactly, is deep learning?
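As one concrete (if tiny) answer to that question, here is a sketch of a small multi-layer "deep" network in Keras learning a nonlinear decision boundary. The synthetic dataset and architecture are illustrative assumptions, not drawn from the article.

```python
# A minimal "deep" network: stacked layers learning a nonlinear mapping.
# Keras/TensorFlow and the synthetic circle dataset are illustrative choices.
import numpy as np
from tensorflow import keras

# Synthetic task: classify 2-D points by whether they fall inside a circle.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(2,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
```

The "deep" part is simply the stack of hidden layers, each transforming the previous layer's output before the final prediction.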


A Gentle Introduction to Convolutional Layers for Deep Learning Neural Networks

#artificialintelligence

Convolution and the convolutional layer are the major building blocks used in convolutional neural networks. A convolution is the simple application of a filter to an input that results in an activation. Repeated application of the same filter to an input results in a map of activations called a feature map, indicating the locations and strength of a detected feature in an input, such as an image. The innovation of convolutional neural networks is the ability to automatically learn a large number of filters in parallel specific to a training dataset under the constraints of a specific predictive modeling problem, such as image classification. The result is highly specific features that can be detected anywhere on input images. In this tutorial, you will discover how convolutions work in the convolutional neural network.
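As a rough sketch of the idea described above (not the tutorial's own code), the following NumPy example slides a hand-crafted 3x3 vertical-edge filter over a tiny synthetic image and prints the resulting feature map; in a real CNN, the filter values would be learned rather than hand-set.

```python
# Minimal sketch of one convolution filter producing a feature map.
# NumPy only; the vertical-edge filter and 8x8 input are illustrative.
import numpy as np

# Input "image": left half dark (0), right half bright (1).
image = np.zeros((8, 8))
image[:, 4:] = 1.0

# A 3x3 vertical-edge filter, like one a CNN might learn automatically.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

def conv2d(img, k):
    """Valid 2-D cross-correlation: slide the filter, sum elementwise products."""
    h, w = img.shape
    kh, kw = k.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = conv2d(image, kernel)
print(feature_map)  # strong activations along the columns where the edge sits
```

The repeated application of the same filter across the image is what makes the detected feature "findable" anywhere in the input.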


Introduction to Deep Q-Learning for Reinforcement Learning (in Python)

#artificialintelligence

I have always been fascinated with games. The seemingly infinite options available to perform an action under a tight timeline – it's a thrilling experience. So when I read about the incredible algorithms DeepMind was coming up with (like AlphaGo and AlphaStar), I was hooked. I wanted to learn how to make these systems on my own machine. And that led me into the world of deep reinforcement learning (Deep RL).
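Before the deep version, the core idea is the tabular Q-learning update, which Deep Q-Learning approximates with a neural network. Here is a minimal sketch on a made-up five-state corridor environment; the environment and hyperparameters are illustrative assumptions, not taken from the article.

```python
# Tabular Q-learning sketch: the update rule that Deep Q-Learning approximates
# with a neural network. The tiny 5-state "corridor" environment below is
# invented purely for illustration.
import numpy as np

N_STATES = 5                           # states 0..4; reward for reaching state 4
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move left (0) or right (1); reward 1.0 for reaching the rightmost state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

rng = np.random.default_rng(0)
Q = rng.uniform(0, 0.01, size=(N_STATES, 2))   # tiny random init breaks ties

for episode in range(500):
    s = 0
    for t in range(100):                        # cap episode length
        a = rng.integers(2) if rng.random() < EPSILON else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # Q-learning update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
        s = s2
        if done:
            break

print(np.round(Q, 2))  # the right-moving column should end up with higher values
```

Deep RL swaps the Q table for a network that estimates Q-values from raw observations, which is what lets systems like DeepMind's agents handle games far too large for a table.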