As a postdoc in the MIT Materials Systems Laboratory, Michele L. Bustamante works at the intersection of economics and materials science, creating models of supply and demand for raw materials important to high-tech industries, such as the tellurium needed for thin-film solar cells and the cobalt needed for lithium-ion batteries. Her experience over the last two years helped Bustamante win a one-year Congressional Science and Engineering Fellowship, which begins in September, from the Materials Research Society (MRS) and The Minerals, Metals and Materials Society (TMS). "I've worked with sponsors who are in the commodity industry, and they want to understand how are new things like renewable energy and electric and autonomous vehicles going to change demand for metals that they produce," Bustamante explains. During her time at MIT, she has presented work at various conferences including the 2017 Materials Research Society Fall Meeting and a recent Industrial Liaison Program Summit at MIT. She collaborated with Materials Systems Laboratory Director Richard Roth, Principal Research Scientist Randolph E. Kirchain, and MSL Faculty Director Joel P. Clark, professor emeritus of materials systems.
The Argonne Leadership Computing Facility (ALCF), a U.S. Department of Energy (DOE) Office of Science User Facility, has selected 10 data science and machine learning projects for its Aurora Early Science Program (ESP). Set to be the nation's first exascale system upon its expected 2021 arrival, Aurora will be capable of performing a quintillion calculations per second, making it 10 times more powerful than the fastest computer that currently exists. The Aurora ESP, which commenced with 10 simulation-based projects in 2017, is designed to prepare key applications, libraries, and infrastructure for the architecture and scale of the exascale supercomputer. Researchers in the Laboratory for Nuclear Science's Center for Theoretical Physics have been awarded funding for one of the projects under the ESP. Associate professor of physics William Detmold, assistant professor of physics Phiala Shanahan, and principal research scientist Andrew Pochinsky will use new techniques developed by the group, coupling novel machine learning approaches and state-of-the-art nuclear physics tools, to study the structure of nuclei.
Designing new molecules for pharmaceuticals is primarily a manual, time-consuming process that's prone to error. But MIT researchers have now taken a step toward fully automating the design process, which could drastically speed things up -- and produce better results. Drug discovery relies on lead optimization. In this process, chemists select a target ("lead") molecule with known potential to combat a specific disease, then tweak its chemical properties for higher potency and other factors. Often, chemists use expert knowledge to manually tweak molecules, adding and subtracting functional groups -- atoms and bonds responsible for specific chemical reactions -- one by one.
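The add-and-subtract loop described above can be caricatured in a few lines. Everything in this sketch is invented for illustration -- the functional-group names, the weights, and the "potency" score all stand in for real assays or a learned property predictor:

```python
import random

# Toy sketch of manual lead optimization: greedily toggle one
# functional group at a time, keeping a change only if it improves a
# stand-in "potency" score. All names and weights are hypothetical.

GROUPS = ["hydroxyl", "methyl", "amine", "carboxyl", "fluoro"]
WEIGHTS = {"hydroxyl": 2, "methyl": -1, "amine": 3, "carboxyl": 1, "fluoro": 2}

def potency(candidate):
    """Stand-in score for a candidate set of functional groups."""
    return sum(WEIGHTS[g] for g in candidate)

def optimize(lead, steps=50, seed=0):
    """Add or subtract one group per step, accepting only improvements."""
    rng = random.Random(seed)
    best = set(lead)
    for _ in range(steps):
        trial = best ^ {rng.choice(GROUPS)}  # toggle a single group
        if potency(trial) > potency(best):
            best = trial
    return best

result = optimize({"methyl"})
print(sorted(result), potency(result))
```

Automating the process, as the MIT work aims to do, amounts to replacing both the hand-picked edits and the expert scoring in a loop like this with learned components.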
Amateur and professional musicians alike may spend hours poring over YouTube clips to figure out exactly how to play certain parts of their favorite songs. But what if there were a way to play a video and isolate only the instrument you wanted to hear? That's the outcome of a new AI project out of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL): a deep-learning system that can look at a video of a musical performance, isolate the sounds of specific instruments, and make them louder or softer. The system, which is "self-supervised," doesn't require any human annotations on what the instruments are or what they sound like. Trained on over 60 hours of videos, the "PixelPlayer" system can view a never-before-seen musical performance, identify specific instruments at pixel level, and extract the sounds that are associated with those instruments.
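The separation network itself is beyond a short sketch, but the "louder or softer" step it enables is simple remixing: scale each separated instrument track by a user-chosen gain and sum the tracks back together. The instrument names and waveforms below are illustrative, not PixelPlayer's actual output format:

```python
# Sketch of the remixing step only (not the separation network):
# given per-instrument waveforms, scale each by a gain and sum them
# back into a single mix. Stems and sample values are made up.

def remix(stems, gains):
    """stems: {name: [samples]}, gains: {name: float} -> mixed samples."""
    length = max(len(samples) for samples in stems.values())
    mix = [0.0] * length
    for name, samples in stems.items():
        g = gains.get(name, 1.0)  # unlisted instruments pass through
        for i, x in enumerate(samples):
            mix[i] += g * x
    return mix

stems = {"violin": [0.1, 0.2, 0.3], "piano": [0.3, 0.1, -0.2]}
# Double the violin and mute the piano entirely.
louder_violin = remix(stems, {"violin": 2.0, "piano": 0.0})
print(louder_violin)  # -> [0.2, 0.4, 0.6]
```

Once a system can attribute sounds to pixels, this kind of per-instrument gain control is what lets a musician solo the part they want to learn.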
MIT's Cheetah 3 robot can now leap and gallop across rough terrain, climb a staircase littered with debris, and quickly recover its balance when suddenly yanked or shoved, all while essentially blind. The 90-pound mechanical beast -- about the size of a full-grown Labrador -- is intentionally designed to do all this without relying on cameras or any external environmental sensors. Instead, it nimbly "feels" its way through its surroundings in a way that engineers describe as "blind locomotion," much like making one's way across a pitch-black room. "There are many unexpected behaviors the robot should be able to handle without relying too much on vision," says the robot's designer, Sangbae Kim, associate professor of mechanical engineering at MIT. "Vision can be noisy, slightly inaccurate, and sometimes not available, and if you rely too much on vision, your robot has to be very accurate in position and eventually will be slow. So we want the robot to rely more on tactile information."
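The "feeling" Kim describes can be caricatured as fusing what the gait schedule expects with what the leg's own sensors report. The sketch below is a cartoon of contact detection, not MIT's actual controller; the weights and the torque threshold are assumptions:

```python
# Cartoon of blind contact detection (not the Cheetah 3 controller):
# combine the gait schedule's expectation with a proprioceptive torque
# cue to score how likely a leg is touching the ground. The 50/50
# weights and the 5.0 N*m torque threshold are assumptions.

def contact_score(phase_expects_stance, joint_torque,
                  w_gait=0.5, w_torque=0.5):
    gait_term = 1.0 if phase_expects_stance else 0.0
    torque_term = 1.0 if joint_torque > 5.0 else 0.0
    return w_gait * gait_term + w_torque * torque_term

def in_contact(phase_expects_stance, joint_torque, threshold=0.5):
    return contact_score(phase_expects_stance, joint_torque) >= threshold

# A leg scheduled for stance that also feels a torque spike: contact.
print(in_contact(True, 7.2))   # -> True
print(in_contact(False, 1.0))  # -> False
```

The point of such fusion is robustness: either cue alone can be wrong (a mistimed step, a sensor glitch), but agreement between schedule and touch gives a confident touchdown signal without any camera.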
You see the flour in the pantry, so you reach for it. You see the traffic light change to green, so you step on the gas. While the link between seeing and then moving in response is simple and essential to everyday existence, neuroscientists haven't been able to get beyond debating where the link is and how it's made. But in a new study in Nature Communications, a team from MIT's Picower Institute for Learning and Memory provides evidence that one crucial brain region called the posterior parietal cortex (PPC) plays an important role in converting vision into action. "Vision in the service of action begins with the eyes, but then that information has to be transformed into motor commands," says senior author Mriganka Sur, the Paul E. and Lilah Newton Professor of Neuroscience in the Department of Brain and Cognitive Sciences.
Four hungry MIT student-athletes were on a mission to find a filling, inexpensive meal and ended up creating the first robotic kitchen. "It was a natural solution to the problem of creating inexpensive, healthy food. We just wanted to figure out how to cook in a new way," says Kale Rogers '16, who co-founded Spyce -- a fast-casual eatery featuring a robotic kitchen -- with his friends, fraternity brothers, and fellow Course 2 (mechanical engineering) graduates Braden Knight '16, Luke Schlueter '16, and Michael Farid '14, SM '16. "We wanted to see if we could automate the process and make it as efficient as possible so we could get a meal right around $7.50 as opposed to $12." With their minds set on reworking the process, their background in engineering started to come into play.
"In some ways you can compare what we are trying to do to self-driving cars." MIT postdoc Cristina Rea is describing her work at the Institute's Plasma Science and Fusion Center (PSFC), where she is exploring ways to predict disruptions in the turbulent plasma that fuels fusion tokamak reactors. "You want to be able to predict when an object presents an obstacle for your car," she continues with her comparison. "Likewise, in fusion devices, you need to be able to predict disruptions, with enough warning time so you can take actions to avoid a problem." Tokamaks use magnetic fields to contain hot plasma in a donut-shaped vacuum chamber long enough for fusion to occur. Chaotic and unpredictable, the plasma resists confinement and disrupts.
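Whatever model produces it, a disruption predictor's output ultimately feeds an alarm that must fire with enough warning time for the control system to act. The toy below illustrates that last step only; the risk series, threshold, and hold count are made up and are not the PSFC model:

```python
# Minimal sketch of a disruption alarm (illustrative, not the PSFC
# predictor): scan a model's disruption-risk time series and raise an
# alarm the first time risk stays above a threshold for several
# consecutive samples, giving the control system time to react.

def first_alarm(risk, threshold=0.7, hold=3):
    run = 0
    for t, r in enumerate(risk):
        run = run + 1 if r > threshold else 0  # count consecutive high-risk samples
        if run >= hold:
            return t  # sample index at which the alarm fires
    return None  # no alarm: risk never stayed high long enough

risk = [0.1, 0.2, 0.5, 0.75, 0.8, 0.9, 0.95]
print(first_alarm(risk))  # -> 5
```

Requiring several consecutive high-risk samples trades a little warning time for fewer false alarms, which matters because a false trip also ends the plasma shot.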
If you get the chance to talk with MIT Lecturer Kyle Keane or his colleague Andrew Ringler about their course, RES.3-003 (Learn to Build Your Own Videogame with Unity Game Engine and Microsoft Kinect), you probably won't discuss technology much -- if at all. Instead, be prepared to hear terms like behavioral modeling, emotional intelligence, vulnerability, and positive psychology. Keane, this year's recipient of the MIT School of Engineering Infinite Mile Award for Excellence and the James N. Murphy Award for inspired and dedicated service to students, delivers a rigorous project-based curriculum, where he and Ringler consciously employ positive education techniques to create a kinder, bolder, and more effective learning experience. In this nine-day hands-on workshop about scientific communication and public engagement, students learn to design, build, and publish video games using the Unity game engine. Students also gain experience in collaborative software development with GitHub, gesture-based human-computer interactions using Microsoft Kinect, automation and robotics using Arduino, as well as 3-D digital object creation, video game design, and small-team management.
Children with autism spectrum conditions often have trouble recognizing the emotional states of people around them -- distinguishing a happy face from a fearful face, for instance. To remedy this, some therapists use a kid-friendly robot to demonstrate those emotions and to engage the children in imitating the emotions and responding to them in appropriate ways. This type of therapy works best, however, if the robot can smoothly interpret the child's own behavior -- whether he or she is interested and excited or paying attention -- during the therapy. Researchers at the MIT Media Lab have now developed a type of personalized machine learning that helps robots estimate the engagement and interest of each child during these interactions, using data that are unique to that child. Armed with this personalized "deep learning" network, the robots' perception of the children's responses agreed with assessments by human experts, with a correlation score of 60 percent, the scientists report June 27 in Science Robotics.
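The agreement metric reported above is a correlation score. A plain Pearson correlation between the robot's engagement estimates and the experts' ratings can be computed as below; the numbers are made up for illustration and are not the study's data:

```python
import math

# Sketch of the agreement metric: Pearson correlation between a
# robot's per-session engagement estimates and human expert ratings.
# The example values below are invented for illustration.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

robot_estimates = [0.2, 0.5, 0.9, 0.4, 0.7]
expert_ratings = [0.3, 0.4, 0.8, 0.5, 0.6]
print(round(pearson(robot_estimates, expert_ratings), 2))  # -> 0.94
```

A score of 1.0 would mean the robot's estimates rise and fall in perfect lockstep with the experts'; the 60 percent figure reported in Science Robotics indicates substantial but imperfect agreement.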