Before computers, no sane person would have set out to count gender pronouns in 4,000 novels, but the results can be revealing, as MIT's new digital humanities program recently discovered. Launched with a $1.3 million grant from the Andrew W. Mellon Foundation, the Program in Digital Humanities brings computation together with humanities research, with the goal of building a community "fluent in both languages," says Michael Scott Cuthbert, associate professor of music, Music21 inventor, and director of digital humanities at MIT. "In the past, it has been somewhat rare, and extremely rare beyond MIT, for humanists to be fully equipped to frame questions in ways that are easy to put in computer science terms, and equally rare for computer scientists to be deeply educated in humanities research. There has been a communications gap," Cuthbert says. While traditional digital humanities programs attempt to provide humanities scholars with some computational skills, the situation at MIT is different: Most MIT students already have or are learning basic programming skills, and all MIT undergraduates also take some humanities classes. Cuthbert believes this difference will make MIT's program a great success.
Vivienne Sze, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), has received the 2018-2019 Harold E. Edgerton Faculty Achievement Award. The award, announced at today's MIT faculty meeting, commends Sze for "her seminal and highly regarded contributions in the critical areas of deep learning and low-power video coding, and for her educational successes and passion in championing women and under-represented minorities in her field." Sze's research involves the co-design of energy-aware signal processing algorithms and low-power circuits, architectures, and systems for a broad set of applications, including machine learning, computer vision, robotics, image processing, and video coding. She is currently working on projects focusing on autonomous navigation and embedded artificial intelligence (AI) for health-monitoring applications. "In the domain of deep learning, [Sze] created the Eyeriss chip for accelerating deep learning algorithms, building a flexible architecture to handle different convolutional shapes," the Edgerton Faculty Award selection committee said in announcing its decision.
A science writer's work, this one included, involves reading journal papers filled with specialized technical terminology and figuring out how to explain their contents in language that readers without a scientific background can understand. Now, a team of scientists at MIT and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two. Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists scan a large number of papers to get a preliminary sense of what they're about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition. The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a senior scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.
A novel technique developed by MIT researchers rethinks hardware data compression to free up more memory used by computers and mobile devices, allowing them to run faster and perform more tasks simultaneously. Data compression leverages redundant data to free up storage capacity, boost computing speeds, and provide other perks. In current computer systems, accessing main memory is very expensive compared to actual computation. Because of this, using data compression in the memory helps improve performance, as it reduces the frequency and amount of data programs need to fetch from main memory. Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate.
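The idea of compressing redundant data within fixed-size memory chunks can be illustrated with a toy run-length scheme. This is a minimal sketch for intuition only, assuming zero-byte redundancy as the pattern being exploited; it is not the researchers' actual technique:

```python
def compress_block(block: bytes) -> bytes:
    """Toy run-length compression of one fixed-size memory chunk.

    Runs of zero bytes (a common form of redundancy in real memory)
    are encoded as a (0x00, run_length) pair; nonzero bytes are
    copied through literally, so 0x00 in the output is unambiguous.
    """
    out = bytearray()
    i = 0
    while i < len(block):
        if block[i] == 0:
            run = 1
            while i + run < len(block) and block[i + run] == 0 and run < 255:
                run += 1
            out += bytes([0x00, run])
            i += run
        else:
            out.append(block[i])
            i += 1
    return bytes(out)


def decompress_block(data: bytes) -> bytes:
    """Invert compress_block, restoring the original chunk."""
    out = bytearray()
    i = 0
    while i < len(data):
        if data[i] == 0x00:
            out += b"\x00" * data[i + 1]
            i += 2
        else:
            out.append(data[i])
            i += 1
    return bytes(out)


# A hypothetical 64-byte cache line: mostly zeros with a little payload.
line = b"\x00" * 48 + b"ABCDEFGH" + b"\x00" * 8
packed = compress_block(line)
assert decompress_block(packed) == line
assert len(packed) < len(line)  # fewer bytes to move from main memory
```

The payoff mirrors the article's point: a chunk that compresses well costs less to transfer, so programs fetch less data from main memory per access.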
Computational neuroscientist Sarah Schwettmann is one of three instructors behind the cross-disciplinary course 9.S52/9.S916 (Vision in Art and Neuroscience), which introduces students to core concepts in visual perception through the lenses of art and neuroscience. Supported by a faculty grant from the Center for Art, Science and Technology at MIT (CAST) for the past two years, the class is led by Pawan Sinha, a professor of vision and computational neuroscience in the Department of Brain and Cognitive Sciences. They are joined in the course by Seth Riskin SM '89, a light artist and the manager of the MIT Museum Studio and Compton Gallery, where the course is taught. Schwettmann discussed the combination of art and science in an educational setting. Q: How have the three of you approached this cross-disciplinary class in art and neuroscience?
Two MIT alumnae and three current MIT doctoral students are among this year's 30 recipients of The Paul and Daisy Soros Fellowships for New Americans. The five students -- Joseph Maalouf, Indira Puri, Grace Zhang, Helen Zhou, and Jonathan Zong -- will each receive up to $90,000 to fund their doctoral educations. The newest fellows were selected from a pool of 1,767 applications based on their potential to make significant contributions to U.S. society, culture, or their academic fields. The P.D. Soros Fellowships are open to all American immigrants and children of immigrants, including DACA recipients, refugees, and asylum seekers. In the past nine years, 34 MIT students and alumni have been awarded this fellowship.
After being involved in two serious car accidents, Chien-Chih "Ernie" Ho dedicated himself to a lifelong goal: Develop self-driving car technology that could save millions of lives. Ho had learned from watching a TED Talk that self-driving car technology could prevent accidents -- potentially saving 3 million lives each year -- furthering his interest in the burgeoning area. However, as a student at National Chengchi University, a top business school in Taiwan, he had limited access to resources in technology education. Though he taught himself programming, winning several international software competitions in the process, it was difficult to find opportunities in the self-driving car industry. "In Taiwan, few people and schools are involved in the autonomous vehicle industry," he says.
If you look under the hood of the internet, you'll find lots of gears churning along that make it all possible. For example, take a company like AT&T. They have to intimately understand what internet data are going where so that they can better accommodate different levels of usage. But it isn't practical to precisely monitor every packet of data, because companies simply don't have unlimited amounts of storage space. Because of this, tech companies use special algorithms to roughly estimate the amount of traffic heading to different IP addresses.
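One classic family of such frequency-estimation algorithms is the count-min sketch, which keeps approximate per-key counts in a fixed amount of memory. The article does not name the specific algorithm companies use, so treat this as an illustrative sketch with made-up IP addresses:

```python
import hashlib


class CountMinSketch:
    """Approximate per-key counts in a fixed-size table.

    Each key is hashed to one counter in every row; the estimate is
    the minimum across rows, which may overcount (hash collisions)
    but never undercounts.
    """

    def __init__(self, rows: int = 4, cols: int = 1024):
        self.rows, self.cols = rows, cols
        self.table = [[0] * cols for _ in range(rows)]

    def _index(self, key: str, row: int) -> int:
        # Derive an independent hash per row by salting with the row number.
        digest = hashlib.sha256(f"{row}:{key}".encode()).digest()
        return int.from_bytes(digest[:8], "big") % self.cols

    def add(self, key: str, count: int = 1) -> None:
        for r in range(self.rows):
            self.table[r][self._index(key, r)] += count

    def estimate(self, key: str) -> int:
        return min(self.table[r][self._index(key, r)] for r in range(self.rows))


# Hypothetical traffic: one busy destination, one quiet one.
sketch = CountMinSketch()
for _ in range(500):
    sketch.add("203.0.113.7")
sketch.add("198.51.100.2", 3)
assert sketch.estimate("203.0.113.7") >= 500
assert sketch.estimate("198.51.100.2") >= 3
```

The table size is fixed regardless of how many distinct IP addresses appear, which is exactly the trade-off the paragraph describes: bounded storage in exchange for estimates rather than exact counts.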
What goes into making plants taste good? For scientists in MIT's Media Lab, it takes a combination of botany, machine-learning algorithms, and some good old-fashioned chemistry. Using all of the above, researchers in the Media Lab's Open Agriculture Initiative report that they have created basil plants that are likely more delicious than any you have ever tasted. No genetic modification is involved: The researchers used computer algorithms to determine the optimal growing conditions to maximize the concentration of flavorful molecules known as volatile compounds. But that is just the beginning for the new field of "cyber agriculture," says Caleb Harper, a principal research scientist in MIT's Media Lab and director of the OpenAg group.
A child who has never seen a pink elephant can still describe one -- unlike a computer. "The computer learns from data," says Jiajun Wu, a PhD student at MIT. "The ability to generalize and recognize something you've never seen before -- a pink elephant -- is very hard for machines." Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa's latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits.