The Better World Tour comes home to Boston

MIT News

On Sept. 28, the Better World tour was back in MIT's own neighborhood, at the Boch Center Wang Theatre in downtown Boston. More than 1,000 MIT alumni and friends were in attendance to celebrate the MIT Campaign for a Better World, a galvanizing effort that has gathered momentum and participation at events around the world since its public launch in May 2016. Guests who might have thought that listening would be their only role in the program were in for a pleasant surprise. Eran Egozy '95, MNG '95, MIT professor of the practice in music technology, a cofounder of Harmonix Music Systems, and the creator of "Guitar Hero," kicked off the evening by inviting the audience to join him in a classic MIT experiment. Using a new music application called "Tutti" (Italian for "together") and audience members' cell phones, Egozy transformed the audience into an orchestra for a rendition of "Engineered Engineers," a composition created for the event by Evan Ziporyn, the Kenan Sahin Distinguished Professor and Music and Theater Arts chair.

What Data Science Can Tell Us About Our World


A daylong conference will cover a wide range of topics related to computational data analysis, from how languages spread to ways of improving the value of crowdsourcing. The Data Science Workshop on Computational Social Science takes place Oct. 20. It's the first of what Dragomir Radev, the A. Bartlett Giamatti Professor of Computer Science, expects will be a regular event. "We decided we should try to cover different areas of data science," said Radev, one of the event's organizers. "We're starting with computational social science first and will then switch to other areas in which data science and computer science have made an impact -- for example, digital humanities, medicine, finance, etc." Radev said the event is something that likely would not have happened 10 or even five years ago.

AI news: CBI urges creation of AI commission in 2018


The CBI (Confederation of British Industry) is asking the government to launch an AI commission in 2018 to examine the effect of artificial intelligence on jobs. The CBI, an organisation that speaks on behalf of 190,000 businesses across the UK, has released a report titled 'Disrupting the Future', which highlights how firms and the government must pave the way for the adoption of new technologies. It has called on the government to establish a joint commission in early 2018 involving business, employee representatives, academics and a minister to examine the impact of AI on people and jobs. It also hopes the commission will be able to set out an action plan to outline how to raise productivity, spread prosperity and open up new paths to economic growth. Josh Hardie, CBI deputy director-general, said: "The UK must lead the way in adopting these technologies but we must also prepare for their impacts."



Reverse engineering nature's shape-making rules


Nature has a way of making complex shapes from a set of simple growth rules. The curve of a petal, the swoop of a branch, even the contours of our face are shaped by these processes. What if we could unlock those rules and reverse engineer nature's ability to grow an infinitely diverse array of shapes? Scientists from Harvard's Wyss Institute for Biologically Inspired Engineering and the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have done just that. In a paper published in the Proceedings of the National Academy of Sciences, the team demonstrates a technique to grow any target shape from any starting shape.
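The idea of complex shape emerging from simple local rules can be sketched with a classic toy model, the L-system, in which a shape description grows by repeated string rewriting. This is only an illustration of the "simple growth rules" idea, not the technique from the Wyss/SEAS paper.

```python
def grow(axiom, rules, generations):
    """Apply simple local rewrite rules repeatedly -- a toy stand-in
    for complex shape arising from simple growth rules (an L-system,
    not the method described in the PNAS paper)."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Koch-curve rule: every straight segment F sprouts a square bump.
rules = {"F": "F+F-F-F+F"}
# Three generations of growth: 1 -> 9 -> 49 -> 249 symbols.
print(len(grow("F", rules, 3)))  # -> 249
```

Interpreting F as "draw forward" and +/- as turns, each generation traces a visibly more intricate curve, even though the rule itself never changes.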

IoT, Drones And AI: How Technology Can Help In Disaster Situations


The spate of natural disasters that have hit the U.S. has demonstrated that much remains to be done when it comes to rescue efforts, preventive measures, and other associated issues. Many tech companies, such as Google, Facebook and Microsoft, have made large-scale efforts to help keep people safe during natural disasters, and the relevance of technology in such situations has been rapidly increasing. Technology can play a large role during natural disasters, as was demonstrated during Hurricanes Harvey and Irma. Experts from IEEE, the world's largest technical professional organization, provided their opinions on the use of technology in natural disasters in an email to the International Business Times. Dr. Massoud Amin, IEEE fellow and professor of electrical and computer engineering at the University of Minnesota, gave IBT his take on one of the most important aspects of technology during natural disasters, stating, "It [technology] enables better proactive planning, prepositioning of the assets and assists with more real-time recovery and restoration."

How we determine who's to blame

MIT News

How do people assign a cause to events they witness? Some philosophers have suggested that people determine responsibility for a particular outcome by imagining what would have happened if a suspected cause had not intervened. This kind of reasoning, known as counterfactual simulation, is believed to occur in many situations. For example, soccer referees deciding whether a player should be credited with an "own goal" -- a goal accidentally scored for the opposing team -- must try to determine what would have happened had the player not touched the ball. This process can be conscious, as in the soccer example, or unconscious, so that we are not even aware we are doing it.
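A minimal sketch of counterfactual simulation in the own-goal setting: simulate the ball's path many times with and without the player's touch, and credit the touch as the cause when the outcome differs between the two worlds. The one-dimensional "physics" and all numbers below are invented for illustration; this is not the researchers' model.

```python
import random

def goal_scored(ball_x, deflection):
    """Toy physics: the ball counts as a goal if its final lateral
    position lands inside the goal mouth, which spans [-1, 1]."""
    return -1.0 <= ball_x + deflection <= 1.0

def caused_goal(ball_x, deflection, noise=0.3, trials=10_000):
    """Counterfactual simulation: estimate how often the goal happens
    WITH the player's touch (deflection) but NOT without it."""
    caused = 0
    for _ in range(trials):
        jitter = random.gauss(0.0, noise)       # uncertainty about the ball's path
        actual = goal_scored(ball_x + jitter, deflection)
        counterfactual = goal_scored(ball_x + jitter, 0.0)  # no touch
        if actual and not counterfactual:
            caused += 1
    return caused / trials

# A ball headed wide (x = 1.5) deflected toward the goal (-1.0):
# in most simulated worlds the touch makes the difference,
# so the player is judged to have caused the goal.
print(caused_goal(1.5, -1.0))
```

The key move mirrors the referee's reasoning: responsibility tracks not what happened, but how often the outcome would have changed had the candidate cause been absent.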

AI used to detect breast cancer


US scientists are using artificial intelligence to predict whether breast lesions identified from a biopsy will turn out to be cancerous. The machine learning system has been tested on 335 high-risk lesions and correctly diagnosed 97% as malignant. It reduced the number of unnecessary surgeries by more than 30%, the scientists said. One breast cancer specialist said that the research was "useful". Trained on information about such lesions, the system looks for patterns among a range of data points, such as demographics, family history, biopsies and pathology reports.
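A toy version of this kind of pattern-finding: a from-scratch logistic regression fit to a handful of made-up lesion records. The features, labels, and model below are invented for illustration and bear no relation to the study's actual data or method.

```python
from math import exp

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

def train_logreg(X, y, lr=0.1, epochs=500):
    """Fit a tiny logistic-regression model with stochastic gradient
    descent, learning which feature patterns predict malignancy."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                       # gradient of the log loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5

# Synthetic records: [age (scaled), family history, suspicious-biopsy score]
X = [[0.7, 1, 0.9], [0.6, 1, 0.8], [0.8, 0, 0.7],   # malignant-looking
     [0.3, 0, 0.2], [0.2, 0, 0.1], [0.4, 1, 0.3]]   # benign-looking
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)
print([predict(w, b, xi) for xi in X])
```

The real system's value comes from the same principle at scale: combining many weak signals across thousands of records into a single risk estimate.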

There's a huge opportunity in robotics for early career computer scientists and serious software engineers


There's a major roadblock to deeper market penetration of enterprise robotics, and a new generation of early career computer scientists and more seasoned software engineers may hold the answer. I recently had a chance to speak with Maya Cakmak, assistant professor in the University of Washington's Computer Science & Engineering Department, where she directs the Human-Centered Robotics Lab and works on programming by demonstration (PbD), an approach that lets end users teach robots new tasks by showing them rather than writing code. To understand PbD, consider collaborative robots from companies like ABB and Kuka. The units consist of articulated arms that can be programmed to help workers do a variety of things, such as pick and place objects, test devices and components, and perform simple but precise manufacturing tasks. So-called "cobots" are relatively inexpensive and operate alongside humans, and many of the use cases involve small- to mid-sized businesses.
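To make the pick-and-place idea concrete, here is a minimal simulated routine. The ArmSim class and its method names are hypothetical stand-ins invented for this sketch, not the API of any real cobot SDK from ABB or Kuka.

```python
class ArmSim:
    """Minimal simulated arm -- a hypothetical stand-in for a real
    cobot SDK, used only to show the shape of a pick-and-place task."""
    def __init__(self):
        self.position = (0, 0, 0)
        self.holding = None

    def move_to(self, xyz):
        self.position = xyz

    def close_gripper(self, item):
        self.holding = item

    def open_gripper(self):
        item, self.holding = self.holding, None
        return item

def pick_and_place(arm, item, pick_xyz, place_xyz):
    """The classic cobot routine: approach, grasp, carry, release."""
    arm.move_to(pick_xyz)
    arm.close_gripper(item)
    arm.move_to(place_xyz)
    return arm.open_gripper()

arm = ArmSim()
placed = pick_and_place(arm, "widget", (10, 0, 5), (20, 5, 5))
print(placed, arm.position)  # -> widget (20, 5, 5)
```

PbD aims to replace even this small amount of code: instead of scripting the waypoints, a worker physically guides the arm through the motion once and the robot generalizes from the demonstration.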

What would the average human do?


Last year, researchers at MIT set up a curious website called the Moral Machine, which peppered visitors with casually gruesome questions about what an autonomous vehicle should do if its brakes failed as it sped toward pedestrians in a crosswalk: whether it should mow down three joggers to spare two children, for instance, or veer into a concrete barrier to save a pedestrian who is elderly, or pregnant, or homeless, or a criminal. In each grisly permutation, the Moral Machine invited visitors to cast a vote about who the vehicle should kill. The project is a morbid riff on the "trolley problem," a thought experiment that forces participants to choose between letting a runaway train kill five people or diverting its path to kill one person who otherwise wouldn't die. But the Moral Machine gave the riddle a contemporary twist that got picked up by the New York Times, The Guardian and Scientific American and eventually collected some 18 million votes from 1.3 million would-be executioners. That unique cache of data about the ethical gut feelings of random people on the internet intrigued Ariel Procaccia, an assistant professor in the computer science department at Carnegie Mellon University, and he struck up a partnership with Iyad Rahwan, one of the MIT researchers behind the Moral Machine, as well as a team of other scientists at both institutions.

Deep learning reconstructs holograms


Deep learning, which uses multi-layered artificial neural networks for the automated analysis of data, has experienced a true renaissance over the last decade. It is one of the most exciting forms of machine learning and is behind several recent leapfrog advances in technology, including real-time speech recognition and translation as well as image and video labeling and captioning, among many others. In image analysis especially, deep learning shows significant promise for the automated search and labeling of features of interest, such as abnormal regions in a medical image. Now, UCLA researchers have demonstrated a new use for deep learning -- this time to reconstruct a hologram and form a microscopic image of an object. In a recent article published in Light: Science & Applications, a Springer Nature journal, the researchers showed that a neural network can learn to perform phase recovery and holographic image reconstruction after appropriate training.
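The phase-recovery network in the paper is far larger, but its core ingredient -- stacking simple layers with nonlinearities in between -- fits in a few lines. The tiny network below uses hand-set weights to compute |x|, something no single linear layer can do; it is an illustration of the multi-layered idea only, not the UCLA architecture.

```python
def relu(v):
    """Rectifier nonlinearity, applied elementwise."""
    return [max(0.0, x) for x in v]

def dense(x, W, b):
    """One fully connected layer: y = W·x + b."""
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def forward(x, layers):
    """Compose layers with ReLU between them -- the 'multi-layered'
    structure that gives deep networks their expressive power."""
    for i, (W, b) in enumerate(layers):
        x = dense(x, W, b)
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Hand-set weights implementing |x| = relu(x) + relu(-x):
layers = [([[1.0], [-1.0]], [0.0, 0.0]),   # split input into +x and -x
          ([[1.0, 1.0]], [0.0])]           # sum the rectified halves
print(forward([-3.0], layers))  # -> [3.0]
```

In practice the weights are not set by hand, of course: training adjusts them from examples, which is how the UCLA network learned the mapping from raw holograms to reconstructed images.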