Exploring emerging topics in artificial intelligence policy

#artificialintelligence

Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies. The virtual event, hosted by the AI Policy Forum (AIPF) -- an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing -- brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility. In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries -- most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own. Each of these developments represents a different approach to legislating AI, but what makes a good AI law?


Becoming an 'AI Powerhouse' Means Going All In

#artificialintelligence

There are plenty of organizations that are dabbling with AI, but relatively few have decided to go all in on the technology. One that is decidedly on that path is Mastercard. Employing a combination of acquisitions and internal capabilities, Mastercard has the clear objective of becoming an AI powerhouse. Just what does that term mean, and how is it being applied at the company? Some refer to the idea of aggressive, pervasive adoption of AI as being "AI first." Others use the term "AI fueled" or "all in on AI" (that's Tom's favorite, since it's the title of his forthcoming book on the subject).


Efficient Deep Learning: From Theory to Practice

#artificialintelligence

Modern machine learning often relies on deep neural networks that are prohibitively expensive in terms of their memory and computational footprint. This in turn significantly inhibits the potential range of applications where we are faced with non-negligible resource constraints, e.g., real-time data processing, embedded devices, and robotics. In this thesis, we develop theoretically grounded algorithms to reduce the size and inference cost of modern, large-scale neural networks. By taking a theoretical approach from first principles, we intend to understand and analytically describe the performance-size trade-offs of deep networks, i.e., their generalization properties. We then leverage such insights to devise practical algorithms for obtaining more efficient neural networks via pruning or compression. Beyond theoretical aspects and the inference-time efficiency of neural networks, we study how compression can yield novel insights into the design and training of neural networks. We investigate the practical aspects of the generalization properties of pruned neural networks beyond simple metrics such as test accuracy. Finally, we show how, in certain applications, pruning neural networks can improve training and hence generalization performance.
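
The thesis's own algorithms aren't spelled out in this summary, but the baseline operation it builds on -- removing low-magnitude weights and checking the resulting sparsity -- can be sketched with standard tooling. Below is a minimal sketch using PyTorch's built-in pruning utilities; the toy model and the 50 percent pruning amount are hypothetical stand-ins, not the thesis's method.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A toy network standing in for a "modern, large-scale" model.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Magnitude pruning: zero out the 50% smallest weights in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the mask into the weights

# Measure the overall sparsity achieved (biases are left untouched).
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")
```

As the abstract notes, test accuracy alone is a thin lens on such a pruned model; the thesis examines generalization properties that simple sparsity and accuracy numbers like these don't capture.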


Taking the guesswork out of dental care with artificial intelligence

#artificialintelligence

When you picture a hospital radiologist, you might think of a specialist who sits in a dark room and spends hours poring over X-rays to make diagnoses. Contrast that with your dentist, who in addition to interpreting X-rays must also perform surgery, manage staff, communicate with patients, and run their business. When dentists analyze X-rays, they do so in bright rooms and on computers that aren't specialized for radiology, often with the patient sitting right next to them. Is it any wonder, then, that dentists given the same X-ray might propose different treatments? "Dentists are doing a great job given all the things they have to deal with," says Wardah Inam SM '13, PhD '16.


Researchers release open-source photorealistic simulator for autonomous driving

#artificialintelligence

VISTA 2.0 builds on the team's previous model, VISTA, and it's fundamentally different from existing AV simulators because it's data-driven -- meaning it was built and photorealistically rendered from real-world data -- thereby enabling direct transfer to reality. While the initial iteration supported only single-car lane-following with one camera sensor, achieving high-fidelity data-driven simulation required rethinking the foundations of how different sensors and behavioral interactions can be synthesized. Enter VISTA 2.0: a data-driven system that can simulate complex sensor types and massively interactive scenarios and intersections at scale. With much less data than previous models, the team was able to train autonomous vehicles that could be substantially more robust than those trained on large amounts of real-world data. "This is a massive jump in capabilities of data-driven simulation for autonomous vehicles, as well as the increase of scale and ability to handle greater driving complexity," says Alexander Amini, CSAIL PhD student and co-lead author on two new papers, together with fellow PhD student Tsun-Hsuan Wang.


MIT engineers build LEGO-like AI chip

#artificialintelligence

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don't have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device's internal chip -- like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. Now engineers at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip. The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip's layers to communicate optically.


Seeing the whole from some of the parts

#artificialintelligence

Upon looking at photographs and drawing on their past experiences, humans can often perceive depth in pictures that are, themselves, perfectly flat. However, getting computers to do the same thing has proved quite challenging. The problem is difficult for several reasons, one being that information is inevitably lost when a scene that takes place in three dimensions is reduced to a two-dimensional (2D) representation. There are some well-established strategies for recovering 3D information from multiple 2D images, but they each have some limitations. A new approach called "virtual correspondence," which was developed by researchers at MIT and other institutions, can get around some of these shortcomings and succeed in cases where conventional methodology falters.
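
For context, the classical strategy the article contrasts against can be shown concretely: when the same physical point is matched across two calibrated views, its 3D position is recoverable by triangulation. Here is a minimal sketch using OpenCV; the projection matrices and the point matches are hypothetical placeholders, and the virtual-correspondence method itself is not reproduced here.

```python
import numpy as np
import cv2

# 3x4 projection matrices for two calibrated cameras (assumed known).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # reference camera
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted camera

# Matched image points in each view, as 2xN arrays (hypothetical matches).
pts1 = np.array([[320.0, 410.0], [240.0, 260.0]])
pts2 = np.array([[300.0, 390.0], [240.0, 260.0]])

# Triangulate to homogeneous 3D points (4xN), then dehomogenize to Nx3.
X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
X = (X_h[:3] / X_h[3]).T
print(X)
```

The catch, and the opening for virtual correspondence, is that this pipeline needs the same point to be visible and matchable in both images; the new approach is aimed at cases where conventional matching like this falters.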


Artificial neural networks model face processing in autism

#artificialintelligence

Many of us easily recognize emotions expressed in others' faces. A smile may mean happiness, while a frown may indicate anger. Autistic people often have a more difficult time with this task. But new research, published June 15 in The Journal of Neuroscience, sheds light on the inner workings of the brain to suggest an answer. And it does so using a tool that opens new pathways to modeling the computation in our heads: artificial intelligence.


Fast-Track Data Monetization With Strategic Data Assets

#artificialintelligence

For years, using more data to make better decisions has been the holy grail for global companies, and most of them aim to treat data as a strategic asset. But new research from the MIT Center for Information Systems Research (CISR) has found that future-ready companies have greater ambition regarding their data. These organizations strive to maximize their data monetization outcomes by pervasively improving processes to do things better, cheaper, and faster; wrapping products with analytics features and experiences; and selling new, innovative information solutions. To monetize data, companies must first transform it so that it can be reused and recombined to enable new value creation. The easier the reuse and recombination, the higher the data's liquidity, which we define as "the ease of data asset reuse and recombination."


On the road to cleaner, greener, and faster driving

#artificialintelligence

No one likes sitting at a red light. But signalized intersections aren't just a minor nuisance for drivers; vehicles consume fuel and emit greenhouse gases while waiting for the light to change. What if motorists could time their trips so they arrive at the intersection when the light is green? While that might be just a lucky break for a human driver, it could be achieved more consistently by an autonomous vehicle that uses artificial intelligence to control its speed. In a new study, MIT researchers demonstrate a machine-learning approach that can learn to control a fleet of autonomous vehicles as they approach and travel through a signalized intersection in a way that keeps traffic flowing smoothly.
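
The study's controller is learned, not hand-coded, and isn't reproduced here; but the underlying intuition of "arrive while the light is green" reduces to a kinematic feasibility check that is easy to sketch. In the minimal sketch below, the signal timings, speed bounds, and the choice to glide at the slowest feasible speed are all hypothetical assumptions, not the MIT approach.

```python
def green_window_speed(distance_m: float, next_green_s: float,
                       green_ends_s: float, v_min: float = 5.0,
                       v_max: float = 15.0) -> float | None:
    """Return a constant speed (m/s) that reaches the stop line while the
    light is green, or None if no such speed exists."""
    # Slowest speed that still arrives before the green phase ends.
    v_late = distance_m / green_ends_s
    # Fastest speed that doesn't arrive before the green phase begins.
    v_early = distance_m / max(next_green_s, 1e-6)
    lo, hi = max(v_late, v_min), min(v_early, v_max)
    # Glide at the slowest feasible speed, avoiding a stop at the light.
    return lo if lo <= hi else None

# Example: 200 m from a light that turns green in 10 s and red again in 25 s.
print(green_window_speed(200.0, next_green_s=10.0, green_ends_s=25.0))  # 8.0
```

A fleet-level learned policy must also account for surrounding vehicles and keep overall traffic flowing smoothly, which is exactly what this single-vehicle arithmetic leaves out.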