Folks from the Massachusetts Institute of Technology (MIT) have developed a new machine learning-based tool that predicts how fast code will run on various chips. This will help developers tune their applications for specific processor architectures. Traditionally, developers used compilers' performance models to simulate the execution of basic blocks -- fundamental sequences of machine-level instructions -- in order to gauge a chip's performance. However, these performance models are rarely validated against real-life processor performance. The MIT researchers instead trained an AI model, called Ithemal, to predict how fast a chip can run previously unseen basic blocks.
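The idea of replacing a hand-written performance model with a learned one can be illustrated with a minimal sketch. This is not MIT's actual tool; the toy data, opcode set, and cycle counts below are all invented for illustration. The point is only the workflow: measure real blocks, featurize them, and fit a model instead of hand-tuning per-instruction costs.

```python
# Hypothetical sketch (not the MIT researchers' code): learn to predict
# a basic block's cycle count from simple instruction features, the way
# a data-driven model replaces a hand-written analytical model.
import numpy as np

# Assumed toy data: each basic block is a list of opcodes paired with a
# cycle count "measured" on real hardware (values invented here).
blocks = [
    (["mov", "add", "mov"], 3.0),
    (["mul", "add"], 4.0),
    (["mov", "mul", "mul"], 7.0),
    (["add", "add", "add", "add"], 4.0),
]
opcodes = ["mov", "add", "mul"]

def featurize(block):
    """Count how often each opcode appears in the block."""
    return [block.count(op) for op in opcodes]

X = np.array([featurize(b) for b, _ in blocks], dtype=float)
y = np.array([cycles for _, cycles in blocks])

# Least-squares fit: per-opcode cost weights are learned from the
# measurements rather than assumed by a compiler's performance model.
weights, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict_cycles(block):
    """Predict the cycle count of an unseen basic block."""
    return float(np.array(featurize(block)) @ weights)
```

A real system would of course use far richer features (operand dependencies, instruction order) and a more expressive model, but the train-on-measurements loop is the same.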
Industrial robots and warehouse automation are lucrative, intermingling markets, and one needn't look further for evidence than Lexington, Massachusetts-based Berkshire Grey. The company, which combines AI and robotics to automate omnichannel fulfillment for retailers, e-commerce, and logistics enterprises, today announced that it secured a mammoth $263 million in series B financing from SoftBank with participation from Khosla Ventures, New Enterprise Associates, and Canaan. CEO Tom Wagner says the fresh capital will fuel the startup's global expansion, acquisitions, and team growth. "With our intelligent robotic automation, our clients see faster and more efficient supply chain operations that enable them to address the wants of today's savvy consumer," said Wagner. Berkshire Grey develops AI-imbued, cloud-hosted software that leverages a custom framework to achieve continuous improvement.
When science and technology meet social and economic systems, you tend to see something akin to what the late Stephen Jay Gould called "punctuated equilibrium" in his description of evolutionary biology. Something that has been stable for a long period is suddenly disrupted radically -- and then settles into a new equilibrium (see Stephen Jay Gould, Punctuated Equilibrium, Cambridge, MA: Harvard University Press, 2007). Gould pointed out that fossil records show that species change often proceeds not gradually but massively and disruptively. After the mass extinctions that have occurred several times across evolutionary eras, a minority of species survived, and the voids in the ecosystem rapidly filled through massive speciation. Gould's theory addresses the discontinuity in fossil records that puzzled Charles Darwin.
General Assembly is two blocks from the MBTA Red Line's South Station, at 125 Summer St, Boston, MA (at the intersection with High St). Show ID at the security desk and come to the 13th floor. There are parking meters along Atlantic Ave and other area streets, but we recommend parking on-site at the "125 Summer St Garage" at 28 Lincoln St, for $12 after 5 pm.
Artificial intelligence (AI) represents a powerful but double-edged sword as nations confront global warming, poverty and issues of peace and justice. An international team of scientists this week released a first-ever study of how AI can help--as well as hinder--sustainable development worldwide. Published today in Nature Communications, the analysis focuses on how AI impacts the 17 goals for sustainable development adopted by the United Nations in 2015. The study was co-authored by a diverse group of researchers led by Ricardo Vinuesa and Francesco Fuso Nerini, assistant professors at KTH Royal Institute of Technology. They were joined by Max Tegmark, professor at Massachusetts Institute of Technology (MIT) and author of the bestselling book Life 3.0, as well as Virginia Dignum, professor of AI Ethics at Umeå University, among other authors.
A recent report from the MIT Work of the Future Task Force finds that companies are still in the "early stages of adoption" when it comes to incorporating new technology into their workflows, while a 2018 Pew Research Center study showed that 65-90% of surveyed people think human-held jobs will be replaced by robots and computers. When and how future workplaces will ultimately change remain unanswered questions, but Daniel Huttenlocher, inaugural dean of the MIT Stephen A. Schwarzman College of Computing, has some ideas. He spoke Dec. 2 at the MIT Technology Review Future Compute event in Cambridge, Massachusetts, and discussed the future of machines and the digital workforce. "I think it's very hard to predict the future and particularly hard to predict the positive outcomes of the future," said Huttenlocher, PhD '88. "It's a lot easier to see a technology and say, 'Gee, that looks like it's going to pose a risk for a particular form of employment' … than to envision some whole new type of work that is very hard to see because of the way that the technology is going to change."
Ensemble learning is a standard approach to building machine learning systems that capture complex phenomena in real-world data. An important aspect of these systems is the complete and valid quantification of model uncertainty. We introduce a Bayesian nonparametric ensemble (BNE) approach that augments an existing ensemble model to account for different sources of model uncertainty. It comes with a theoretical guarantee: it robustly estimates the uncertainty patterns in the data distribution, and it can decompose its overall predictive uncertainty into distinct components attributable to different sources of noise and error. We show that our method achieves accurate uncertainty estimates under complex observational noise, and we illustrate its real-world utility in terms of uncertainty decomposition and model bias detection for an ensemble predicting air pollution exposures in Eastern Massachusetts, USA.
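The notion of decomposing an ensemble's predictive uncertainty can be sketched with a generic example. This is not the authors' BNE method; it is the standard variance decomposition for an ensemble whose members each report a mean and a noise variance, with all member functions and numbers below invented for illustration.

```python
# Illustrative sketch only -- not the BNE implementation from the paper.
# Shows the general idea: total predictive uncertainty splits into a
# model-disagreement (epistemic) term and an observational-noise
# (aleatoric) term.
import numpy as np

# Assumed toy ensemble: each member maps an input x to a predicted
# mean and a noise variance (all values hypothetical).
members = [
    lambda x: (2.0 * x, 0.10),
    lambda x: (1.8 * x, 0.20),
    lambda x: (2.2 * x, 0.15),
]

def decompose_uncertainty(x):
    """Return (ensemble mean, epistemic variance, aleatoric variance)."""
    means = np.array([m(x)[0] for m in members])
    noise_vars = np.array([m(x)[1] for m in members])
    epistemic = means.var()        # disagreement between members
    aleatoric = noise_vars.mean()  # average observational noise
    return means.mean(), epistemic, aleatoric

mean, epistemic, aleatoric = decompose_uncertainty(1.0)
# Total predictive variance is the sum: epistemic + aleatoric.
```

Separating the two terms is what enables the kind of diagnosis the abstract describes: high epistemic variance flags regions where the models disagree (possible bias), while high aleatoric variance flags noisy observations.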
In the age of Alexa, YouTube recommendations, and Spotify playlists, artificial intelligence has become a way of life, improving marketing and advertising, e-commerce, and more. But what are the ethical implications of technology that collects and learns personal information? How should society navigate these issues and shape the future? A new curriculum designed for middle school students aims to help them understand just that at an early age, as they grow up surrounded by the technology. The open-source educational material, designed by an MIT team and piloted at this year's Massachusetts STEM Week this past fall, teaches students how AI systems are designed, how they can be used to influence the public -- and also how to use them to be successful in jobs of the future.
The 2019 NFL season quickly evolved into the Lamar Jackson show, every week delivering a different story, usually involving a highlight touchdown, a gaudy stat line, or a charming news conference. One story, however, was different: following a San Francisco 49ers loss at the hands of Jackson's Baltimore Ravens on Dec. 1, Tim Ryan, the radio color analyst for the 49ers, suggested that Jackson was successful in part because his dark skin helped him disguise the dark-colored football. The public backlash was swift and loud, even if the fallout was mild (Ryan was suspended for one game). Instead of an honest conversation about why we talk about certain athletes using racialized language, the sports world settled for an apology and the next news story in the cycle. It is society's inability to adequately address issues of race and bias that motivated Mohit Iyyer, an assistant professor of computer science at the University of Massachusetts Amherst, to apply artificial intelligence and "big data" analytics toward answering a central question: Do sports commentators demonstrate bias in how they discuss athletes from different racial backgrounds?