At the Re-Work Deep Learning Summit in Boston today, a panel of ethicists and engineers discussed some of the biggest challenges facing artificial intelligence: algorithmic biases, ethics in AI, and whether the tools to create AI should be made widely available. The panel included Simon Mueller, cofounder and vice president of think tank The Future Society; Cansu Canca, founder and director of the AI Ethics Lab; Gabriele Fariello, a Harvard instructor in machine learning, researcher in neuroinformatics, and chief information officer at the University of Rhode Island; and Kathy Pham, a Google, IBM, and United States Digital Service alum who's currently researching ethics in artificial intelligence and software engineering at the Harvard Berkman Klein Center and the MIT Media Lab. Mueller kicked off the discussion with a thorny question: Is ethics the most pressing problem for the progress of AI? "It's always an 'engineering first and solve the tech problem first' attitude [when it comes to AI]," Pham said. "There are a lot of experts out there who have been thinking about this, [but] those voices need to be recognized as just as valuable as the engineers in the room." Canca agreed that ethics aren't discussed among product leads and designers as often as they should be.
The Explainable Machine Learning Challenge is a collaboration between Google, FICO, and academics at Berkeley, Oxford, Imperial, UC Irvine, and MIT to generate new research in the area of algorithmic explainability. Teams will be challenged to create machine learning models with both high accuracy and explainability; they will use a real-world financial dataset provided by FICO. Both designers and end users of machine learning algorithms will benefit from more interpretable and explainable algorithms. Machine learning model designers will benefit from model explanations: written explanations describing the functioning of a trained model. These might include information about which variables or examples are particularly important; they might explain the logic used by an algorithm and/or characterize input/output relationships between variables and predictions.
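One common form of the variable-importance explanation described above is permutation importance: score each input feature by how much shuffling it degrades the model's accuracy. The sketch below illustrates the idea with a synthetic dataset and a stand-in "trained model" (both are assumptions for illustration, not the FICO challenge data or any competitor's model).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 actually drives the label

# Stand-in "trained model": predicts the label from feature 0's sign.
def predict(X):
    return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

baseline = accuracy(y, predict(X))

def permutation_importance(col):
    """Accuracy drop when one column is randomly shuffled."""
    Xp = X.copy()
    Xp[:, col] = rng.permutation(Xp[:, col])
    return baseline - accuracy(y, predict(Xp))

scores = [permutation_importance(c) for c in range(X.shape[1])]
print(scores)  # feature 0 should dominate; features 1 and 2 are noise
```

Because the score is computed purely from inputs and outputs, this kind of explanation is model-agnostic: it characterizes input/output relationships without inspecting the model's internals.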
A data-harvesting competition that offered football fans the chance to win £50m is at the centre of new questions about pro-Brexit campaigning before the 2016 EU referendum. Last week the select committee for digital, culture, media and sport released a letter Facebook sent to the Electoral Commission in which it said that two campaigns, Vote Leave and BeLeave, used three sets of data to target audiences, noting that they covered "the exact same audiences". The two campaign groups are under investigation over whether there was collusion and coordination during the referendum campaign, circumventing spending limits. Under British electoral law, it is illegal for campaigns to work together in any way unless they declare their spending jointly, which Vote Leave and BeLeave did not. Both organisations deny any collusion.
This work aims at corroborating the importance and efficacy of mutual learning in motor imagery (MI) brain–computer interface (BCI) by leveraging the insights obtained through our participation in the BCI race of the Cybathlon event. We hypothesized that, contrary to the popular trend of focusing mostly on the machine learning aspects of MI BCI training, a comprehensive mutual learning methodology that reinstates the three learning pillars (at the machine, subject, and application level) as equally significant could lead to a BCI–user symbiotic system able to succeed in real-world scenarios such as the Cybathlon event. Two severely impaired participants with chronic spinal cord injury (SCI) were trained following our mutual learning approach to control their avatar in a virtual BCI race game. The competition outcomes substantiate the effectiveness of this type of training. Most importantly, the present study is one among very few to provide multifaceted evidence on the efficacy of subject learning during BCI training.
Institute Professor Ann Graybiel, a professor in the Department of Brain and Cognitive Sciences and member of MIT's McGovern Institute for Brain Research, is being recognized by the Gruber Foundation for her work on the structure, organization, and function of the once-mysterious basal ganglia. She was awarded the prize alongside Okihide Hikosaka of the National Institutes of Health's National Eye Institute and Wolfram Schultz of the University of Cambridge in the U.K. The basal ganglia have long been known to play a role in movement, and the work of Graybiel and others helped to extend their roles to cognition and emotion. Dysfunction in the basal ganglia has been linked to a host of disorders including Parkinson's disease, Huntington's disease, obsessive-compulsive disorder, and attention-deficit hyperactivity disorder, as well as to depression and anxiety disorders. Graybiel's research focuses on the circuits thought to underlie these disorders, and on how these circuits act to help us form habits in everyday life. "We are delighted that Ann has been honored with the Gruber Neuroscience Prize," says Robert Desimone, director of the McGovern Institute.
Imagine unleashing the power of artificial intelligence to automate a critical component of biomedical research, expediting life-saving research in the treatment of almost every disease from rare disorders to the common cold. This could soon be a reality, thanks to the fourth Data Science Bowl, a 90-day competition in which, for the very first time, participants trained deep learning models to examine images of cells and identify nuclei, regardless of the experimental setup and without human intervention. Algorithms developed in this competition could save researchers hundreds of thousands of hours of effort per year. This year, the competition brought together nearly 18,000 global participants, the most ever for the Data Science Bowl. Collectively, they submitted more than 68,000 algorithms and worked an estimated 288,000 hours to automate the vital, but time-consuming, process of nuclei detection.
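To make the task concrete: nuclei detection means locating the individual stained nuclei in a microscopy image so they can be counted and measured. The sketch below is a naive classical baseline (global threshold plus connected-component labeling), not the deep learning approach competitors used, and the tiny synthetic "image" is an assumption standing in for real microscopy data.

```python
import numpy as np
from scipy import ndimage

# Synthetic 64x64 grayscale image with three bright circular blobs
# standing in for stained nuclei on a dark background.
img = np.zeros((64, 64))
yy, xx = np.ogrid[:64, :64]
for cy, cx in [(10, 10), (30, 40), (50, 20)]:
    img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 16] = 1.0

mask = img > 0.5                        # global intensity threshold
labels, n_nuclei = ndimage.label(mask)  # connected components = nuclei
print(n_nuclei)  # 3
```

Real microscopy images break this baseline quickly (uneven illumination, touching nuclei, varied stains), which is exactly why the competition sought learned models that generalize across experimental setups.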
On Thursday evening, the Johnson Ice Rink was transformed into a world of pure imagination. This year's Willy Wonka-themed competition featured a colorful game board based on the famous Chocolate Room and two equally colorful emcees: course instructors Sangbae Kim, donning a purple velvet coat as Willy Wonka, and Amos Winter, dressed as Charlie Bucket (complete with golden ticket). "We're all engineers; we can do better than pure imagination," said Winter as he kicked off the event. "We can do calculated imagination!" Thirty-two student finalists showed just how calculated their imagination could be in five sudden-death rounds throughout the event.
After 90 days and 288,000 working hours, the much-discussed fourth annual Data Science Bowl has ended. Run by Booz Allen Hamilton and Kaggle, the contest resulted in 68,000 algorithms, 3 winners, and one tantalizing opportunity for biomedical research. The goal of this year's Data Science Bowl was to build artificial intelligence (AI) systems that could automate what organizers called a "critical component of biomedical research." As such, 18,000 competitors spent months honing deep-learning models to scrutinize images of cells in search of nuclei, all without aid from humans. The resulting algorithms are expected to save hundreds of thousands of hours each year, time previously spent by researchers who had to perform the task by hand, according to the organizers.
The 3rd annual RobotArt competition is currently underway. Dozens of physical paintings, created by machines, will be judged by professional art critics and the public at large to determine which team of developers will walk away with the top prize. What it is: RobotArt is the passion project of founder Andrew Conru. The competition runs each year and solicits roboticists and machine learning developers to create physical robot systems capable of painting with brushes and ink. Over the years, developers have used a variety of systems, ranging from neural networks that operate robot arms to software that translates human brush strokes in real time to a robot that then attempts to imitate them.
The McGovern Institute for Brain Research at MIT announced today that David J. Anderson of Caltech is the winner of the 2018 Edward M. Scolnick Prize in Neuroscience. He was awarded the prize for his contributions to the isolation and characterization of neural stem cells and for his research on neural circuits that control emotional behaviors in animal models. The Scolnick Prize is awarded annually by the McGovern Institute to recognize outstanding advances in any field of neuroscience. "We congratulate David Anderson on being selected for this award," says Robert Desimone, director of the McGovern Institute and chair of the selection committee. "His work has provided fundamental insights into neural development and the structure and function of neural circuits."