A team of researchers hailing from Harvard and Université de Montréal today launched Epitopes.world. It's built atop an algorithm -- CAMAP -- that generates predictions for potential vaccine targets, enabling researchers to identify which parts of the virus are more likely to be exposed at the surface of infected cells (epitopes). Project lead Dr. Tariq Daouda built Epitopes.world alongside machine learning doctorates, immunobiologists, and bioinformaticians. The stakes are high: fewer than 12% of all drugs entering clinical trials end up in pharmacies, and it takes at least 10 years for medicines to complete the journey from discovery to the marketplace. Clinical trials alone take six to seven years on average, putting the cost of R&D at roughly $2.6 billion, according to the Pharmaceutical Research and Manufacturers of America.
Open Philanthropy recommended a total of approximately $2,300,000 over five years in PhD fellowship support to 10 promising machine learning researchers who together represent the 2020 class of the Open Phil AI Fellowship. These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence. We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it's most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI. The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence.
Jean-François Gagné is the CEO of Element AI, a Montreal-based startup which develops artificial intelligence solutions for all kinds of businesses. Element AI operates in a tough market with serious competition from tech giants such as Amazon, Google, and Microsoft. Yet thanks to its innovative training technique, which harnesses simulated data, it has created a unique proposition that has attracted an impressive roster of blue chip customers. Here Jean-François calls for a rethink of the assumptions built into the economic equation of international trade, as well as explaining why he thinks people will collaborate with machines to create new value. Element AI is a partner of the Global AI Summit, an event hosted by Tortoise on May 15th 2020 that will examine the future of the world and look in depth at the role technology, and AI in particular, will play in shaping it.
Alfredo joined Element AI as a Research Engineer in the AI for Good lab in London, working on applications that enable NGOs and non-profits. He is one of the primary co-authors of the first technical report made in partnership with Amnesty International, a large-scale study of online abuse against women on Twitter based on crowd-sourced data. He has been a machine learning mentor at NASA's Frontier Development Lab, helping teams apply AI to scientific space problems. More recently, he led joint research with Mila in Montreal on multi-frame super-resolution, which the European Space Agency recognized for its top performance in the PROBA-V Super-Resolution challenge. His research interests lie in computer vision for satellite imagery, probabilistic modeling, and AI for social good.
Self-supervised learning could lead to the creation of AI that's more human-like in its reasoning, according to Turing Award winners Yoshua Bengio and Yann LeCun. Bengio, director at the Montreal Institute for Learning Algorithms, and LeCun, Facebook VP and chief AI scientist, spoke candidly about this and other research trends during a session at the International Conference on Learning Representations (ICLR) 2020, which took place online. Supervised learning entails training an AI model on a labeled data set, and LeCun thinks it'll play a diminishing role as self-supervised learning comes into wider use. Instead of relying on annotations, self-supervised learning algorithms generate labels from the data itself by exploiting relationships among the data's parts, a step believed to be critical to achieving human-level intelligence. "Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It's basically observing the world and interacting with it a little bit, mostly by observation in a task-independent way," said LeCun.
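To make the idea concrete, here is a minimal toy sketch (not from the talk, and using an illustrative helper name `make_masked_pairs`) of how a self-supervised objective derives training labels from raw data alone: random positions in the input are hidden, and the hidden values themselves become the prediction targets, with no human annotation involved.

```python
import numpy as np

def make_masked_pairs(sequence, mask_value=0.0, n_masks=1, seed=0):
    """Build a (masked input, targets) training pair from raw data alone.

    Random positions of the sequence are replaced with a mask value;
    the original values at those positions become prediction targets.
    """
    rng = np.random.default_rng(seed)
    seq = np.asarray(sequence, dtype=float)
    idx = rng.choice(len(seq), size=n_masks, replace=False)
    targets = seq[idx].copy()      # "labels" come from the data itself
    inputs = seq.copy()
    inputs[idx] = mask_value       # hide the values the model must predict
    return inputs, idx, targets

# Example: mask two values of a raw, unlabeled sequence.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
inputs, positions, targets = make_masked_pairs(x, n_masks=2, seed=42)
```

A model trained to fill in `targets` at `positions` from `inputs` learns the structure of the data without any external annotation; masked-prediction objectives of this general shape underlie systems like BERT.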
The Intelligent GeoSolutions (IGS) team at the University of Maine's Center for Research on Sustainable Forests (CRSF) has released a free interactive mapping tool, the Forest Ecosystem Status and Trends (ForEST) app, to provide online decision support to private and public forest managers, natural resource agencies, conservation organizations and other stakeholders. With the current outbreak of eastern spruce budworm expanding south from Quebec, up-to-date information about resource conditions and near-term risk is needed to coordinate mitigation actions in response to the outbreak and related market conditions. The ForEST app is the culmination of three years of research and software development by the IGS team in partnership with UMaine's Advanced Computing Group. The interdisciplinary project supported two graduate students in the School of Computing and Information Science, each of whom served as lead developer, as well as undergraduate computer science students who worked as team programmers. The interactive web interface is designed to provide near real-time information about changing forest landscape conditions resulting from the spruce budworm outbreak and ongoing management.
In the development of artificial intelligence applications, the holy grail is the creation of an artificial neural network that functions like the human brain. This is an elusive goal, because the human brain is an extremely complex organ that functions in flexible and fluid ways that can be difficult to replicate in the world of AI. Today, a team of researchers from McGill University and the University of Montreal is making breakthroughs with functional magnetic resonance imaging (fMRI) of people's brains as they carry out various cognitive tasks. The goal is to develop a better understanding of how the brain works, create computational models of it, and then use those models to train artificial neural networks to map the images to actions quickly and accurately. This would be a big leap forward for the AI world, according to one of the lead researchers on the project, Dr. Pierre Bellec, an associate professor at the University of Montreal.
The AI engine supports a self-operating building that requires no human intervention. What inspired you to launch BrainBox AI? My journey into HVAC technology began while working on energy efficiency projects throughout North America and Europe. During this stage of my life, I dealt with the technology in a plethora of buildings. These were buildings of different sizes and purposes, anything from hotels all the way to data centers. It quickly became apparent to me that continuous commissioning approaches would generate consistent energy savings, but would require extensive amounts of both financial and human capital.
IMAGE: Example of segmentation produced by the tool, which separates the structures into cerebrospinal fluid (red), grey matter (blue) and white matter (yellow) from T2 (middle column) and T1 MRI images.

Canadian scientists have developed an innovative new technique that uses artificial intelligence to better define the different sections of the brain in newborns during a magnetic resonance imaging (MRI) exam. The results of this study -- a collaboration between researchers at Montreal's CHU Sainte-Justine children's hospital and the ÉTS engineering school -- are published today in Frontiers in Neuroscience. "This is one of the first times that artificial intelligence has been used to better define the different parts of a newborn's brain on an MRI: namely the grey matter, white matter and cerebrospinal fluid," said Dr. Gregory A. Lodygensky, a neonatologist at CHU Sainte-Justine and professor at Université de Montréal. "Until today, the tools available were complex, often intermingled and difficult to access," he added. In collaboration with Professor Jose Dolz, an expert in medical image analysis and machine learning at ÉTS, the researchers were able to adapt the tools to the specificities of the neonatal setting and then validate them.
McGill University researchers say they've developed a technique to train a remote-controlled, off-road car to drive on terrain using aerial and first-person imagery. Their hybrid approach accounts for terrain roughness and obstacles using on-board sensors, enabling it to generalize to environments with vegetation, rocks, and sandy trails. The work is preliminary, but it might hold promise for autonomous vehicle companies that rely chiefly on camera footage to train their navigational AI. U.K.-based Wayve is in that camp, as are Tesla, Mobileye, and Comma.ai. The researchers' work combines elements of model-free and model-based AI training methods into a single graph to leverage the strengths of both while offsetting their weaknesses.