The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity's place in an algorithm-driven world, today published the inaugural edition of its State of AI Ethics report. The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter. The State of AI Ethics first addresses the problem of bias in ranking and recommendation algorithms, like those used by Amazon to match customers with products they're likely to purchase. The authors note that while there are efforts to apply the notion of diversity to these systems, they usually consider the problem from an algorithmic perspective and strip it of cultural and contextual social meanings. "Demographic parity and equalized odds are some examples of this approach that apply the notion of social choice to score the diversity of data," the report reads.
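To make the fairness notions the report names concrete, here is a toy sketch (not from the report, with made-up data) of how demographic parity and equalized odds gaps can be computed for a binary classifier across two demographic groups:

```python
# Illustrative only: measuring demographic parity and equalized odds
# gaps for a binary classifier on a tiny hand-made dataset.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap across groups in true-positive and false-positive rates."""
    gaps = []
    for label in (0, 1):  # label 0 gives FPR, label 1 gives TPR
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(abs(rates[0] - rates[1]))
    return max(gaps)

# Toy example in which predictions favour group 1.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # → 0.5
```

As the report suggests, such scores reduce diversity to an algorithmic quantity; they say nothing about the cultural or contextual meaning of the group labels themselves.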
Yoshua Bengio, founder of Mila and computer science professor at the Université de Montréal, will support the ongoing research of Perceiv AI in precision medicine to improve and optimize drug development clinical trials. Founded by graduate students from the Université de Montréal and Mila, Perceiv AI aims to improve treatment efficacy through refined patient selection. Using advanced machine learning algorithms, Perceiv AI helps pharmaceutical companies achieve more efficient and accurate subject stratification for their clinical trials. Heterogeneity in patient populations creates challenges in enrolment for clinical trials, which can result in increased trial costs and failures, delaying the commercialization of much-needed treatments. "For having seen the ravages of diseases like Alzheimer's from up close, I am very motivated to see more development of AI techniques, such as done at Perceiv AI, to provide better targeted treatments, and I am delighted to see the next generation of AI researchers embarking on such projects of important value for society while contributing to grow the startup ecosystem in Montreal," said Yoshua Bengio, Ph.D. "We are thrilled to reinforce our relationship with Mila and to welcome Yoshua as an advisor!" said Christian Dansereau, Ph.D., CEO and co-founder of Perceiv AI. "With their help, we will be able to leverage the most recent advances in Representation Learning to further refine our prognostic biomarkers, not only for Alzheimer's but also for new therapeutic areas."
Headquartered in Quebec, Canada, Sinopé Technologies might be less familiar than some other smart-home brands, but we've reviewed several of the company's products and found a lot to like. Today, we'll take a look at its new Zigbee smart dimmer switch (model DM2500ZB), and its new smart home hub, the GT130 gateway. These two products are part of a new family of Zigbee 3.0-based products that Sinopé launched this spring, a collection that also includes a smart in-wall outlet, a smart switch, and updated smart thermostats. With its embrace of the Zigbee smart home protocol, Sinopé is relegating its proprietary smart home family--the Mi-Wi series, which included the GT125 gateway--to "legacy" status. Now when you buy a DIY Sinopé smart home product, you can control it with the company's own GT130 gateway or with any other Zigbee-based smart home hub, including Samsung SmartThings or any of the Amazon Echo smart speakers and displays that are equipped with Zigbee radios.
In computer vision, one key property we expect of an intelligent artificial model, agent, or algorithm is that it should be able to correctly recognize the type, or class, of objects it encounters. This is critical in numerous important real-world scenarios--from biomedicine, where an intelligent system might be tasked with distinguishing between cancerous cells and healthy ones, to self-driving cars, where being able to discriminate between pedestrians, other vehicles, and road signs is crucial to successfully and safely navigating roads. Deep learning is one of the most significant tools for state-of-the-art systems in computer vision, and its use has resulted in models that have reached or can even exceed human-level performance in important and challenging real-world image classification tasks. Despite their successes, these models still have difficulty generalizing, or adapting to tasks in testing or deployment scenarios that don't closely resemble the tasks they were trained on. For example, a visual system trained under typical weather conditions in Northern California may fail to properly recognize pedestrians in Quebec because of differences in weather, clothes, demographics, and other features.
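The generalization failure described above can be illustrated with a deliberately simple toy experiment (not the vision systems discussed, and assuming scikit-learn): a classifier trained on one data distribution keeps its accuracy on data drawn from the same conditions but degrades sharply when the test distribution shifts.

```python
# Toy illustration of distribution shift: a linear classifier trained on
# one set of "conditions" degrades when those conditions change.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two Gaussian classes in 2-D; `shift` moves both clusters,
    mimicking a change in conditions between training and deployment."""
    x0 = rng.normal(loc=0.0 + shift, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=2.0 + shift, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.array([0] * n + [1] * n)
    return X, y

X_train, y_train = make_data(500)
clf = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = make_data(500)            # same conditions as training
X_shift, y_shift = make_data(500, 3.0)   # shifted conditions

print(clf.score(X_iid, y_iid), clf.score(X_shift, y_shift))
```

In-distribution accuracy stays high while accuracy on the shifted data collapses toward chance, the same qualitative failure mode as a pedestrian detector trained in California and deployed in Quebec.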
India on Monday became a founding member of an Artificial Intelligence (AI)-driven global body called the "Global Partnership on Artificial Intelligence (GPAI)," which aims to promote responsible and human-centric development of AI. Other countries involved include the US, UK, Australia, Canada, France, Germany, Italy, Japan, Mexico, New Zealand, South Korea and Singapore. GPAI will bring together leading experts from industry, civil society, governments and academia to collaborate on ways to show how AI can be leveraged to better respond to the present global crisis around COVID-19. The body will be supported by a Secretariat, to be hosted by the Organization for Economic Cooperation and Development (OECD) in Paris, as well as by two Centers of Expertise in Montreal and Paris. The news comes after India recently launched its National AI Strategy and National AI Portal, which revolve around leveraging AI across education, agriculture, healthcare, e-commerce, finance, telecommunications and other such sectors.
A team of researchers hailing from Harvard and Université de Montréal today launched Epitopes.world. It's built atop an algorithm, CAMAP, that generates predictions for potential vaccine targets, enabling researchers to identify which parts of the virus are more likely to be exposed (as epitopes) at the surface of infected cells. Project lead Dr. Tariq Daouda worked alongside doctoral researchers in machine learning, immunobiologists, and bioinformaticians to build Epitopes.world. Fewer than 12% of all drugs entering clinical trials end up in pharmacies, and it takes at least 10 years for medicines to complete the journey from discovery to the marketplace. Clinical trials alone take six to seven years, on average, putting the cost of R&D at roughly $2.6 billion, according to the Pharmaceutical Research and Manufacturers of America.
Open Philanthropy recommended a total of approximately $2,300,000 over five years in PhD fellowship support to 10 promising machine learning researchers who together represent the 2020 class of the Open Phil AI Fellowship. These fellows were selected from more than 380 applicants for their academic excellence, technical knowledge, careful reasoning, and interest in making the long-term, large-scale impacts of AI a central focus of their research. This falls within our focus area of potential risks from advanced artificial intelligence. We believe that progress in artificial intelligence may eventually lead to changes in human civilization that are as large as the agricultural or industrial revolutions; while we think it's most likely that this would lead to significant improvements in human well-being, we also see significant risks. Open Phil AI Fellows have a broad mandate to think through which kinds of research are likely to be most valuable, to share ideas and form a community with like-minded students and professors, and ultimately to act in the way that they think is most likely to improve outcomes from progress in AI. The intent of the Open Phil AI Fellowship is both to support a small group of promising researchers and to foster a community with a culture of trust, debate, excitement, and intellectual excellence.
Jean-Francois Gagne is the CEO of Element AI, a Montreal-based startup which develops artificial intelligence solutions for all kinds of businesses. Element AI operates in a tough market with serious competition from tech giants such as Amazon, Google, and Microsoft. Yet thanks to its innovative training technique, which harnesses simulated data, it has created a unique proposition that has attracted an impressive roster of blue chip customers. Here Jean-Francois calls for a re-think of the assumptions built into the economic equation of international trade, as well as explaining why he thinks people will collaborate with machines to create new value. Element AI is a partner of the Global AI Summit, an event hosted by Tortoise on May 15th 2020, that will examine the future of the world and look in depth at the role technology, and AI in particular, will play in shaping it.
Alfredo joined Element AI as a Research Engineer in the AI for Good lab in London, working on applications that enable NGOs and non-profits. He is one of the primary co-authors of the first technical report produced in partnership with Amnesty International, a large-scale study of online abuse against women on Twitter based on crowd-sourced data. He has been a machine learning mentor at NASA's Frontier Development Lab, helping teams apply AI to scientific space problems. More recently, he led the joint research with Mila Montreal on Multi-Frame Super-Resolution, which was recognized by the European Space Agency for its top performance on the PROBA-V Super-Resolution challenge. His research interests lie in computer vision for satellite imagery, probabilistic modeling, and AI for Social Good.
Self-supervised learning could lead to the creation of AI that's more human-like in its reasoning, according to Turing Award winners Yoshua Bengio and Yann LeCun. Bengio, director at the Montreal Institute for Learning Algorithms, and LeCun, Facebook VP and chief AI scientist, spoke candidly about this and other research trends during a session at the International Conference on Learning Representations (ICLR) 2020, which took place online. Supervised learning entails training an AI model on a labeled data set, and LeCun thinks it'll play a diminishing role as self-supervised learning comes into wider use. Instead of relying on annotations, self-supervised learning algorithms generate labels from data by exposing relationships among the data's parts, a step believed to be critical to achieving human-level intelligence. "Most of what we learn as humans and most of what animals learn is in a self-supervised mode, not a reinforcement mode. It's basically observing the world and interacting with it a little bit, mostly by observation in a task-independent way," said LeCun.
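The core idea, that the training signal comes from the data itself rather than human annotations, can be sketched with a deliberately tiny stand-in for masked prediction (assuming scikit-learn; this is an illustration, not the methods Bengio or LeCun work on): mask one coordinate of each unlabeled vector and train a model to predict it from the rest, the same pretext-task pattern used with masked words or image patches.

```python
# Minimal self-supervised sketch: labels are generated from the data
# itself by masking one coordinate and predicting it from the others.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# "Unlabeled" data with internal structure: the third coordinate is a
# function of the first two (plus a little noise).
X = rng.normal(size=(1000, 2))
third = X[:, 0] + 2.0 * X[:, 1] + rng.normal(scale=0.1, size=1000)
data = np.column_stack([X, third])

# Pretext task: mask column 2 and predict it from columns 0 and 1.
# No human ever labeled anything; the target came from the data.
inputs, targets = data[:, :2], data[:, 2]
model = LinearRegression().fit(inputs, targets)

print(model.score(inputs, targets))  # high R^2: structure learned without labels
```

The model ends up encoding the relationship among the data's parts, which is exactly the kind of signal self-supervised methods exploit at far larger scale.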