Breakthrough Days: The Urgency of Science + Collective Problem Solving - AI for Good Global Summit 2020

#artificialintelligence

Yoshua Bengio is recognized as one of the world's leading experts in artificial intelligence and a pioneer in deep learning. Since 1993, he has been a professor in the Department of Computer Science and Operational Research at the Université de Montréal. He is the founder and scientific director of Mila, the Quebec Institute of Artificial Intelligence, the world's largest university-based research group in deep learning. He is a member of the NeurIPS board, co-founder and general chair of the ICLR conference, and program director of the CIFAR program on Learning in Machines and Brains, of which he is also a Fellow. In 2018, thanks to his many publications, Yoshua Bengio ranked as the computer scientist with the most new citations worldwide.


[D] Non-US research groups working on Deep Learning?

#artificialintelligence

Almost every group on earth is working on 'deep learning' in some form. In Canada there are the big three research units: MILA in Montreal, Vector in Toronto, and AMII in Edmonton. MILA and Vector each have several research groups/universities affiliated with them, in Quebec and Ontario respectively. Weirdly, folks at UBC are also affiliated with Vector. AMII is mostly University of Alberta.


DeepMind Found New Approach To Create Faster RL Models

#artificialintelligence

Recently, researchers from DeepMind and McGill University proposed new approaches to speed up the solution of complex reinforcement learning problems. Their main contribution is a divide-and-conquer approach to reinforcement learning (RL), combined with deep learning to scale up the capabilities of the agents. For a few years now, reinforcement learning has provided a conceptual framework for addressing several fundamental problems. It has been applied in areas such as controlling robots, simulating artificial limbs, developing self-driving cars, and playing games like poker and Go. The recent combination of reinforcement learning with deep learning has also delivered several impressive achievements and is seen as a promising approach to important sequential decision-making problems that are currently intractable.
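The article stays high level, but as a rough illustration of what "combining RL with deep learning" typically means in practice, here is a minimal deep Q-learning-style sketch; the network architecture, hyperparameters, and batch format are assumptions for illustration, not DeepMind's divide-and-conquer method.

```python
import torch
import torch.nn as nn

# Minimal deep Q-learning sketch: a small neural network approximates the
# action-value function Q(s, a), and each training step moves its prediction
# toward the one-step bootstrapped target r + gamma * max_a' Q(s', a').
class QNet(nn.Module):
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, obs):
        return self.net(obs)

def train_step(qnet, optimizer, batch, gamma=0.99):
    obs, action, reward, next_obs, done = batch  # tensors sampled from a replay buffer
    q = qnet(obs).gather(1, action.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * qnet(next_obs).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full agent, these updates would be interleaved with acting in the environment and storing transitions in a replay buffer.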


Complete Machine Learning and Data Science: Zero to Mastery

#artificialintelligence

Created by Andrei Neagoie. This is a brand new Machine Learning and Data Science course, launched in January 2020 and updated this month with the latest trends and skills! Become a complete Data Scientist and Machine Learning engineer! Join a live online community of 270,000 engineers and a course taught by industry experts who have actually worked for large companies in places like Silicon Valley and Toronto. Graduates of Andrei's courses are now working at Google, Tesla, Amazon, Apple, IBM, JP Morgan, Facebook, and other top tech companies. Learn Data Science and Machine Learning from scratch, get hired, and have fun along the way with the most modern, up-to-date Data Science course on Udemy (we use the latest version of Python, TensorFlow 2.0, and other libraries).


Shipwrecks detection using bathymetric data

#artificialintelligence

In this notebook, we will use bathymetry data provided by NOAA to detect shipwrecks in the Shell Bank Basin area near New York City in the United States. A Bathymetric Attributed Grid (BAG) is a two-band image in which one band is elevation and the other is uncertainty (the uncertainty of the elevation values). We applied deep learning methods for the detection after pre-processing the data (explained in Preprocess bathymetric data). One important pre-processing step is applying the shaded relief function provided in ArcGIS, which NOAA also uses in one of their BAG visualizations. Shaded relief is a 3D-style representation of the terrain that distinguishes the shipwrecks clearly from the background and reveals them.
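The notebook relies on ArcGIS's shaded relief raster function; as a rough, library-agnostic sketch of the same idea, the snippet below computes a simple hillshade from the elevation band of a BAG with NumPy. The file name, band order, and sun angles are assumptions for illustration.

```python
import numpy as np
import rasterio  # can read BAG files when GDAL's BAG driver is available

# Assumed layout: band 1 = elevation, band 2 = uncertainty.
with rasterio.open("shell_bank_basin.bag") as src:  # hypothetical file name
    elevation = src.read(1).astype("float64")
    cell_size = src.res[0]

def hillshade(elev, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
    """Simple hillshade: render the terrain as if lit by a sun at a fixed angle."""
    az = np.radians(360.0 - azimuth_deg + 90.0)   # convert compass azimuth to math angle
    alt = np.radians(altitude_deg)
    dy, dx = np.gradient(elev, cellsize)          # surface gradients in y and x
    slope = np.arctan(np.hypot(dx, dy))
    aspect = np.arctan2(-dx, dy)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

relief = hillshade(elevation, cell_size)  # this image is what the detector would consume
```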


Deep Learning for COVID-19 Diagnosis

#artificialintelligence

Over the last several months, the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has rapidly become a global pandemic, resulting in nearly 480,000 COVID-19-related deaths as of June 25, 2020 [6]. While the disease can manifest in a variety of ways, ranging from asymptomatic infection or flu-like symptoms to acute respiratory distress syndrome, the most common presentation associated with morbidity and mortality is the presence of opacities and consolidation in a patient's lungs. Upon inhalation, the virus attacks and inhibits the lungs' alveoli, which are responsible for oxygen exchange. This opacification is visible on computed tomography (CT) scans. Due to their increased densities, these areas appear as partially opaque regions with increased attenuation, known as ground-glass opacities (GGOs).
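As a rough illustration of how that attenuation difference can be used computationally, the sketch below flags lung voxels whose Hounsfield-unit values fall in a commonly cited ground-glass range; the threshold values and array shapes are assumptions for illustration, not a clinically validated rule.

```python
import numpy as np

# Normal aerated lung has very low attenuation (around -900 HU), while
# ground-glass opacities sit at higher (less negative) values.
# The range below is an assumed ballpark, not a diagnostic threshold.
GGO_HU_RANGE = (-700, -300)

def ggo_candidate_mask(ct_hu: np.ndarray, lung_mask: np.ndarray) -> np.ndarray:
    """Flag voxels inside the lungs whose attenuation falls in the assumed GGO range."""
    lo, hi = GGO_HU_RANGE
    return lung_mask & (ct_hu >= lo) & (ct_hu <= hi)

# Toy usage with synthetic data standing in for a real CT slice and lung segmentation.
ct = np.random.uniform(-1000, 100, size=(64, 64)).astype(np.float32)
lungs = np.ones_like(ct, dtype=bool)
print(ggo_candidate_mask(ct, lungs).mean())  # fraction of candidate voxels
```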


The human brain built by AI: A transatlantic collaboration

#artificialintelligence

The Helmholtz International BigBrain Analytics and Learning Laboratory (HIBALL) is a collaboration between McGill University and Forschungszentrum Jülich to develop next-generation high-resolution human brain models using cutting-edge machine learning and deep learning methods and high-performance computing. HIBALL is based on the high-resolution BigBrain model first published by the Jülich and McGill teams in 2013. Over the next five years, the lab will be funded with a total of up to 6 million euros by the German Helmholtz Association, Forschungszentrum Jülich, and Healthy Brains, Healthy Lives at McGill University. In 2003, when Jülich neuroscientist Katrin Amunts and her Canadian colleague Alan Evans began scanning 7,404 histological sections of a human brain, it was completely unclear whether it would ever be possible to reconstruct this brain on the computer in three dimensions. At that time, the technical means to cope with the huge amount of data simply did not exist.


AI is reinventing the way we invent

#artificialintelligence

Drug discovery is a hugely expensive and often frustrating process. Medicinal chemists must guess which compounds might make good medicines, using their knowledge of how a molecule's structure affects its properties. They synthesize and test countless variants, and most are failures. "Coming up with new molecules is still an art, because you have such a huge space of possibilities," says Barzilay. "It takes a long time to find good drug candidates." By speeding up this critical step, deep learning could offer far more opportunities for chemists to pursue, making drug discovery much quicker.


Startup Tenstorrent shows AI is changing computing and vice versa

#artificialintelligence

In 2016, numerous experienced computer chip designers set out on their own to design novel kinds of parts to improve the performance of artificial intelligence. It has taken a few years, but the world is finally seeing what those young hopefuls have been working on. The new chips coming out suggest, as ZDNet has reported in the past, that AI is totally changing the nature of computing. They also suggest that changes in computing are going to have an effect on how artificial intelligence programs, such as deep learning neural networks, are designed. Case in point: startup Tenstorrent, founded in 2016 and headquartered in Toronto, Canada, on Thursday unveiled its first chip, "Grayskull," at a microprocessor conference run by the legendary computer chip analysis firm The Linley Group.


Self-supervised learning in Audio and Speech

#artificialintelligence

The ongoing success of deep learning techniques depends on the quality of the representations automatically discovered from data [1]. These representations must capture important underlying structures in the raw input, e.g., intermediate concepts, features, or latent variables that are useful for the downstream task. While supervised learning on large annotated corpora can yield useful representations, collecting large amounts of annotated examples is costly, time-consuming, and not always feasible. This is particularly problematic for a large variety of applications. In the speech domain, for instance, there are many low-resource languages, where progress is dramatically slower than in high-resource languages such as English.
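The snippet stops at the motivation, but as a rough sketch of what a self-supervised objective can look like in practice, here is a minimal contrastive (InfoNCE-style) loss in PyTorch, where embeddings of two augmented views of the same utterances are pulled together and mismatched pairs in the batch are pushed apart. The encoder, augmentations, and dimensions are placeholders, not a specific published method.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive loss: each row of z1 should match the same row of z2."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(z1.size(0))     # positive pairs sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: stand-ins for encoder outputs of two augmented views of a batch of audio clips.
batch_size, embed_dim = 8, 128
view1 = torch.randn(batch_size, embed_dim)  # encoder(augment_a(audio)) in a real pipeline
view2 = torch.randn(batch_size, embed_dim)  # encoder(augment_b(audio))
print(info_nce_loss(view1, view2).item())
```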