
Is AI Development Moving Too Fast?

#artificialintelligence

Is the development of artificial intelligence (AI) moving too fast? Just a decade ago, AI was little more than a laboratory curiosity. Today, it is an unparalleled economic force that can be found all around us. But sadly, so can many of the problems that AI brings. Bias, deepfakes, privacy violations, and cases of malicious use are now more prevalent than ever before.


What's in a name? The 'deep learning' debate (ZDNet)

#artificialintelligence

Monday's historic debate between machine learning luminary Yoshua Bengio and machine learning critic Gary Marcus spilled over into a tit-for-tat between the two in the days that followed, mostly about the status of the term "deep learning." The history of the term shows that its use has at times been opportunistic, yet has done little to advance the science of artificial intelligence. Hence, the current debate will likely not go anywhere, ultimately. Monday night's debate found Bengio and Marcus talking about similar-seeming end goals, such as the need for "hybrid" models of intelligence, perhaps combining neural networks with something like a "symbol" class of object. It was in the details that the two argued over definitions and terminology.


Eight U of T artificial intelligence researchers named CIFAR AI Chairs

#artificialintelligence

Eight University of Toronto artificial intelligence researchers – four of whom are women – have been named CIFAR AI Chairs, a recognition of pioneering work in areas that could have global societal impact. One of the new chairs is Anna Goldenberg, an associate professor of computer science in U of T's Faculty of Arts & Science and the first-ever chair in biomedical informatics and artificial intelligence at the Hospital for Sick Children. She and her colleagues, including U of T's Dr. Peter Laussen, have developed a computer model that uses signals in physiological data, such as a patient's pulse, to detect an oncoming heart attack – giving doctors and nurses vital minutes to intervene and save an infant's life. The early-warning system has been able to predict 70 per cent of heart attacks at least five minutes – and up to 15 minutes – before a patient's heart stops beating. "In machine learning and health care, the key word is prevention," says Goldenberg, whose team is on track to have the system tested in a silent trial in a clinical environment.


This Year's AI (Artificial Intelligence) Breakthroughs

#artificialintelligence

When it comes to AI (artificial intelligence), VCs (venture capitalists) continue to be aggressive with their funding. During the third quarter, 965 AI-related companies in the US raised a total of $13.5 billion. In fact, this year should set a record for total funding (last year's total came to $16.8 billion). Some of the deals have been, well, staggering. Just look at the $1 billion that Microsoft shelled out for an equity stake in OpenAI (the company is one of the few pursuing strong AI). So what has been the result of all this activity? What have been the breakthroughs for AI this year?


Label Smoothing & Deep Learning: Google Brain explains why it works and when to use (SOTA tips)

#artificialintelligence

Müller, Kornblith, and Hinton from Google Brain released a new paper titled "When Does Label Smoothing Help?" that dives deep into how label smoothing affects the final activation layer of deep neural networks. They built a new visualization method to clarify the internal effects of label smoothing and provide new insight into how it works. While label smoothing is widely used, this paper explains why and how it affects neural networks and offers valuable insight into when, and when not, to use it. This article is a summary of the paper's insights to help you quickly leverage the findings in your own deep learning work. The full paper is recommended for deeper analysis.
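The technique the paper studies is simple to sketch. A minimal NumPy version of label smoothing might look like this (the function name and the `eps=0.1` value are illustrative choices, not taken from the paper):

```python
import numpy as np

def smooth_labels(labels, num_classes, eps=0.1):
    """Convert hard integer labels into smoothed one-hot targets.

    Uses the common formulation y = (1 - eps) * onehot + eps / K:
    the true class keeps most of the probability mass, and the
    remaining eps is spread uniformly over all K classes.
    """
    onehot = np.eye(num_classes)[labels]
    return (1.0 - eps) * onehot + eps / num_classes

# Two examples with true classes 0 and 2, out of 3 classes.
targets = smooth_labels(np.array([0, 2]), num_classes=3, eps=0.1)
```

Training against `targets` with the usual cross-entropy loss (instead of the hard one-hot labels) is all that label smoothing amounts to; each row still sums to 1, so it remains a valid probability distribution.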


Capsule Networks: A new and attractive AI architecture

#artificialintelligence

Convolutional neural networks (CNNs) are frequently preferred in computer vision applications because of their successful results on object recognition and classification tasks. CNNs are composed of many neurons stacked together. Computing convolutions across neurons requires a lot of computation, so pooling processes are often used to reduce the size of network layers. Convolutional approaches make it possible to learn many complex features of our data with simple computations. By performing many matrix multiplications and summations on our input, we can arrive at an answer to our question.
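The convolution-then-pooling pipeline described above can be sketched in plain NumPy. This is a toy illustration of the two operations, not a production CNN layer (real frameworks use learned kernels, padding, strides, and vectorized kernels):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image
    and take a weighted sum at each position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: keep only the largest activation
    in each size x size window, shrinking the feature map."""
    h, w = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A 6x6 "image" convolved with a 3x3 kernel gives a 4x4 feature map;
# 2x2 max pooling then shrinks it to 2x2.
feat = conv2d(np.arange(36, dtype=float).reshape(6, 6), np.ones((3, 3)))
pooled = max_pool(feat)
```

The pooling step is exactly the size reduction the summary mentions: it quarters the number of activations while keeping the strongest responses, which is why it cuts the cost of the layers that follow.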


An Epidemic of AI Misinformation

#artificialintelligence

Maybe every paper abstract should have a mandatory field stating the limitations of the proposed approach. That way, some of the science miscommunication and hype could be avoided. The media is often tempted to report each tiny new advance in a field, be it AI or nanotechnology, as a great triumph that will soon fundamentally alter our world. Occasionally, of course, new discoveries are underreported. The transistor did not make huge waves when it was first introduced, and few people initially appreciated the full potential of the Internet.


Learning From The Canadian Model Of AI

#artificialintelligence

The country was either prescient or lucky in continuing to fund neural networks research when the US retreated from it in the 1970s and 80s. As a result, Canadian researchers like Geoffrey Hinton, Yann LeCun, and Yoshua Bengio pushed forward the methods we now call "deep learning." These three researchers won the 2018 Turing Award, often called the Nobel Prize equivalent for computer science. Canada is also known in AI for its collegial public/private ecosystems, which incorporate government funding, venture capital, university research initiatives, and private sector sponsorship.