
Collaborating Authors: bengio


Google is poisoning its reputation with AI researchers

#artificialintelligence

Google has worked for years to position itself as a responsible steward of AI. Its research lab hires respected academics, publishes groundbreaking papers, and steers the agenda at the field's biggest conferences. But now its reputation has been badly, perhaps irreversibly damaged, just as the company is struggling to put a politically palatable face on its empire of data. The company's decision to fire Timnit Gebru and Margaret Mitchell -- two of its top AI ethics researchers, who happened to be examining the downsides of technology integral to Google's search products -- has triggered waves of protest. Academics have registered their discontent in various ways.


High-Ranking Researcher Resigns From Google Artificial Intelligence Team

#artificialintelligence

Google in February fired Mitchell, a lead researcher, following a controversy. Google on Tuesday confirmed that a leader of its artificial intelligence team has resigned, a departure that comes after the controversial firing of two colleagues. The internet giant declined to comment further on the resignation of Samy Bengio, who had worked at Google for about four years and specialized in machine learning. "While I am looking forward to my next challenge, there's no doubt that leaving this wonderful team is really difficult," Bengio wrote in a work email first cited by Bloomberg. Bengio did not refer to Timnit Gebru or Margaret Mitchell, two former members of the team focused on ethical artificial intelligence.


Artificial Neural Nets Finally Yield Clues to How Brains Learn

#artificialintelligence

In 2007, some of the leading thinkers behind deep neural networks organized an unofficial "satellite" meeting at the margins of a prestigious annual conference on artificial intelligence. The conference had rejected their request for an official workshop; deep neural nets were still a few years away from taking over AI. The bootleg meeting's final speaker was Geoffrey Hinton of the University of Toronto, the cognitive psychologist and computer scientist responsible for some of the biggest breakthroughs in deep nets. He started with a quip: "So, about a year ago, I came home to dinner, and I said, 'I think I finally figured out how the brain works,' and my 15-year-old daughter said, 'Oh, Daddy, not again.'" Hinton continued, "So, here's how it works."


Nvidia's GTC will feature deep learning cabal of LeCun, Hinton, Bengio

ZDNet

Eleven years after Geoffrey Hinton couldn't get a free sample from Nvidia, the Turing Award winner will join his comrades Yoshua Bengio and Yann LeCun as headline speakers at the 2021 GTC conference hosted by Nvidia, the company announced Tuesday. The event, running April 12 through April 16, will feature the customary keynote from Nvidia CEO Jensen Huang, starting at 8:30 a.m. PT on April 12. "GTC brings together a massive ecosystem of developers, researchers and corporate leaders who are using AI and accelerated computing to change the world," Huang said in the press release. "We have our strongest program ever this year, highlighted by Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, among 1,300 sessions focused on every aspect of computing and networking. There is no better place to see the future and how you can help shape it." Hinton, a professor at the University of Toronto and a researcher with Google's AI division, along with Bengio of Canada's MILA institute and LeCun of Facebook, have called themselves co-conspirators in the revival of the once-moribund field of "deep learning." The three received the prestigious Turing Award, named in honor of computing pioneer Alan Turing, in 2019 for their contributions to computing. The conference will also host the three scholars' arch-nemesis, NYU professor Gary Marcus, who has been a relentless critic of deep learning and who sparred with Bengio during a 2019 debate. More on the conference is available on the Nvidia website. What has been labeled the deep learning revolution, the breakthrough in multi-layer perceptrons, or neural networks, circa 2006, is also the trend that made possible the huge expansion in Nvidia's data center business. During a meeting with journalists a year ago in New York, at the annual AAAI conference, Hinton recalled with mirth how he had been turned down by Nvidia eleven years ago when he'd sought a free graphics card. "I made a big mistake back in 2009 with Nvidia," Hinton recalled with a grin. "In 2009, I told an audience of 1,000 grad students they should go and buy Nvidia GPUs to speed up their neural nets."


Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models

arXiv.org Machine Learning

Deep generative modelling is a class of techniques that train deep neural networks to model the distribution of training samples. Research has fragmented into various interconnected approaches, each of which makes trade-offs including run-time, diversity, and architectural restrictions. In particular, this compendium covers energy-based models, variational autoencoders, generative adversarial networks, autoregressive models, and normalizing flows, in addition to numerous hybrid approaches. These techniques are drawn together under a single cohesive framework, compared and contrasted to explain the premises behind each, while reviewing current state-of-the-art advances and implementations.
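
As a concrete illustration of what "modelling the distribution of training samples" means for one of the reviewed families, the following is a minimal sketch of a variational autoencoder trained with a negative-ELBO loss. It assumes PyTorch; the TinyVAE module, its layer sizes, and the toy batch are illustrative choices, not code from the review.

# A minimal VAE sketch (illustrative, not from the paper), assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)       # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)   # posterior log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    # Negative ELBO = reconstruction term + KL(q(z|x) || N(0, I)), averaged per sample.
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return (recon + kl) / x.size(0)

if __name__ == "__main__":
    model = TinyVAE()
    x = torch.rand(32, 784)          # toy batch of "images" with values in [0, 1)
    x_logits, mu, logvar = model(x)
    loss = neg_elbo(x, x_logits, mu, logvar)
    loss.backward()                  # one training step's worth of gradient signal
    print(float(loss))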


A.I. Here, There, Everywhere

#artificialintelligence

I wake up in the middle of the night. "Hey, Google, what's the temperature in Zone 2?" I say into the darkness. A disembodied voice responds: "The temperature in Zone 2 is 52 degrees." "Set the heat to 68," I say, and then I ask the gods of artificial intelligence to turn on the light. Many of us already live with A.I., an array of unseen algorithms that control our Internet-connected devices, from smartphones to security cameras and cars that heat the seats before you've even stepped out of the house on a frigid morning.


Siamese Labels Auxiliary Network (SiLaNet)

arXiv.org Artificial Intelligence

Auxiliary information is attracting more and more attention in the area of machine learning. Attempts so far to include such auxiliary information in state-of-the-art learning processes have often been based on simply appending auxiliary features at the data level or feature level. In this paper, we propose a novel training method with new options and architectures: siamese labels, which are used as auxiliary modules during the training phase and removed during the testing phase. The siamese label module makes the network easier to train and improves performance at test time. The main contributions can be summarized as follows: 1) siamese labels are proposed for the first time as auxiliary information to improve learning efficiency; 2) we establish a new architecture, the Siamese Labels Auxiliary Network (SilaNet), which assists the training of the model; 3) the Siamese Labels Auxiliary Network compresses the model parameters by 50% while maintaining high accuracy. For comparison, we tested the network on CIFAR-10 and CIFAR-100 using several common models. The proposed SilaNet achieves excellent accuracy and robustness.
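
The abstract's central pattern, an auxiliary branch that shapes training and is dropped at test time, can be sketched roughly as follows. This assumes PyTorch; BackboneWithAuxHead, the auxiliary cross-entropy term, and its 0.5 weight are hypothetical placeholders, not the paper's actual SilaNet architecture or loss.

# A hypothetical sketch of a training-only auxiliary head, assuming PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BackboneWithAuxHead(nn.Module):
    def __init__(self, in_dim=3 * 32 * 32, hidden=256, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(),
                                      nn.Linear(in_dim, hidden), nn.ReLU())
        self.main_head = nn.Linear(hidden, num_classes)   # kept at test time
        self.aux_head = nn.Linear(hidden, num_classes)    # used only during training

    def forward(self, x):
        feats = self.backbone(x)
        if self.training:
            return self.main_head(feats), self.aux_head(feats)
        return self.main_head(feats)

if __name__ == "__main__":
    model = BackboneWithAuxHead()
    x = torch.randn(8, 3, 32, 32)              # toy CIFAR-sized batch
    y = torch.randint(0, 10, (8,))

    model.train()
    main_logits, aux_logits = model(x)
    # Auxiliary loss (weight 0.5 chosen arbitrarily) only influences training.
    loss = F.cross_entropy(main_logits, y) + 0.5 * F.cross_entropy(aux_logits, y)
    loss.backward()

    model.eval()                               # auxiliary branch is simply unused at inference
    preds = model(x).argmax(dim=1)
    print(preds.shape)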


Transformers with Competitive Ensembles of Independent Mechanisms

arXiv.org Artificial Intelligence

An important development in deep learning from the earliest MLPs has been a move towards architectures with structural inductive biases which enable the model to keep distinct sources of information and routes of processing well-separated. This structure is linked to the notion of independent mechanisms from the causality literature, in which a mechanism is able to retain the same processing as irrelevant aspects of the world are changed. For example, convnets enable separation over positions, while attention-based architectures (especially Transformers) learn which combination of positions to process dynamically. In this work we explore a way in which the Transformer architecture is deficient: it represents each position with a large monolithic hidden representation and a single set of parameters which are applied over the entire hidden representation. This potentially throws unrelated sources of information together, and limits the Transformer's ability to capture independent mechanisms. To address this, we propose Transformers with Independent Mechanisms (TIM), a new Transformer layer which divides the hidden representation and parameters into multiple mechanisms, which only exchange information through attention. Additionally, we propose a competition mechanism which encourages these mechanisms to specialize over time steps, and thus be more independent. We study TIM on a large-scale BERT model, on the Image Transformer, and on speech enhancement and find evidence for semantically meaningful specialization as well as improved performance.
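
To make the core idea concrete, here is a rough sketch, assuming PyTorch, of a layer that splits the hidden state into separate mechanisms with their own parameters, lets each exchange information through attention, and gates their updates with a softmax competition over time steps. MechanismLayer, its dimensions, and the gating scheme are illustrative simplifications, not the exact TIM layer from the paper.

# A rough, illustrative sketch of mechanism-wise processing with softmax competition.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MechanismLayer(nn.Module):
    def __init__(self, d_model=256, n_mech=4, heads_per_mech=2):
        super().__init__()
        assert d_model % n_mech == 0
        self.n_mech, self.d_mech = n_mech, d_model // n_mech
        # Each mechanism gets its own attention, feed-forward, and gating parameters.
        self.attn = nn.ModuleList([nn.MultiheadAttention(self.d_mech, heads_per_mech,
                                                         batch_first=True)
                                   for _ in range(n_mech)])
        self.ffn = nn.ModuleList([nn.Sequential(nn.Linear(self.d_mech, 4 * self.d_mech),
                                                nn.ReLU(),
                                                nn.Linear(4 * self.d_mech, self.d_mech))
                                  for _ in range(n_mech)])
        self.gate = nn.ModuleList([nn.Linear(self.d_mech, 1) for _ in range(n_mech)])

    def forward(self, x):
        # x: (batch, seq, d_model) split into per-mechanism feature chunks.
        chunks = torch.chunk(x, self.n_mech, dim=-1)
        # Competition: each mechanism bids for each time step; softmax normalizes the bids.
        scores = torch.cat([g(c) for g, c in zip(self.gate, chunks)], dim=-1)  # (B, T, n_mech)
        weights = F.softmax(scores, dim=-1)
        outs = []
        for i, (attn, ffn, c) in enumerate(zip(self.attn, self.ffn, chunks)):
            a, _ = attn(c, c, c)                       # information exchange via attention
            h = c + weights[..., i:i + 1] * ffn(a)     # gated, mechanism-specific update
            outs.append(h)
        return torch.cat(outs, dim=-1)

if __name__ == "__main__":
    layer = MechanismLayer()
    x = torch.randn(2, 10, 256)
    print(layer(x).shape)   # torch.Size([2, 10, 256])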


Global Cooperation & Guidelines Will Let Countries Use AI For Good

#artificialintelligence

Yoshua Bengio is one of the world's leading experts in artificial intelligence and deep learning. Also known as the father of deep learning, he says that for the world to change for the better with AI, a global shift must come in how organizations and governments share their research. In many countries, AI research is conducted by private companies, government entities, and academic institutions. These institutions must foster a global culture of open science and rethink how they encourage the development of impactful artificial intelligence.

