Goto

Collaborating Authors

Clever Artificial Intelligence Hides Information to Cheat Later at Task

#artificialintelligence

Artificial Intelligence has become so intelligent that it is learning when to hide some information which can be used later. Research from Stanford University and Google discovered that a machine learning agent tasked with transforming aerial images into map was hiding information in order to cheat later. CycleGAN is a neural network that learns to transform images. In the early results, the machine learning agent was doing well but later when it was asked to do the reverse process of reconstructing aerial photographs from street maps it showed up information which was eliminated in the first process, TechCrunch reported. For instance, skylights on a roof that were eliminated in the process of creating a street map would reappear when the agent was asked to reverse the process.


Neural Network Learning: Theoretical Foundations

AI Magazine

Machine learning, and more particularly learning with neural networks, can be viewed as just such a phenomenon. Frequently remarkable performance is obtained by training networks to perform relatively complex AI tasks. The need for a fuller theoretical analysis and understanding of their performance has been a major research objective for the last decade. Neural Network Learning: Theoretical Foundations reports on important developments that have been made toward this goal within the computational learning theory framework.


Q&A: The Network Effect

Communications of the ACM

Deep learning might be a booming field these days, but few people remember its time in the intellectual wilderness better than Yann LeCun, director of Facebook Artificial Intelligence Research (FAIR) and a part-time professor at New York University. LeCun developed convolutional neural networks while a researcher at Bell Laboratories in the late 1980s. Now, the group he leads at Facebook is using them to improve computer vision, to make predictions in the face of uncertainty, and even to understand natural language. Your work at FAIR ranges from long-term theoretical research to applications that have real product impact.


Entropy and mutual information in models of deep neural networks

Neural Information Processing Systems

We examine a class of stochastic deep learning models with a tractable method to compute information-theoretic quantities. Our contributions are three-fold: (i) We show how entropies and mutual informations can be derived from heuristic statistical physics methods, under the assumption that weight matrices are independent and orthogonally-invariant. (ii) We extend particular cases in which this result is known to be rigorously exact by providing a proof for two-layers networks with Gaussian random weights, using the recently introduced adaptive interpolation method. We study the behavior of entropies and mutual information throughout learning and conclude that, in the proposed setting, the relationship between compression and generalization remains elusive. Papers published at the Neural Information Processing Systems Conference.


Capsule Networks: The New Deep Learning Network – Towards Data Science

#artificialintelligence

Convolutional Networks have been hugely successful in the field of deep learning and they are the primary reason why deep learning is so popular right now! They have been very successful, but they have drawbacks in their basic architecture, causing them to not work very well for some tasks. CNN's detect features in images and learn how to recognize objects with this information. Layers near the start detecting really simple features like edges and layers that are deeper can detect more complex features like eyes, noses, or an entire face. It then uses all of these features which it has learned, to make a final prediction.