
generalization



Learning local and compositional representations for zero-shot learning - Microsoft Research

#artificialintelligence

In computer vision, one key property we expect of an intelligent artificial model, agent, or algorithm is that it should be able to correctly recognize the type, or class, of objects it encounters. This is critical in numerous important real-world scenarios: from biomedicine, where an intelligent system might be tasked with distinguishing between cancerous cells and healthy ones, to self-driving cars, where being able to discriminate between pedestrians, other vehicles, and road signs is crucial to navigating roads successfully and safely. Deep learning is one of the most significant tools for state-of-the-art systems in computer vision, and its use has resulted in models that have reached or even exceeded human-level performance in important and challenging real-world image classification tasks. Despite their successes, these models still have difficulty generalizing, or adapting to tasks in testing or deployment scenarios that don't closely resemble the tasks they were trained on. For example, a visual system trained under typical weather conditions in Northern California may fail to properly recognize pedestrians in Quebec because of differences in weather, clothing, demographics, and other features.
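As a hedged illustration of the zero-shot setting the post addresses, here is a minimal NumPy sketch of classic attribute-based zero-shot classification: an attribute predictor trained on seen classes scores unseen classes by matching its predicted attribute vector to per-class attribute prototypes. The class names and attribute values are invented, and this is a generic zero-shot baseline, not Microsoft's local-and-compositional method.

```python
# Minimal attribute-based zero-shot classification sketch (NumPy only).
# All class names and attribute values below are made up for illustration.
import numpy as np

# Rows: candidate classes never seen during training; columns: semantic
# attributes (e.g., "has stripes", "lives in water", "has wings").
class_names = ["zebra", "whale", "eagle"]
class_attributes = np.array([
    [1.0, 0.0, 0.0],   # zebra: stripes, no water, no wings
    [0.0, 1.0, 0.0],   # whale: no stripes, water, no wings
    [0.0, 0.0, 1.0],   # eagle: no stripes, no water, wings
])

def zero_shot_predict(predicted_attributes: np.ndarray) -> str:
    """Match an image's predicted attribute vector to the closest class prototype."""
    a = predicted_attributes / np.linalg.norm(predicted_attributes)
    protos = class_attributes / np.linalg.norm(class_attributes, axis=1, keepdims=True)
    scores = protos @ a          # cosine similarity to each unseen class
    return class_names[int(np.argmax(scores))]

# Suppose an attribute predictor (trained on *seen* classes) outputs this vector:
print(zero_shot_predict(np.array([0.9, 0.1, 0.2])))  # -> "zebra"
```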


Research: Artificial neural networks are more similar to the brain than we thought

#artificialintelligence

This article is part of our reviews of AI research papers, a series of posts that explore the latest findings in artificial intelligence. Consider the animal in the following image. If you recognize it, a quick series of neuron activations in your brain will link its image to its name and other information you know about it (habitat, size, diet, lifespan, etc.). But if, like me, you've never seen this animal before, your mind is now racing through your repertoire of animal species, comparing tails, ears, paws, noses, snouts, and everything else to determine which bucket this odd creature belongs to. Your biological neural network is reprocessing your past experience to deal with a novel situation. Our brains, honed through millions of years of evolution, are very efficient processing machines, sorting through the torrent of information we receive through our sensory inputs and associating known items with their respective categories. That picture, by the way, is an Indian civet, an endangered species that has nothing to do with cats, dogs, or rodents.


Modeling Word Learning and Processing with Recurrent Neural Networks

#artificialintelligence

The paper focuses on what two different types of Recurrent Neural Networks, namely a recurrent Long Short-Term Memory network and a recurrent variant of self-organizing memories, a Temporal Self-Organizing Map, can tell us about how speakers learn and process a set of fully inflected verb forms selected from the top-frequency paradigms of Italian and German. Both architectures, thanks to their re-entrant layer of temporal connectivity, can develop a strong sensitivity to sequential patterns that are highly attested in the training data. The main goal is to evaluate the learning and processing dynamics of verb inflection data in the two neural networks by focusing on the effects of morphological structure on word production and word recognition, as well as on generalization to untrained verb forms. For both models, results show that production, recognition, and generalization are all facilitated for verb forms in regular paradigms. However, the two models are influenced differently by structural effects, with the Temporal Self-Organizing Map more apt to adaptively balance issues of learnability and generalization on the one hand against discriminability on the other.
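For readers unfamiliar with the first of the two architectures, here is a hedged sketch of a character-level recurrent LSTM of the general kind the paper studies, trained to predict the next character of inflected verb forms. The toy Italian-like paradigm, the end-of-word marker, and all hyperparameters are illustrative, not the paper's actual data or setup.

```python
# Hedged sketch: character-level LSTM next-character prediction over a toy
# verb paradigm. Illustrative only; not the paper's model or data.
import torch
import torch.nn as nn

forms = ["parlo", "parli", "parla", "parliamo"]        # toy "paradigm"
chars = sorted({c for w in forms for c in w}) + ["#"]  # "#" = end-of-word marker
idx = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, n_chars, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(n_chars, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)
    def forward(self, x):
        h, _ = self.lstm(self.emb(x))   # re-entrant temporal connectivity
        return self.out(h)

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    for w in forms:
        seq = [idx[c] for c in w] + [idx["#"]]
        x = torch.tensor(seq[:-1]).unsqueeze(0)  # input characters
        y = torch.tensor(seq[1:])                # next-character targets
        logits = model(x).squeeze(0)
        loss = loss_fn(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()
```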


Curriculum for Reinforcement Learning

#artificialintelligence

A curriculum is an efficient tool for humans to progress from simple concepts to hard problems. It breaks down complex knowledge by providing a sequence of learning steps of increasing difficulty. In this post, we will examine how the idea of a curriculum can help reinforcement learning models learn to solve complicated tasks. Teaching integrals or derivatives to a three-year-old who does not even know basic arithmetic sounds like an impossible task. That's why education is important: it provides a systematic way to break down complex knowledge and a nice curriculum for teaching concepts from simple to hard. A curriculum makes learning difficult things easier and more approachable for us humans.
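As a concrete illustration, below is a minimal sketch of one common way to realize a curriculum in RL training code: a scheduler that tracks recent success and advances the agent to a harder task level once performance clears a threshold. The difficulty levels, threshold, and window size are illustrative choices, not taken from the post.

```python
# Minimal curriculum scheduler sketch (plain Python). Levels, threshold, and
# window size are illustrative, not from the post.
from collections import deque

class Curriculum:
    """Advance to a harder task level once the recent success rate is high enough."""
    def __init__(self, levels, threshold=0.8, window=20):
        self.levels = levels          # e.g., goal distances [1, 2, 4, 8]
        self.threshold = threshold
        self.window = deque(maxlen=window)
        self.idx = 0

    @property
    def current_level(self):
        return self.levels[self.idx]

    def report(self, success: bool):
        self.window.append(success)
        full = len(self.window) == self.window.maxlen
        rate = sum(self.window) / len(self.window)
        if full and rate >= self.threshold and self.idx < len(self.levels) - 1:
            self.idx += 1
            self.window.clear()       # re-measure success on the new level

# Usage inside a (hypothetical) training loop:
curriculum = Curriculum(levels=[1, 2, 4, 8])
# env.set_goal_distance(curriculum.current_level)
# ... run an episode, then:
# curriculum.report(episode_succeeded)
```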


How to Prevent Overfitting in Machine Learning Models

#artificialintelligence

Very deep neural networks with a huge number of parameters are very powerful machine learning systems. But in this type of massive network, overfitting is a common and serious problem. Learning how to deal with overfitting is essential to mastering machine learning. The fundamental issue in machine learning is the tension between optimization and generalization. Optimization refers to the process of adjusting a model to get the best performance possible on the training data (the learning in machine learning), whereas generalization refers to how well the trained model performs on data it has never seen before (the test set).
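The snippet below is a hedged PyTorch sketch of three standard remedies for this tension: dropout, weight decay (an L2 penalty), and early stopping on a held-out validation set. The random data exists only to make the example self-contained and runnable; it is not from the article.

```python
# Hedged sketch of dropout + weight decay + early stopping. Toy random data.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(256, 20); y = torch.randint(0, 2, (256,))
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Dropout(p=0.5),              # randomly silence units during training
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)  # L2 penalty
loss_fn = nn.CrossEntropyLoss()

best_val, patience, bad_epochs = float("inf"), 10, 0
for epoch in range(500):
    model.train()
    opt.zero_grad()
    loss_fn(model(Xtr), ytr).backward()
    opt.step()

    model.eval()
    with torch.no_grad():
        val = loss_fn(model(Xva), yva).item()
    if val < best_val - 1e-4:
        best_val, bad_epochs = val, 0
    else:
        bad_epochs += 1
    if bad_epochs >= patience:      # early stopping: validation stopped improving
        break
```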


Mining Class Comparisons In Data Mining

#artificialintelligence

The presentation: the data is presented as generalized relations, crosstabs, bar charts, pie charts, or rules, with contrasting measures that reflect the comparison between the target class and the contrasting classes.
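As a small illustration of the crosstab presentation, here is a hedged pandas sketch that contrasts an attribute's distribution between a target class and a contrasting class; the toy student data is invented for the example.

```python
# Hedged sketch: a row-normalized crosstab comparing an attribute between a
# target class ("grad") and a contrasting class ("undergrad"). Toy data.
import pandas as pd

df = pd.DataFrame({
    "class": ["grad", "grad", "grad", "undergrad", "undergrad", "undergrad"],
    "gpa":   ["high", "high", "medium", "medium", "low", "low"],
})

# Each row shows the attribute distribution within one class, serving as a
# contrasting measure between the target and contrasting classes.
print(pd.crosstab(df["class"], df["gpa"], normalize="index"))
```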


Deep Face Recognition - GeeksforGeeks

#artificialintelligence

DeepFace is the facial recognition system used by Facebook for tagging images. It was proposed by researchers at Facebook AI Research (FAIR) at the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). This approach focuses on the alignment and representation of facial images. We will discuss these two parts in detail. Alignment: the goal of the alignment step is to generate a frontal face from an input image that may contain faces in different poses and at different angles.
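The sketch below is not DeepFace's 3D alignment pipeline; it is a hedged 2D illustration of the same idea using OpenCV: given assumed eye landmarks (fiducial points), rotate the image so the eye line becomes horizontal, yielding a roughly canonical pose. The image and landmark coordinates are hypothetical placeholders.

```python
# NOT DeepFace's full 3D alignment; a minimal 2D sketch of the same idea:
# use fiducial points (assumed eye coordinates) to warp the face toward a
# canonical frontal pose. Landmarks and image here are placeholders.
import numpy as np
import cv2

img = np.zeros((200, 200, 3), dtype=np.uint8)        # stand-in for a face crop
left_eye, right_eye = (70.0, 90.0), (130.0, 80.0)    # assumed detected landmarks

# Rotate about the midpoint between the eyes so the eye line becomes horizontal.
dx, dy = right_eye[0] - left_eye[0], right_eye[1] - left_eye[1]
angle = np.degrees(np.arctan2(dy, dx))
center = ((left_eye[0] + right_eye[0]) / 2, (left_eye[1] + right_eye[1]) / 2)

M = cv2.getRotationMatrix2D(center, angle, 1.0)
aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```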



The Variational Bandwidth Bottleneck: Stochastic Evaluation on an Information Budget

arXiv.org Machine Learning

In many applications, it is desirable to extract only the relevant information from complex input data, which involves deciding which input features are relevant. The information bottleneck method formalizes this as an information-theoretic optimization problem by maintaining an optimal tradeoff between compression (throwing away irrelevant input information) and predicting the target. In many problem settings, including the reinforcement learning problems we consider in this work, we might prefer to compress only part of the input. This is typically the case when we have a standard conditioning input, such as a state observation, and a "privileged" input, which might correspond to the goal of a task, the output of a costly planning algorithm, or communication with another agent. In such cases, we might prefer to compress the privileged input, either to achieve better generalization (e.g., with respect to goals) or to minimize access to costly information (e.g., in the case of communication). Practical implementations of the information bottleneck based on variational inference require access to the privileged input in order to compute the bottleneck variable, so although they perform compression, the compression operation itself needs unrestricted, lossless access. In this work, we propose the variational bandwidth bottleneck, which estimates, for each example, the value of the privileged information before seeing it, i.e., based only on the standard input, and then stochastically chooses whether to access the privileged input. We formulate a tractable approximation to this framework and demonstrate in a series of reinforcement learning experiments that it can improve generalization and reduce access to computationally costly information.
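To make the gating idea concrete, here is a hedged PyTorch sketch of the core decision: a network that sees only the standard input estimates an access probability and stochastically gates the privileged input, falling back to a prior code when access is denied. Shapes, layer sizes, and the zero prior are illustrative, and the paper's tractable gradient approximation is omitted.

```python
# Hedged sketch of the bandwidth-gating decision; not the paper's exact model.
import torch
import torch.nn as nn

class BandwidthGate(nn.Module):
    def __init__(self, state_dim=8, priv_dim=4, code_dim=4):
        super().__init__()
        # Estimates the value of privileged info from the standard input alone.
        self.value_net = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(),
                                       nn.Linear(32, 1))
        self.encoder = nn.Linear(priv_dim, code_dim)  # bottleneck encoder

    def forward(self, state, privileged):
        p = torch.sigmoid(self.value_net(state))   # access probability from state only
        gate = torch.bernoulli(p)                   # stochastic access decision
        code = self.encoder(privileged)             # in practice, skipped when gate == 0
        prior = torch.zeros_like(code)              # fallback: sample/mean of a prior
        return gate * code + (1 - gate) * prior, p

gate = BandwidthGate()
z, p = gate(torch.randn(1, 8), torch.randn(1, 4))
```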