Latest Research Based On AI Building AI Models

#artificialintelligence

Artificial intelligence (AI) is primarily a math problem. Deep neural networks, a type of AI that learns to discover patterns in data, began to surpass standard algorithms 10 years ago because we finally had enough data and processing capacity to take full advantage of them. Today's neural networks are even hungrier for data and processing power. Training them requires fine-tuning the values of millions, if not billions, of parameters that describe these networks and represent the strength of the connections between artificial neurons. The goal is to discover near-perfect values for these parameters, a process known as optimization, but training the networks to reach that point is difficult.
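As a point of reference, here is a minimal sketch of what that conventional optimization looks like, assuming PyTorch; the model, data, and hyperparameters below are illustrative placeholders, not anything from the research described here:

import torch
import torch.nn as nn

# Conventional training: every parameter is adjusted iteratively by gradient descent.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Synthetic data standing in for a real dataset.
x = torch.randn(256, 32)
y = torch.randint(0, 10, (256,))

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # how wrong the current parameter values are
    loss.backward()               # gradients with respect to every parameter
    optimizer.step()              # nudge all parameters toward better values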


Researchers Build AI That Builds AI

#artificialintelligence

Boris Knyazev of the University of Guelph in Ontario and his colleagues have designed and trained a "hypernetwork" that could speed up the training of other neural networks. Given a new, untrained deep neural network designed for some task, the hypernetwork predicts the parameters for the new network in a fraction of a second, and in theory could make training unnecessary. Because it can cheaply equip candidate networks with usable parameters, the hypernetwork can also help find the best deep neural network architecture for a given task. Their system is a graph hypernetwork, and the name outlines the approach: it treats an architecture as a graph and uses a graph neural network to predict its parameters. The work may also have deeper theoretical implications.
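To make the idea concrete, here is a minimal, hypothetical sketch of a hypernetwork in PyTorch: a generator network emits all the weights of a small two-layer target network in one forward pass. The layer sizes and the flat weight-vector decoding are illustrative assumptions and are far simpler than the model described in the article:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyHyperNetwork(nn.Module):
    # Predicts every weight of a fixed 32 -> 64 -> 10 target MLP in one pass.
    def __init__(self, arch_embed_dim=16, in_dim=32, hidden=64, out_dim=10):
        super().__init__()
        self.in_dim, self.hidden, self.out_dim = in_dim, hidden, out_dim
        n_params = in_dim * hidden + hidden + hidden * out_dim + out_dim
        self.generator = nn.Sequential(
            nn.Linear(arch_embed_dim, 128), nn.ReLU(),
            nn.Linear(128, n_params))          # emits all target parameters at once

    def forward(self, arch_embedding, x):
        p = self.generator(arch_embedding)      # one forward pass -> flat parameter vector
        i = 0
        w1 = p[i:i + self.in_dim * self.hidden].view(self.hidden, self.in_dim)
        i += self.in_dim * self.hidden
        b1 = p[i:i + self.hidden]; i += self.hidden
        w2 = p[i:i + self.hidden * self.out_dim].view(self.out_dim, self.hidden)
        i += self.hidden * self.out_dim
        b2 = p[i:i + self.out_dim]
        h = F.relu(F.linear(x, w1, b1))         # run the target net with predicted weights
        return F.linear(h, w2, b2)

hyper = TinyHyperNetwork()
arch_embedding = torch.randn(16)                # toy description of the target architecture
logits = hyper(arch_embedding, torch.randn(8, 32))
print(logits.shape)                             # torch.Size([8, 10])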


AI that builds AI: Self-creation technology is taking a new shape

#artificialintelligence

Artificial intelligence (AI) is largely a game of numbers. Deep neural networks, a type of AI that learns to recognize patterns in data, began outperforming standard algorithms 10 years ago because we finally had enough data and processing capability to fully utilize them. Today's neural nets are even more data- and power-hungry. Training them necessitates fine-tuning the values of millions, if not billions, of parameters that define these networks and represent the strength of the interconnections between artificial neurons. The goal is to obtain near-ideal settings for them, a process called optimization, but teaching the networks to get there is difficult.


Graph HyperNetworks for Neural Architecture Search

arXiv.org Machine Learning

Neural architecture search (NAS) automatically finds the best task-specific neural network topology, outperforming many manual architecture designs. However, it can be prohibitively expensive as the search requires training thousands of different networks, while each can last for hours. In this work, we propose the Graph HyperNetwork (GHN) to amortize the search cost: given an architecture, it directly generates the weights by running inference on a graph neural network. GHNs model the topology of an architecture and therefore can predict network performance more accurately than regular hypernetworks and premature early stopping. To perform NAS, we randomly sample architectures and use the validation accuracy of networks with GHN generated weights as the surrogate search signal. GHNs are fast: they can search nearly 10× faster than other random search methods on CIFAR-10 and ImageNet. GHNs can be further extended to the anytime prediction setting, where they have found networks with better speed-accuracy tradeoff than the state-of-the-art manual designs. The success of deep learning marks the transition from manual feature engineering to automated feature learning. However, designing effective neural network architectures requires expert domain knowledge and repetitive trial and error.
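The search procedure the abstract describes reduces to a simple loop. Below is a minimal, hypothetical sketch of it in PyTorch: architectures are sampled at random and ranked by validation accuracy using predicted rather than trained weights. The search space, the data, and the predict_weights stub (standing in for a trained Graph HyperNetwork) are illustrative assumptions, not the paper's implementation:

import random
import torch
import torch.nn as nn

# Synthetic validation set standing in for CIFAR-10/ImageNet validation data.
val_x = torch.randn(512, 32)
val_y = torch.randint(0, 10, (512,))

def sample_architecture():
    # Random depth/width choice standing in for a real NAS search space.
    return random.choice([1, 2, 3]), random.choice([32, 64, 128])

def build_network(depth, width):
    layers, d_in = [], 32
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.ReLU()]
        d_in = width
    layers.append(nn.Linear(d_in, 10))
    return nn.Sequential(*layers)

def predict_weights(net):
    # Stub for GHN inference: a real GHN would set the parameters via a
    # graph-neural-network forward pass over the architecture's graph.
    for p in net.parameters():
        nn.init.normal_(p, std=0.02)   # placeholder, not the paper's method
    return net

best_acc, best_arch = 0.0, None
for _ in range(50):                                   # random architecture search
    arch = sample_architecture()
    net = predict_weights(build_network(*arch))
    with torch.no_grad():                             # surrogate signal: val accuracy
        acc = (net(val_x).argmax(dim=1) == val_y).float().mean().item()
    if acc > best_acc:
        best_acc, best_arch = acc, arch
print("best architecture (depth, width):", best_arch)  # only the winner gets fully trained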


Parameter Prediction for Unseen Deep Architectures

arXiv.org Machine Learning

Deep learning has been successful in automating the design of features in machine learning pipelines. However, the algorithms optimizing neural network parameters remain largely hand-designed and computationally inefficient. We study if we can use deep learning to directly predict these parameters by exploiting the past knowledge of training other networks. We introduce a large-scale dataset of diverse computational graphs of neural architectures - DeepNets-1M - and use it to explore parameter prediction on CIFAR-10 and ImageNet. By leveraging advances in graph neural networks, we propose a hypernetwork that can predict performant parameters in a single forward pass taking a fraction of a second, even on a CPU. The proposed model achieves surprisingly good performance on unseen and diverse networks. For example, it is able to predict all 24 million parameters of a ResNet-50 achieving a 60% accuracy on CIFAR-10. On ImageNet, top-5 accuracy of some of our networks approaches 50%. Our task along with the model and results can potentially lead to a new, more computationally efficient paradigm of training networks. Our model also learns a strong representation of neural architectures enabling their analysis.
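The core mechanism, a graph neural network that reads an architecture's computational graph and decodes per-layer weights, can be sketched as follows. This is a hypothetical, much-simplified illustration in PyTorch; the node features, message-passing scheme, and decoder are assumptions, not the authors' model or released code:

import torch
import torch.nn as nn

class GraphParamPredictor(nn.Module):
    # Nodes = layers of the target network, edges = data flow between them.
    def __init__(self, node_dim=32, max_fan_in=64, max_fan_out=64):
        super().__init__()
        self.msg = nn.Linear(node_dim, node_dim)              # message function
        self.upd = nn.GRUCell(node_dim, node_dim)             # node-state update
        self.decode = nn.Linear(node_dim, max_fan_out * max_fan_in)

    def forward(self, node_feats, edges, shapes, rounds=3):
        h = node_feats                                        # (n_nodes, node_dim)
        for _ in range(rounds):                               # message passing
            agg = torch.zeros_like(h)
            for src, dst in edges:                            # sum incoming messages
                agg[dst] += self.msg(h[src])
            h = self.upd(agg, h)
        weights = []
        for i, (out_d, in_d) in enumerate(shapes):            # decode each layer's weights
            weights.append(self.decode(h[i])[: out_d * in_d].view(out_d, in_d))
        return weights

# Toy 3-layer chain (32 -> 64 -> 64 -> 10), all weights predicted in one forward pass.
predictor = GraphParamPredictor()
shapes = [(64, 32), (64, 64), (10, 64)]
weights = predictor(torch.randn(3, 32), edges=[(0, 1), (1, 2)], shapes=shapes)
print([tuple(w.shape) for w in weights])   # [(64, 32), (64, 64), (10, 64)]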