Prügel-Bennett, Adam
Rethinking Deep Thinking: Stable Learning of Algorithms using Lipschitz Constraints
Bear, Jay, Prügel-Bennett, Adam, Hare, Jonathon
Iterative algorithms solve problems by taking steps until a solution is reached. Models in the form of Deep Thinking (DT) networks have been demonstrated to learn iterative algorithms in a way that can scale to different-sized problems at inference time using recurrent computation and convolutions. However, they are often unstable during training, and have no guarantees of convergence or termination at the solution. This paper addresses the problem of instability by analyzing the growth in intermediate representations, allowing us to build models, referred to as Deep Thinking with Lipschitz Constraints (DT-L), with far fewer parameters that provide more reliable solutions. Additionally, our DT-L formulation guarantees that the learned iterative procedure converges to a unique solution at inference time. We demonstrate that DT-L robustly learns algorithms which extrapolate to harder problems than those in the training set, and we benchmark on the traveling salesperson problem to evaluate the capabilities of the modified system on an NP-hard problem where DT fails to learn.
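The convergence guarantee rests on making the recurrent update a contraction mapping. A minimal sketch of that idea (not the authors' implementation; it assumes the Lipschitz constraint is enforced via spectral normalization together with a contraction factor below one) might look like:

```python
# Sketch: a recurrent "thinking" block whose Lipschitz constant is bounded
# so that repeated application contracts towards a unique fixed point
# (Banach fixed-point theorem). Hypothetical, not the paper's architecture.
import torch
import torch.nn as nn
from torch.nn.utils.parametrizations import spectral_norm

class LipschitzThinkingBlock(nn.Module):
    def __init__(self, channels: int, scale: float = 0.9):
        super().__init__()
        # spectral_norm (approximately) bounds each convolution's operator
        # norm at 1; `scale` < 1 then makes the whole map contractive in h.
        self.conv1 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.conv2 = spectral_norm(nn.Conv2d(channels, channels, 3, padding=1))
        self.scale = scale  # hypothetical contraction factor

    def forward(self, h, x):
        # x is the problem encoding, re-injected at every step as in DT nets.
        return self.scale * torch.relu(self.conv2(torch.relu(self.conv1(h)))) + x

def iterate_to_fixed_point(block, x, steps=50, tol=1e-4):
    h = torch.zeros_like(x)
    for _ in range(steps):  # run more steps at test time for harder problems
        h_next = block(h, x)
        if (h_next - h).abs().max() < tol:  # converged to the fixed point
            return h_next
        h = h_next
    return h
```

Because ReLU is 1-Lipschitz and each normalized convolution has norm at most 1, the update in h has Lipschitz constant at most `scale`, so the iteration converges to a unique fixed point for any starting state.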
Semantic Segmentation by Semantic Proportions
Aysel, Halil Ibrahim, Cai, Xiaohao, Prügel-Bennett, Adam
Semantic segmentation is a critical task in computer vision that aims to identify and classify individual pixels in an image, with numerous applications such as autonomous driving and medical image analysis. However, semantic segmentation can be highly challenging, particularly because it requires large amounts of annotated data. Annotating images is a time-consuming and costly process, often requiring expert knowledge and significant effort. In this paper, we propose a novel approach to semantic segmentation that eliminates the need for ground-truth segmentation maps. Instead, our approach requires only rough information about the proportions of the individual semantic classes, which we shorten to semantic proportions. This greatly simplifies the data annotation process and thus significantly reduces annotation time and cost, making the approach more feasible for large-scale applications. Moreover, it opens up new possibilities for semantic segmentation tasks where obtaining full ground-truth segmentation maps may not be feasible or practical. Extensive experimental results demonstrate that our approach can achieve performance comparable to, and sometimes even better than, the benchmark method that relies on ground-truth segmentation maps. Utilising the semantic proportions suggested in this work offers a promising direction for future research in the field of semantic segmentation.
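The training signal described above can be realised with a simple loss: average the per-pixel class distribution over the whole image and compare it with the target proportions. The sketch below is one plausible instantiation under that assumption, not necessarily the paper's exact loss:

```python
# Sketch: supervise a segmentation network with per-image class proportions
# only. The cross-entropy between proportion distributions is an assumption;
# an L1 or L2 distance would be an equally plausible choice.
import torch
import torch.nn.functional as F

def proportion_loss(logits, target_props):
    """logits: (B, C, H, W) raw network outputs.
    target_props: (B, C) rough class proportions, rows summing to 1."""
    probs = F.softmax(logits, dim=1)       # per-pixel class distribution
    pred_props = probs.mean(dim=(2, 3))    # average over all pixels
    return -(target_props * torch.log(pred_props + 1e-8)).sum(dim=1).mean()
```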
Deep Set Prediction Networks
Zhang, Yan, Hare, Jonathon, Prügel-Bennett, Adam
We study the problem of predicting a set from a feature vector with a deep neural network. Existing approaches ignore the set structure of the problem and suffer from discontinuity issues as a result. We propose a general model for predicting sets that properly respects the structure of sets and avoids this problem. With a single feature vector as input, we show that our model is able to auto-encode point sets, predict the bounding boxes of the set of objects in an image, and predict the attributes of those objects.
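One way to respect set structure when decoding, in the spirit of this abstract, is to predict the set by gradient descent on the set itself, minimising the distance between its encoding and the target feature vector. A minimal hedged sketch follows; details such as initialisation, losses, and learning rates will differ from the paper:

```python
# Sketch: decode a set from a feature vector by inner gradient descent on
# the predicted set, using a permutation-invariant encoder.
import torch

def predict_set(encoder, target_vec, set_size, dim, inner_steps=10, lr=1.0):
    """encoder: permutation-invariant module mapping (N, dim) -> (latent,)."""
    pred = torch.randn(set_size, dim, requires_grad=True)  # initial guess
    for _ in range(inner_steps):
        loss = ((encoder(pred) - target_vec) ** 2).sum()
        (grad,) = torch.autograd.grad(loss, pred, create_graph=True)
        pred = pred - lr * grad  # gradient step on the set, not the weights
    return pred  # differentiable w.r.t. encoder parameters and target_vec
```

Because `create_graph=True` keeps the inner optimisation in the autograd graph, the whole decoding procedure remains trainable end-to-end.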
FSPool: Learning Set Representations with Featurewise Sort Pooling
Zhang, Yan, Hare, Jonathon, Prügel-Bennett, Adam
We introduce a pooling method for sets of feature vectors based on sorting features across elements of the set. This allows a deep neural network for sets to learn more flexible representations. We also demonstrate how FSPool can be used to construct a permutation-equivariant auto-encoder. On a toy dataset of polygons and a set version of MNIST, we show that such an auto-encoder produces considerably better reconstructions. Used in set classification, FSPool significantly improves accuracy and convergence speed on the set versions of MNIST and CLEVR.
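For a fixed set size, featurewise sort pooling can be sketched in a few lines: sort each feature independently across the elements of the set, then take a learned weighted sum over the sorted positions. The paper additionally handles variable set sizes with a piecewise-linear weight function, which this sketch omits:

```python
# Sketch: fixed-size featurewise sort pooling. Sorting across set elements
# makes the output invariant to the order of the input elements.
import torch
import torch.nn as nn

class FSPoolFixed(nn.Module):
    def __init__(self, dim: int, set_size: int):
        super().__init__()
        # One learned weight per (feature, sorted position).
        self.weight = nn.Parameter(torch.randn(dim, set_size))

    def forward(self, x):
        # x: (batch, set_size, dim). Sort each feature independently
        # across the elements of the set.
        sorted_x, _ = x.transpose(1, 2).sort(dim=2, descending=True)  # (B, dim, N)
        return (sorted_x * self.weight).sum(dim=2)  # (B, dim)
```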
Learning Representations of Sets through Optimized Permutations
Zhang, Yan, Hare, Jonathon, Prügel-Bennett, Adam
Representations of sets are challenging to learn because operations on sets should be permutation-invariant. To this end, we propose a Permutation-Optimisation module that learns how to permute a set end-to-end. The permuted set can be further processed to learn a permutation-invariant representation of that set, avoiding a bottleneck in traditional set models. We demonstrate our model's ability to learn permutations and set representations with either explicit or implicit supervision on four datasets, on which we achieve state-of-the-art results: number sorting, image mosaics, classification from image mosaics, and visual question answering.
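A heavily simplified sketch of optimising a permutation end-to-end is given below. Assumptions not from the abstract: a hypothetical pairwise-cost network, costs only between adjacent positions, and a row-wise softmax relaxation of the permutation rather than a doubly-stochastic one:

```python
# Sketch: a small network scores ordered pairs, and gradient descent on a
# softmax-relaxed permutation matrix minimises the total ordering cost.
import torch
import torch.nn as nn

class PairwiseCost(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x):
        # x: (N, dim) -> (N, N) cost of placing element i before element j.
        n = x.size(0)
        pairs = torch.cat([x.unsqueeze(1).expand(n, n, -1),
                           x.unsqueeze(0).expand(n, n, -1)], dim=-1)
        return self.net(pairs).squeeze(-1)

def optimise_permutation(cost_fn, x, steps=20, lr=0.5):
    n = x.size(0)
    logits = torch.zeros(n, n, requires_grad=True)  # relaxed permutation
    cost = cost_fn(x)                               # (N, N) pairwise costs
    for _ in range(steps):
        perm = logits.softmax(dim=1)                # each row sums to one
        # Expected cost of placing row p's element before row p+1's element.
        total = torch.einsum('pi,pj,ij->', perm[:-1], perm[1:], cost)
        (grad,) = torch.autograd.grad(total, logits, create_graph=True)
        logits = logits - lr * grad
    return logits.softmax(dim=1) @ x                # softly permuted set
```

Keeping the inner optimisation inside the autograd graph lets the pairwise-cost network be trained from whatever loss is applied to the permuted set.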
Non-Linear Statistical Analysis and Self-Organizing Hebbian Networks
Shapiro, Jonathan L., Prügel-Bennett, Adam
Linear neurons learning under an unsupervised Hebbian rule can learn to perform a linear statistical analysis of the input data. This was first shown by Oja (1982), who proposed a learning rule which finds the first principal component of the variance matrix of the input data. Based on this model, Oja (1989), Sanger (1989), and many others have devised numerous neural networks which find many components of this matrix. These networks perform principal component analysis (PCA), a well-known method of statistical analysis.
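Oja's (1982) rule itself is compact enough to demonstrate numerically; the following sketch shows a single linear neuron whose weight vector converges to the first principal component of the data:

```python
# Sketch: Oja's rule on synthetic 2-D data whose leading principal
# component lies along the (1, 1) direction.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                    # linear neuron output
    w += eta * y * (x - y * w)   # Hebbian term plus implicit weight decay
w /= np.linalg.norm(w)
print(w)  # approximately +/-(0.707, 0.707), the leading eigenvector
```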