Zhai, Shuangfei, Cheng, Yu, Zhang, Zhongfei (Mark), Lu, Weining
In this paper, we propose doubly convolutional neural networks (DCNNs), which significantly improve the performance of CNNs by further exploiting the observation that learned convolutional filters tend to be translated versions of each other. Instead of allocating a set of convolutional filters that are learned independently, a DCNN maintains groups of filters where filters within each group are translated versions of each other. Practically, a DCNN can be easily implemented by a two-step convolution procedure, which is supported by most modern deep learning libraries. We perform extensive experiments on three image classification benchmarks: CIFAR-10, CIFAR-100 and ImageNet, and show that DCNNs consistently outperform other competing architectures. We have also verified that replacing a convolutional layer with a doubly convolutional layer at any depth of a CNN can improve its performance.
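As a rough illustration of the two-step procedure, the sketch below (in PyTorch; the meta-filter size, the expansion via patch extraction, and all names are our own assumptions, not the authors' reference implementation) expands each learned meta filter into a group of translated k x k filters and then applies one ordinary convolution:

```python
import torch
import torch.nn.functional as F

def doubly_conv2d(x, meta_filters, k):
    """Sketch of a doubly convolutional layer.

    x:            input, shape (N, C_in, H, W)
    meta_filters: learnable tensor, shape (C_meta, C_in, z, z) with z > k
    k:            effective filter size; every k x k patch of a meta filter
                  acts as one filter, so all filters extracted from the same
                  meta filter are translated versions of each other.
    """
    c_meta, c_in, z, _ = meta_filters.shape
    # Step 1: extract all k x k patches from each z x z meta filter.
    patches = meta_filters.unfold(2, k, 1).unfold(3, k, 1)
    # (C_meta, C_in, z-k+1, z-k+1, k, k) -> flat bank of small filters
    patches = patches.permute(0, 2, 3, 1, 4, 5).reshape(-1, c_in, k, k)
    # Step 2: an ordinary convolution with the expanded filter bank.
    return F.conv2d(x, patches, padding=k // 2)

# toy usage: 4 meta filters of size 5x5, effective filters of size 3x3
x = torch.randn(2, 3, 32, 32)
meta = torch.randn(4, 3, 5, 5, requires_grad=True)
y = doubly_conv2d(x, meta, k=3)   # 4 * 3 * 3 = 36 output channels
```

Because every filter in a group is a shifted crop of the same meta filter, the groups share parameters exactly as the abstract describes, and gradients flow back to the meta filters through both steps.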
Pal, Soumyasundar, Regol, Florence, Coates, Mark
Graph convolutional neural networks (GCNNs) have numerous applications in graph-based learning tasks. Although these techniques obtain impressive results, they often fall short in accounting for the uncertainty associated with the underlying graph structure. In the recently proposed Bayesian GCNN (BGCN) framework, this issue is tackled by viewing the observed graph as a sample from a parametric random graph model and targeting joint inference of the graph and the GCNN weights. In this paper, we introduce an alternative generative model for graphs based on copying nodes and incorporate it within the BGCN framework. Our approach has the benefit of using the information provided by the node features and training labels in the graph topology inference. Experiments show that the proposed algorithm compares favorably to the state of the art on benchmark node classification tasks.
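The node-copying idea lends itself to a short sketch. The NumPy sampler below is a hedged illustration under our own assumptions about the copying probability and the symmetrization step; in the actual BGCN framework the labels would come from a base classifier, which is how node features and training labels inform the topology inference:

```python
import numpy as np

def sample_graph_node_copying(adj, labels, eps=0.5, rng=None):
    """Hedged sketch of a node-copying graph sampler.

    adj:    (n, n) binary adjacency matrix of the observed graph
    labels: length-n array of (predicted) class labels
    eps:    probability that a node keeps its own edges instead of copying

    For each node i, with probability 1 - eps, copy the edge list of a
    uniformly chosen node with the same label. The exact probabilities
    and the symmetrization are illustrative assumptions, not the paper's
    precise model.
    """
    rng = rng or np.random.default_rng()
    n = adj.shape[0]
    new_adj = adj.copy()
    for i in range(n):
        if rng.random() > eps:                    # copy another node's edges
            peers = np.flatnonzero(labels == labels[i])
            new_adj[i] = adj[rng.choice(peers)]
    return np.maximum(new_adj, new_adj.T)         # keep the graph undirected

# toy usage on a small random graph with two classes
adj = (np.random.rand(6, 6) < 0.3).astype(int)
np.fill_diagonal(adj, 0)
adj = np.maximum(adj, adj.T)
labels = np.array([0, 0, 0, 1, 1, 1])
print(sample_graph_node_copying(adj, labels))
```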
Wendler, Chris, Püschel, Markus, Alistarh, Dan
We present a novel class of convolutional neural networks (CNNs) for set functions, i.e., data indexed with the powerset of a finite set. The convolutions are derived as linear, shift-equivariant functions for various notions of shifts on set functions. The framework is fundamentally different from graph convolutions based on the Laplacian, as it provides not one but several basic shifts, one for each element in the ground set. Prototypical experiments with several set function classification tasks on synthetic datasets and on datasets derived from real-world hypergraphs demonstrate the potential of our new powerset CNNs.
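To make the per-element shifts concrete, here is a minimal sketch. It assumes one particular notion of shift, (T_q s)(A) = s(A \ {q}), stores a set function on a ground set of n elements as a vector of length 2^n indexed by bitmasks, and builds a linear, shift-equivariant filter as a weighted sum of shifts; the paper derives several such shifts, so this is illustrative rather than the paper's exact construction:

```python
import numpy as np

def shift(s, q):
    """One notion of shift on a set function: (T_q s)(A) = s(A \\ {q}).

    s is a length-2**n vector; entry A (a bitmask) stores s(A).
    This choice of shift is an assumption for illustration; the framework
    provides one basic shift per element of the ground set.
    """
    idx = np.arange(len(s))
    return s[idx & ~(1 << q)]      # look up s at the subset with q removed

def powerset_conv(s, weights):
    """A linear filter: w_0 * s + sum_q w_q * (T_q s).

    Since the shifts commute with each other, any such weighted sum of
    shifts is itself shift-equivariant.
    """
    out = weights[0] * s
    for q, w in enumerate(weights[1:]):
        out = out + w * shift(s, q)
    return out

# toy usage on a ground set of n = 3 elements (2**3 = 8 subsets)
s = np.random.randn(8)
w = np.random.randn(4)             # one weight for identity, one per element
print(powerset_conv(s, w))
```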
"The implementation part is very good and up-too the mark. The explanation step by step process is very good." (February 2018). "course done very well; everything is explained in detail; really satisfied!!!" (February 2018). "Difficult topics are simply illustrated and therefore easy to understand." (January 2018).
Machine learning algorithms have been available since the 1990s, but only much more recently have they come into use in the physical sciences. While these algorithms have already proven useful in uncovering new properties of materials and in simplifying experimental protocols, their usage in liquid crystal research is still limited. This is surprising because optical imaging techniques are often applied in this line of research, and it is precisely with images that machine learning algorithms have achieved major breakthroughs in recent years. Here we use convolutional neural networks to probe several properties of liquid crystals directly from their optical images, without manual feature engineering. By optimizing simple architectures, we find that convolutional neural networks can predict physical properties of liquid crystals with exceptional accuracy.
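As a hedged sketch of what a "simple architecture" for this task might look like (the input resolution, depth, and scalar regression target below are our own assumptions, not the study's actual setup), a small CNN mapping optical texture images to a physical property could be:

```python
import torch
import torch.nn as nn

# Illustrative small CNN: optical texture image in, one regressed
# physical property (e.g. a scalar order parameter) out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 1),              # single predicted property
)

x = torch.randn(8, 3, 64, 64)      # a batch of 64x64 RGB texture images
print(model(x).shape)              # torch.Size([8, 1])
```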