Doubly Convolutional Neural Networks

Neural Information Processing Systems

In this paper, we propose doubly convolutional neural networks (DCNNs), which significantly improve the performance of CNNs by further exploiting the observation that many learned convolutional filters are translated versions of each other. Instead of allocating a set of convolutional filters that are learned independently, a DCNN maintains groups of filters where filters within each group are translated versions of each other. Practically, a DCNN can be easily implemented by a two-step convolution procedure, which is supported by most modern deep learning libraries. We perform extensive experiments on three image classification benchmarks: CIFAR-10, CIFAR-100, and ImageNet, and show that DCNNs consistently outperform other competing architectures. We have also verified that replacing a convolutional layer with a doubly convolutional layer at any depth of a CNN can improve its performance.
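The abstract only sketches the layer at a high level. As a rough illustration of the "translated filters within a group" idea, the following PyTorch snippet derives each group's effective filters as spatial crops of a single larger meta filter and then applies an ordinary convolution; the class name, parameter shapes, and the unfold-based cropping are assumptions made here for illustration, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoublyConvLayer(nn.Module):
    """Illustrative sketch: each group shares one larger 'meta' filter,
    and the group's filters are translated spatial crops of that meta filter."""

    def __init__(self, in_ch, num_meta, meta_size=5, eff_size=3):
        super().__init__()
        self.eff_size = eff_size
        # One learnable meta filter per group, larger than the effective filter.
        self.meta = nn.Parameter(
            torch.randn(num_meta, in_ch, meta_size, meta_size) * 0.01)

    def forward(self, x):
        k, c, _, _ = self.meta.shape
        s = self.eff_size
        # Step 1: extract every translated s x s crop from each meta filter.
        # unfold over the spatial dims yields shape (k, c*s*s, n_crops).
        crops = F.unfold(self.meta, kernel_size=s)
        n = crops.shape[-1]
        crops = crops.permute(0, 2, 1).reshape(k * n, c, s, s)
        # Step 2: an ordinary convolution with the expanded filter bank.
        return F.conv2d(x, crops, padding=s // 2)
```

With `meta_size=5` and `eff_size=3`, each meta filter yields nine translated 3x3 filters, so `num_meta=16` produces 144 output channels while only 16 meta filters are learned.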


Advanced Topics in Deep Convolutional Neural Networks

#artificialintelligence

Throughout this article, I will discuss some of the more complex aspects of convolutional neural networks and how they relate to specific tasks such as object detection and facial recognition. This article is a natural extension to my article titled: Simple Introductions to Neural Networks. I recommend looking at this before tackling the rest of this article if you are not well-versed in the idea and function of convolutional neural networks. Due to the excessive length of the original article, I have decided to leave out several topics related to object detection and facial recognition systems, as well as some of the more esoteric network architectures and practices currently being trialed in the research literature. I will likely discuss these in a future article related more specifically to the application of deep learning for computer vision.


Bayesian Graph Convolutional Neural Networks Using Non-Parametric Graph Learning

arXiv.org Machine Learning

Graph convolutional neural networks (GCNN) have been successfully applied to many different graph based learning tasks including node and graph classification, matrix completion, and learning of node embeddings. Despite their impressive performance, the techniques have a limited capability to incorporate the uncertainty in the underlying graph structure. In order to address this issue, a Bayesian GCNN (BGCN) framework was recently proposed. In this framework, the observed graph is considered to be a random realization from a parametric random graph model, and joint Bayesian inference of the graph and the GCNN weights is performed. In this paper, we propose a non-parametric generative model for graphs and incorporate it within the BGCN framework. In addition to the observed graph, our approach effectively uses the node features and training labels in the posterior inference of graphs and attains superior or comparable performance in benchmark node classification tasks.
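The inference procedure is only summarized above; one common way to read the BGCN framework is as Monte Carlo averaging of GCN predictions over graphs drawn from a posterior, and over stochastic weight samples. The sketch below assumes a user-supplied `gcn` model and a `sample_graph` callable; both names and the sampling counts are hypothetical placeholders, not the API of the paper's code.

```python
import torch

def bgcn_predict(gcn, node_feats, sample_graph,
                 num_graph_samples=10, num_weight_samples=5):
    """Monte Carlo prediction: average GCN outputs over graphs sampled
    from the graph posterior and over stochastic weight samples
    (e.g. obtained by keeping dropout active at test time)."""
    preds = []
    for _ in range(num_graph_samples):
        adj = sample_graph()                   # one graph drawn from the posterior
        for _ in range(num_weight_samples):
            logits = gcn(node_feats, adj)      # hypothetical GCN forward pass
            preds.append(torch.softmax(logits, dim=-1))
    return torch.stack(preds).mean(dim=0)      # averaged class probabilities
```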



Convolutional Neural Networks Coursera

@machinelearnbot

About this course: This course will teach you how to build convolutional neural networks and apply them to image data. Thanks to deep learning, computer vision is working far better than just two years ago, and this is enabling numerous exciting applications ranging from safe autonomous driving, to accurate face recognition, to automatic reading of radiology images. You will: - Understand how to build a convolutional neural network, including recent variations such as residual networks. This is the fourth course of the Deep Learning Specialization.