Bilinear pooling for fine-grained visual recognition and multi-modal deep learning

#artificialintelligence

Bilinear pooling originated in the computer vision community as a method for fine-grained visual recognition. Or, in less fancy language, a method that looks for specific details when recognizing and classifying visual objects. At a high level, the approach works as follows. Given an input image I, we feed I into two different deep convolutional neural networks A and B, see Figure 1. After several pooling layers and non-linear transformations, each of A and B outputs a feature map. The two networks may be pretrained to solve different tasks.
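
A minimal NumPy sketch of the pooling step, assuming the two networks have already produced feature maps reshaped to (H*W, C_A) and (H*W, C_B); the signed square root and L2 normalization are post-processing steps commonly paired with bilinear pooling, not necessarily the exact pipeline of the article, and the shapes below are made-up toy values.

```python
import numpy as np

def bilinear_pool(feat_a: np.ndarray, feat_b: np.ndarray) -> np.ndarray:
    """Combine two feature maps by bilinear pooling.

    feat_a: (H*W, C_A) feature map from network A
    feat_b: (H*W, C_B) feature map from network B
    Returns a (C_A * C_B,) image descriptor.
    """
    # Outer product of the two local descriptors at every spatial
    # location, summed (pooled) over all locations.
    pooled = feat_a.T @ feat_b              # (C_A, C_B)
    x = pooled.reshape(-1)                  # flatten to a single vector

    # Common post-processing: signed square root and L2 normalization.
    x = np.sign(x) * np.sqrt(np.abs(x))
    return x / (np.linalg.norm(x) + 1e-12)

# Toy example: two 7x7 feature maps with 512 and 256 channels.
fa = np.random.rand(49, 512)
fb = np.random.rand(49, 256)
descriptor = bilinear_pool(fa, fb)          # shape (131072,)
```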


How Transformers work in deep learning and NLP: an intuitive introduction

#artificialintelligence

The famous 2017 paper "Attention is all you need" changed the way we think about attention. Nonetheless, 2020 was definitely the year of transformers! From natural language, they have now moved into computer vision tasks. How did we go from attention to self-attention? Why does the transformer work so damn well? What are the critical components for its success? Read on and find out! In my opinion, transformers are not so hard to grasp.
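
A short sketch of the scaled dot-product self-attention at the core of the transformer, in plain NumPy; the single head, the toy shapes, and the random projection matrices are illustrative assumptions, not the article's exact setup.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention.

    X: (seq_len, d_model) input token embeddings
    W_q, W_k, W_v: projection matrices of shape (d_model, d_k)
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v        # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # (seq_len, seq_len)
    # Softmax over the keys turns the scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                          # weighted sum of values

# Toy example: 5 tokens, model width 16, head width 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))
W_q, W_k, W_v = [rng.normal(size=(16, 8)) for _ in range(3)]
out = self_attention(X, W_q, W_k, W_v)          # shape (5, 8)
```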


A Derivative-free Method for Quantum Perceptron Training in Multi-layered Neural Networks

arXiv.org Artificial Intelligence

In this paper, we present a gradient-free approach for training multi-layered neural networks based upon quantum perceptrons. Here, we start from the classical perceptron and the elementary operations on quantum bits, i.e. qubits, so as to formulate the problem in terms of quantum perceptrons. We then make use of measurable operators to define the states of the network in a manner consistent with a Markov process. This yields a Dirac-Von Neumann formulation consistent with quantum mechanics. Moreover, the formulation presented here has the advantage of a computational cost that is independent of the number of layers in the network. This, paired with the natural efficiency of quantum computing, can imply a significant improvement in efficiency, particularly for deep networks. Last but not least, the developments here are quite general in nature, since the approach presented here can also be used for quantum-inspired neural networks implemented on conventional computers.


Skip-Gram Neural Network for Graphs

#artificialintelligence

This article will go into more detail on node embeddings. If you lack intuition and understanding of node embeddings, check out the previous article, which covered the intuition behind them. But if you are ready, read on. In the level-1 explanation of node embeddings, I motivated why we need embeddings: to obtain a vector representation of graph data. Embeddings should capture the graph topology, the relationships between nodes, and any further information.
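
A DeepWalk-style sketch of how skip-gram is typically applied to graphs, assuming networkx and gensim are available: random walks over nodes play the role of sentences, and node ids play the role of words. This is the general technique, not necessarily the exact pipeline the article describes.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, num_walks=10, walk_length=20):
    """Generate truncated random walks; each walk is a 'sentence' of node ids."""
    walks = []
    for _ in range(num_walks):
        for start in G.nodes():
            walk = [start]
            while len(walk) < walk_length:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(random.choice(nbrs))
            walks.append([str(n) for n in walk])  # gensim expects tokens
    return walks

# Toy graph: Zachary's karate club.
G = nx.karate_club_graph()
walks = random_walks(G)

# Skip-gram (sg=1) over the walks, exactly as word2vec treats sentences.
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0, epochs=5)
embedding_of_node_0 = model.wv["0"]   # 64-dimensional node embedding
```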


The Math behind Neural Networks: Part 1 - The Rosenblatt Perceptron

#artificialintelligence

This is the definition of a linear combination: a sum of terms, each multiplied by a constant. In our case the terms are the features and the constants are the weights.
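
A minimal sketch of that linear combination and the classic Rosenblatt update rule; the toy AND-style dataset and the learning-rate value are made up for illustration.

```python
import numpy as np

def predict(x, w, b):
    """Linear combination of features and weights, thresholded at zero."""
    return 1 if np.dot(w, x) + b > 0 else 0

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Rosenblatt update: nudge the weights by the error times the input."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            error = y_i - predict(x_i, w, b)
            w += lr * error * x_i
            b += lr * error
    return w, b

# Toy example: a linearly separable AND-like rule on two features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([predict(x, w, b) for x in X])   # -> [0, 0, 0, 1]
```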