Perceptrons


Artificial Neural Network:

#artificialintelligence

Deep learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain, called artificial neural networks. In a nutshell, a neuron's axon is the stem that carries its processed output. Perceptrons work in a similar way to neurons: they take inputs, perform transformations, and produce results. Inside the perceptron, we typically compute a step function.
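
To make that concrete, here is a minimal sketch of a perceptron in Python: it takes inputs, computes a weighted sum, and applies a step function. The weights, bias, and inputs below are hand-picked for illustration only.

```python
import numpy as np

def step_perceptron(x, w, b):
    """Weighted sum of the inputs followed by a unit step activation."""
    z = np.dot(w, x) + b        # aggregate the inputs
    return 1 if z >= 0 else 0   # step function: fire (1) or stay silent (0)

# Hand-picked illustrative values: two inputs, two weights, one bias
x = np.array([1.0, 0.5])
w = np.array([0.6, -0.4])
b = -0.1
print(step_perceptron(x, w, b))  # 1, since 0.6*1.0 - 0.4*0.5 - 0.1 = 0.3 >= 0
```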


Day 48: 60 days of Data Science and Machine Learning Series

#artificialintelligence

A multilayer perceptron is a class of feedforward artificial neural network composed of an input layer that receives the signal, an output layer that makes a decision or prediction about the input, and an arbitrary number of hidden layers in between that perform the computation.
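
A rough sketch of that layered structure, with made-up layer sizes and random weights (one hidden layer standing in for the "arbitrary number"), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

# Illustrative sizes: 4 input features, one hidden layer of 8 units, 3 outputs
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # hidden layer -> output layer

def mlp_forward(x):
    h = relu(W1 @ x + b1)   # hidden layer does the computation
    return W2 @ h + b2      # output layer produces one score per class

x = rng.normal(size=4)      # the input signal
print(mlp_forward(x))       # three raw scores; the largest is the prediction
```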


Machine Learning for Zombies

#artificialintelligence

Multilayer Perceptrons (MLPs) are complex algorithms that take a lot of compute power and a *ton* of data to produce satisfactory results in reasonable timeframes. Let's start with what they're not: neural networks, despite the name and every blog post and intro-to-machine-learning textbook you've probably read up till now, are not analogs of the human brain. There are some *very* surface-level similarities, but the actual functionality of a neural network has almost nothing in common with the neurons that make up the approximately three pounds of meat that sits between your ears and defines everything you do and how you experience reality. Just like a lot of other machine learning algorithms, they use the formula "label equals weight times data value plus offset" (or y = w*x + b) to define where they draw their lines/hyperplanes for making predictions. In machine learning, that slope is called a weight.
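
As a toy illustration of that y = w*x + b decision line (the weight, offset, and inputs below are made up), here is a one-dimensional linear rule:

```python
# Hand-picked weight (the slope) and offset for a 1-D toy example
w, b = 2.0, -1.0

def predict(x):
    y = w * x + b                # where the line sits for this input
    return 1 if y >= 0 else 0    # which side of the line the point falls on

print([predict(x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)])  # [0, 0, 1, 1, 1]
```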


Perceptron: Building blocks of today's deep neural networks

#artificialintelligence

The perceptron was the earliest and simplest mathematical model proposed for a biological neuron. Let us understand the components of the perceptron one by one. A data point represented in the form of a vector is called a feature vector. Suppose we need to predict whether a person has heart disease given his daily physical activity, age, diet, educational qualification, and salary. It is very clear that educational qualification and salary have very little to do with whether a person is suffering from heart disease or not.
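
For instance, under that heart-disease setup, each person could be encoded as a feature vector; the values and ordering below are invented for illustration:

```python
import numpy as np

# Hypothetical encoding per person:
# [daily activity (hours), age (years), diet score, education (years), salary]
person = np.array([1.5, 52.0, 3.0, 16.0, 60000.0])

# Keeping only the medically relevant entries drops education and salary
relevant = person[:3]
print(relevant)  # [ 1.5 52.   3. ]
```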


How to Use PointNet for 3D Computer Vision in an Industrial Context

#artificialintelligence

The architecture of the network is surprisingly simple! It takes an unordered set of N 3D points as input and applies some transformations to make sure that the order of the points does not matter. Those points are then passed through a series of MLPs (multi-layer perceptrons) and max pooling layers to produce global features at the end. For classification, these features are fed to another MLP to get K outputs representing K classes.
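
A minimal sketch of the order-invariance idea, with a single shared layer standing in for the series of MLPs and made-up sizes: because the same weights are applied to every point and max pooling is symmetric, shuffling the point set leaves the global features unchanged. This is not the full PointNet, which uses several MLP layers plus learned input transforms.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared layer standing in for the series of MLPs: it lifts every 3-D
# point to a 64-D feature independently (sizes are illustrative)
W, b = rng.normal(size=(64, 3)), np.zeros(64)

def global_features(points):
    """points: (N, 3) array holding an unordered set of N 3-D points."""
    per_point = np.maximum(0.0, points @ W.T + b)  # same weights for each point
    return per_point.max(axis=0)                   # max pooling: symmetric, so order-free

points = rng.normal(size=(128, 3))
shuffled = rng.permutation(points)                 # reorder the point set
print(np.allclose(global_features(points), global_features(shuffled)))  # True
```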


An Image Patch is a Wave: Phase-Aware Vision MLP

#artificialintelligence

Different from traditional convolutional neural networks (CNNs) and vision transformers, the multilayer perceptron (MLP) is a new kind of vision model with an extremely simple architecture that is stacked only from fully-connected layers. The input image of a vision MLP is usually split into multiple tokens (patches), while existing MLP models directly aggregate them with fixed weights, neglecting the varying semantic information of tokens from different images. To dynamically aggregate tokens, we propose representing each token as a wave function with two parts, amplitude and phase. The amplitude is the original feature, and the phase term is a complex value that changes according to the semantic content of the input images. Introducing the phase term can dynamically modulate the relationship between tokens and the fixed weights in an MLP. Based on this wave-like token representation, we establish a novel Wave-MLP architecture for vision tasks. Extensive experiments demonstrate that the proposed Wave-MLP is superior to state-of-the-art MLP architectures on various vision tasks such as image classification, object detection, and semantic segmentation.
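
As a loose sketch of the abstract's idea, not the paper's actual blocks: a token can be turned into a wave by pairing its amplitude (the original feature) with a content-dependent phase before the fixed-weight aggregation. The shapes and the phase-estimation matrix below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def wave_aggregate(tokens, weights, phase_proj):
    """tokens: (n, d) amplitudes; weights: (n,) fixed aggregation weights;
    phase_proj: (d, d) hypothetical matrix estimating a phase from content."""
    phase = tokens @ phase_proj            # data-dependent phase per token
    waves = tokens * np.exp(1j * phase)    # token as wave: amplitude * e^(i*phase)
    return (weights @ waves).real          # fixed weights, modulated by the phases

n, d = 16, 32                              # made-up counts: 16 tokens, 32 channels
tokens = rng.normal(size=(n, d))
weights = rng.normal(size=n)
phase_proj = 0.1 * rng.normal(size=(d, d))
print(wave_aggregate(tokens, weights, phase_proj).shape)  # (32,)
```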


A Tutorial on Spiking Neural Networks for Beginners

#artificialintelligence

Despite being quite effective in a variety of tasks across industries, deep learning is constantly evolving, proposing new neural network (NN) architectures, deep learning (DL) tasks, and even brand-new concepts for the next generation of NNs, such as the Spiking Neural Network (SNN). The SNN was developed by researchers at Heidelberg University and the University of Bern as a fast and energy-efficient technique for computing using spiking neuromorphic substrates. In this article, we will mostly discuss the Spiking Neural Network as a variant of the neural network, and we will also try to understand how it differs from traditional neural networks. Below is a list of the important topics to be tackled.
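
As a taste of what makes SNNs different, here is a toy leaky integrate-and-fire neuron, a classic spiking building block (the parameters are illustrative): it integrates input current, leaks over time, and communicates only through discrete spikes rather than continuous activations.

```python
def lif_neuron(currents, tau=10.0, threshold=1.0):
    """Toy leaky integrate-and-fire neuron: the membrane potential leaks,
    integrates the input current, and emits a spike (then resets) at threshold."""
    v, spikes = 0.0, []
    for i in currents:
        v += -v / tau + i       # leak plus input integration (time step = 1)
        if v >= threshold:
            spikes.append(1)    # spike!
            v = 0.0             # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant input current makes the neuron fire at a regular rate
print(lif_neuron([0.3] * 20))   # spikes at every fourth step
```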


Do You Want To Know How Perceptron Algorithm works Internally

#artificialintelligence

If you are new to the field of Deep Learning, I encourage you to read my previous article, Understand Deep Learning with Simple Exercise - PyTorch, which will give you a precise understanding of how neural networks work in general. This article is a deeper dive into the internal working of the neuron/perceptron, which is the building block of Deep Learning neural network architectures. A human brain has billions of neurons. Neurons are interconnected nerve cells in the human brain that are involved in processing and transmitting chemical and electrical signals. Dendrites are branches that receive information from other neurons.


Singular learning of deep multilayer perceptrons for EEG-based emotion recognition

#artificialintelligence

Human emotion recognition is an important issue in human-computer interaction, and electroencephalography (EEG) has been widely applied to emotion recognition due to its high reliability. In recent years, methods based on deep learning technology have reached state-of-the-art performance in EEG-based emotion recognition. However, there exist singularities in the parameter space of deep neural networks, which may dramatically slow down the training process. It is well worth investigating the specific influence of singularities when applying deep neural networks to EEG-based emotion recognition. In this paper, we focus on this problem and analyse the singular learning dynamics of deep multilayer perceptrons theoretically and numerically. The results can help us design better algorithms to overcome the serious influence of singularities in deep neural networks for EEG-based emotion recognition.