

PyTorch for Beginners - Building Neural Networks

#artificialintelligence

Deep learning and neural networks are big buzzwords of the decade. Neural networks are based on the elements of the biological nervous system and try to imitate its behavior. They are composed of small processing units – neurons – and weighted connections between them. The weight of a connection simulates the number of neurotransmitters transferred between neurons. Mathematically, we can define a neural network as a sorted triple (N, C, w), where N is the set of neurons, C = {(i, j) | i, j ∈ N} is the set of connections between neurons i and j, and w(i, j) is the weight of the connection between neurons i and j.
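
As a hedged illustration of this (N, C, w) formalism – not code from the article – a fully connected PyTorch layer stores the weight w(i, j) of every input-to-output connection as one entry of a weight matrix:

```python
import torch
import torch.nn as nn

# Hypothetical example: a layer with 3 input and 2 output neurons.
# N is the set of these 5 neurons, C contains every input-to-output
# pair (i, j), and w(i, j) is one entry of the layer's weight matrix.
layer = nn.Linear(in_features=3, out_features=2)
print(layer.weight.shape)  # torch.Size([2, 3]) -- one weight per connection

x = torch.randn(1, 3)      # a signal on the 3 input neurons
y = layer(x)               # weighted sums (plus bias) at the 2 output neurons
print(y.shape)             # torch.Size([1, 2])
```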


11 Essential Neural Network Architectures, Visualized & Explained

#artificialintelligence

The perceptron is the most basic of all neural networks and a fundamental building block of more complex architectures. It simply connects an input cell to an output cell. The feed-forward network is a collection of perceptrons with three fundamental types of layers – input layers, hidden layers, and output layers. At each connection, the signal from the previous layer is multiplied by a weight, added to a bias, and passed through an activation function. Feed-forward networks use backpropagation to iteratively update the parameters until the network achieves the desired performance.
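
A minimal sketch of such a feed-forward network trained with backpropagation, assuming PyTorch; the layer sizes and toy data below are illustrative choices, not from the article:

```python
import torch
import torch.nn as nn

# Illustrative sizes: 4 input features, 8 hidden units, 2 output classes.
model = nn.Sequential(
    nn.Linear(4, 8),  # input layer -> hidden layer (weights and biases)
    nn.ReLU(),        # activation function
    nn.Linear(8, 2),  # hidden layer -> output layer
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.randn(16, 4)          # a toy batch of 16 examples
y = torch.randint(0, 2, (16,))  # toy class labels

for _ in range(100):     # iteratively update the parameters
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # backpropagation computes the gradients
    optimizer.step()     # gradient step adjusts weights and biases
```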


Are Better Machine Training Approaches Ahead?

#artificialintelligence

We live in a time of unparalleled use of machine learning (ML), but it relies on one approach to training the models that are implemented in artificial neural networks (ANNs) – so named because they're not neuromorphic. But other training approaches, some more biomimetic than others, are being developed. The big question remains whether any of them will become commercially viable. ML training is frequently divided into two camps – supervised and unsupervised. As it turns out, the divisions are not so clear-cut, and the variety of approaches defies neat pigeonholing.


Artificial Intelligence - Hype or The Real Deal - Investment Cache

#artificialintelligence

Artificial intelligence (AI) has gained unprecedented attention within the hedge fund community in recent years. However, AI is not some new kid on the block. In fact, its roots go as far back as the 1940s, when Warren McCulloch and Walter Pitts first introduced the neural network. Today, it finds widespread use in applications from image and speech recognition to natural language processing, robotics, and more. Similarly, using AI techniques for trading or investment is not a new idea either, but earlier attempts were not successful in any big way. So why is everyone so excited about using AI for investments again? From my own lens, I attribute this to a confluence of technology advances and changing market dynamics. Our technology has improved by leaps and bounds over the years. My first encounter with a PC was an 8-bit Apple machine with a monochrome CRT monitor running on MS DOS. Then came machines with more powerful Intel processors.


DeepDream: How Alexander Mordvintsev Excavated the Computer's Hidden Layers

#artificialintelligence

Early in the morning on May 18, 2015, Alexander Mordvintsev made an amazing discovery. He had been having trouble sleeping. Just after midnight, he awoke with a start. He was sure he'd heard a noise in the Zurich apartment where he lived with his wife and child. Afraid that he hadn't locked the door to the terrace, he ran out of the bedroom to check if there was an intruder. All was fine; the terrace door was locked, and there was no intruder.


Is The Brain An Effective Artificial Intelligence Model?

#artificialintelligence

In the summer of 2009, the Israeli neuroscientist Henry Markram walked onto the TED stage in Oxford, England, and introduced an immodest proposal: he and his colleagues would develop a full human brain simulation inside a supercomputer within a decade. They had already been mapping the cells in the neocortex, the supposed seat of thought and perception, for years. "It's a bit like going and cataloging one piece of rainforest," explained Markram. "How many trees does it have? What features do the trees have?" His team would now build a virtual silicon rainforest from which they hoped artificial intelligence would evolve organically.


Artificial intelligence chips benefit from a good night's sleep

#artificialintelligence

Artificial neurons are already far more human-like than traditional computers, and now it turns out they might also need sleep to function at their peak. And it's not just a matter of turning them off every now and then – a new study shows that the neurons benefit from exposure to slow-wave signals like those in a sleeping biological brain. Neural networks are made up of artificial neurons, which all signal to each other like real neurons do in a real brain. Commonly used connections are reinforced over time, effectively allowing neural networks to learn on their own. Unlike the sequential processing of traditional computers, neural networks can process different streams of information in parallel, which makes them powerful tools for things like image and speech recognition.


Neuromorphic Computing: The Next-Level Artificial Intelligence

#artificialintelligence

Can AI function like a human brain? Armed with neuromorphic computing, researchers are now ready to show the world that this dream can change it for the better. As we unearth its benefits, the success of our machine learning and AI quest seems to depend to a great extent on the success of neuromorphic computing. The technologies of the future, such as autonomous vehicles and robots, will need access to and use of enormous amounts of data and information in real time. Today, to a limited extent, this is done by machine learning and AI that depend on supercomputer power.


Deep Learning in Simple Words

#artificialintelligence

There are two main steps in the conventional machine learning (ML) pipeline: feature extraction and classification. The goal of feature extraction is to represent data in a numerical space, also called the feature space. The goal of classification is to determine the group that each data point belongs to. If we can design a classifier that separates the data into classes within the feature space, then feature extraction and classification are working as needed. However, the story is not always that simple.
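
As a hedged sketch of this two-step pipeline, the scikit-learn example below uses PCA as the feature extractor and logistic regression as the classifier; the dataset and both model choices are illustrative assumptions, not from the article:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: feature extraction maps raw pixels into a 20-dimensional feature space.
# Step 2: a classifier separates the classes within that space.
pipeline = make_pipeline(
    PCA(n_components=20),
    LogisticRegression(max_iter=1000),
)
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))  # accuracy on held-out data
```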


How to tell if your model is over-fit using unlabeled data

#artificialintelligence

In many settings, unlabeled data is plentiful (think images, text, etc.), while sufficient labeled data for supervised learning can be harder to obtain. In these situations, it can be difficult to determine how well the model will generalize. Most methods for assessing model performance rely on labeled data alone; without enough labeled data these can be unreliable. Is there anything more we can learn about the model's ability to generalize from unlabeled data? In this article, I demonstrate how unlabeled data can frequently be used to bound test loss.
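
The article's actual bound is not reproduced here; as one hedged illustration of the general idea, a model that is far more confident on its training data than on plentiful unlabeled data is often over-fit. The function names and the entropy-gap heuristic below are my own illustrative assumptions, not the article's method:

```python
import numpy as np

def mean_entropy(probs: np.ndarray) -> float:
    """Average predictive entropy; low entropy means confident predictions."""
    eps = 1e-12
    return float(-(probs * np.log(probs + eps)).sum(axis=1).mean())

def confidence_gap(probs_train: np.ndarray, probs_unlabeled: np.ndarray) -> float:
    # Hypothetical heuristic, not the article's bound: an over-fit model is
    # often much more confident on the data it memorized than on fresh
    # unlabeled data, so a large positive gap is a warning sign.
    return mean_entropy(probs_unlabeled) - mean_entropy(probs_train)
```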