AI Technique Copies Human Memory To Minimize Data Storage Burden

#artificialintelligence

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a "major, long-standing obstacle to increasing AI capabilities" by drawing inspiration from a human brain memory mechanism known as "replay." First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, "surprisingly efficiently," from "catastrophic forgetting": upon learning new lessons, the networks forget what they had learned before. Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting. They write, "One solution would be to store previously encountered examples and revisit them when learning something new. Although such 'replay' or 'rehearsal' solves catastrophic forgetting," they add, "constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly."
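The blurb stays at a high level, so here is a minimal PyTorch sketch of the naive buffer-based replay the quote describes, i.e., the baseline the authors call inefficient; their paper's contribution is a more economical, brain-inspired variant. The model shape, buffer sizes, and data loaders are illustrative assumptions, not the paper's setup.

```python
# Naive experience replay: mix a small buffer of stored old-task examples
# into each new-task batch so earlier tasks are not overwritten.
import random
import torch
import torch.nn as nn

# Toy classifier; flattened 28x28 inputs are an illustrative assumption.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
replay_buffer = []  # (input, label) tensor pairs kept from earlier tasks

def train_task(loader, keep_per_task=200, replay_batch=32):
    """Train on one task while replaying stored examples from previous tasks.
    Assumes `loader` yields (float features, integer labels) batches."""
    for x, y in loader:
        if replay_buffer:  # interleave old examples with the new-task batch
            old = random.sample(replay_buffer, min(replay_batch, len(replay_buffer)))
            x = torch.cat([x, torch.stack([xo for xo, _ in old])])
            y = torch.cat([y, torch.stack([yo for _, yo in old])])
        optimizer.zero_grad()
        loss_fn(model(x), y).backward()
        optimizer.step()
    # remember a small random sample of this task for future replay
    for xi, yi in random.sample(list(loader.dataset),
                                min(keep_per_task, len(loader.dataset))):
        replay_buffer.append((xi, torch.as_tensor(yi)))
```

The quoted inefficiency is visible here: the buffer grows with every task, which is exactly the storage burden the paper's method aims to avoid.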


Selecting the Right Bounding Box Using Non-Max Suppression (with implementation)

#artificialintelligence

There are various algorithms for object detection, and they have evolved considerably over the last decade. To improve performance and capture objects of different shapes and sizes, these algorithms predict multiple bounding boxes with different sizes and aspect ratios. But of all the candidate boxes, how is the most appropriate and accurate one selected? This is where Non-Max Suppression (NMS) comes into the picture: it keeps the highest-scoring box for each object and discards the overlapping duplicates, as sketched below.
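To make the idea concrete, here is a minimal NumPy sketch of greedy NMS; the [x1, y1, x2, y2] box format and the 0.5 IoU threshold are illustrative assumptions, not any particular detector's settings.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-Union between one box and an array of boxes,
    all in [x1, y1, x2, y2] format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Greedy NMS: repeatedly keep the highest-scoring remaining box
    and drop every box that overlaps it too much."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        if order.size == 1:
            break
        rest = order[1:]
        overlaps = iou(boxes[best], boxes[rest])
        order = rest[overlaps <= iou_threshold]  # discard duplicates
    return keep

if __name__ == "__main__":
    boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], float)
    scores = np.array([0.9, 0.8, 0.7])
    print(non_max_suppression(boxes, scores))  # [0, 2]: overlapping box 1 is dropped
```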


Can Neural Networks Show Imagination? DeepMind Thinks they Can

#artificialintelligence

I recently started a new newsletter focused on AI education. TheSequence is a no-BS (no hype, no news, etc.) AI-focused newsletter that takes five minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Creating agents that resemble the cognitive abilities of the human brain has been one of the most elusive goals of the artificial intelligence (AI) space. Recently, I've been spending time on a couple of scenarios that relate to imagination in deep learning systems, which reminded me of a very influential paper Alphabet's subsidiary DeepMind published last year on this subject.


How we remember could help AI be less forgetful

#artificialintelligence

A brain mechanism referred to as "replay" inspired researchers at Baylor College of Medicine to develop a new method that protects deep neural networks, used in artificial intelligence (AI), from forgetting what they have previously learned. The study, in the current edition of Nature Communications, has implications for both neuroscience and deep learning. Deep neural networks are the main drivers behind the recent fast progress in AI. These networks are extremely good at learning to solve individual tasks. However, when they are trained on a new task, they typically lose the ability to solve previously learned tasks entirely.


Maximizing the Impact of ML in Production - insideBIGDATA

#artificialintelligence

In this special guest feature, Emily Kruger, Vice President of Product at Kaskada, discusses a topic on the minds of many data scientists and data engineers these days: maximizing the impact of machine learning in production environments. Kaskada is a machine learning company that enables collaboration among data scientists and data engineers, developing a machine learning studio for feature engineering on event-based data. Its platform lets data scientists unify the feature engineering process across their organizations, with a single place for feature creation and feature serving. Machine learning is changing the way the world does business.


The brain's memory abilities inspire AI experts in making neural networks less 'forgetful'

#artificialintelligence

Artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a "major, long-standing obstacle to increasing AI capabilities" by drawing inspiration from a human brain memory mechanism known as "replay." First author and postdoctoral researcher Gido van de Ven and principal investigator Andreas Tolias at Baylor, with Hava Siegelmann at UMass Amherst, write in Nature Communications that they have developed a new method to protect deep neural networks, "surprisingly efficiently," from "catastrophic forgetting": upon learning new lessons, the networks forget what they had learned before. Siegelmann and colleagues point out that deep neural networks are the main drivers behind recent AI advances, but progress is held back by this forgetting. They write, "One solution would be to store previously encountered examples and revisit them when learning something new. Although such 'replay' or 'rehearsal' solves catastrophic forgetting," they add, "constantly retraining on all previously learned tasks is highly inefficient and the amount of data that would have to be stored becomes unmanageable quickly."


Explaining Your Machine Learning Models with SHAP and LIME!

#artificialintelligence

Welcome back to another data science quick tip. This particular post is interesting for me not only because it's the most complex subject we've tackled to date, but also because it's one I just spent the last few hours learning myself. And of course, what better way to learn than to figure out how to teach it to the masses? Before getting into it, I've uploaded all the work shown in this post to a single Jupyter notebook. You can find it on my personal GitHub if you'd like to follow along more closely.
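As a taste of what the notebook covers, here is a minimal sketch of both libraries applied to a toy scikit-learn model; the dataset, model, and parameters are stand-ins of my choosing, not the post's actual notebook.

```python
# Explaining a random-forest classifier with SHAP (global, game-theoretic
# attributions) and LIME (local linear surrogate around one prediction).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: TreeExplainer computes exact Shapley-value attributions for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])  # per-feature contributions per sample

# LIME: fits an interpretable linear model in the neighborhood of one instance.
lime_explainer = LimeTabularExplainer(
    X, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
explanation = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features driving this single prediction
```

The design difference is worth noting: SHAP attributions are consistent across the whole dataset, while LIME explains one prediction at a time, which is why the two are often shown side by side.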


AI-Tables in MariaDB

#artificialintelligence

Let's set up the required configuration and start MindsDB. If you are following this tutorial with your own data, you can skip to the next section. For this example we will use the Audi car price dataset from the 100k used-cars scraped data. The dataset contains the price, transmission, mileage, fuel type, road tax, miles per gallon (mpg), and engine size of used cars in the UK, and the idea is to predict the price from the other features. The first thing we need to do is to create the table.
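Here is a minimal sketch of that first step in Python, assuming a local MariaDB instance reachable with the mysql-connector-python client; the connection credentials, table name, and audi.csv filename are illustrative assumptions, with columns following the dataset description above.

```python
# Create and populate the used-cars table in MariaDB.
import csv
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="127.0.0.1", user="root", password="password", database="mindsdb_demo")
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS used_cars (
        model        VARCHAR(64),
        year         INT,
        price        INT,
        transmission VARCHAR(32),
        mileage      INT,
        fuel_type    VARCHAR(32),
        tax          INT,
        mpg          FLOAT,
        engine_size  FLOAT
    )
""")

# Load the scraped Audi subset; "audi.csv" is an assumed local filename.
with open("audi.csv", newline="") as f:
    rows = [tuple(r) for r in csv.reader(f)][1:]  # skip the header row
cur.executemany(
    "INSERT INTO used_cars VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)", rows)
conn.commit()
```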


Bringing Industry 4.0 to You - ProcessMiner

#artificialintelligence

For most of us, the formative years of our lives were shaped by the books we read. A generation acquired its knowledge leafing through dry books with stenciled alphabets. From learning our ABCs to Shakespeare's sonnets, the Industrial Revolution ensured that its role in shaping world history, through its machinery, chemicals, steam, and more, was kept alive and documented via its own production of printing machines. The Industrial Revolution paved the way for the life we know today, far surpassing the era of simplistic conveyor belts and heavy manual surveillance. Production lines now employ machinery and humans alike.


Digital Analytics

#artificialintelligence

Introduced by Paul Smolensky in 1986 (as the "Harmonium") and later popularized by Geoffrey Hinton, the Restricted Boltzmann Machine (RBM), which falls under the category of unsupervised learning algorithms, is a network of symmetrically connected, neuron-like units that make stochastic decisions. This deep learning algorithm became very popular after the Netflix Prize competition, where the RBM was used as a collaborative filtering technique to predict user ratings for movies and beat most of its competition. It is useful for regression, classification, dimensionality reduction, feature learning, topic modelling, and collaborative filtering. Restricted Boltzmann Machines are stochastic, two-layer neural networks belonging to a category of energy-based models that can automatically detect inherent patterns in the data by reconstructing the input. They have two layers: visible and hidden.
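For a concrete feel of the visible/hidden structure, here is a minimal sketch using scikit-learn's BernoulliRBM, which trains by persistent contrastive divergence; the digits dataset and hyperparameters are illustrative choices, not from the article.

```python
# A two-layer RBM: 64 visible units (pixels) connected to 100 hidden units.
from sklearn.datasets import load_digits
from sklearn.neural_network import BernoulliRBM

X = load_digits().data
X = (X - X.min()) / (X.max() - X.min())  # scale to [0, 1] for Bernoulli units

rbm = BernoulliRBM(n_components=100, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)  # unsupervised: learns weights by reconstructing the input

hidden = rbm.transform(X)  # hidden-unit activation probabilities (learned features)
print(hidden.shape)        # (1797, 100): a feature vector for each digit image
```

The learned hidden activations can then feed a downstream classifier or another RBM layer, which is the feature-learning use case the blurb mentions.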