Which is your favorite Machine Learning Algorithm?

#artificialintelligence

Developed back in the 1950s by Rosenblatt and colleagues, this extremely simple algorithm can be viewed as the foundation for some of the most successful classifiers today, including support vector machines and logistic regression solved using stochastic gradient descent. The convergence proof for the Perceptron algorithm is one of the most elegant pieces of math I've seen in ML. Most useful: Boosting, especially boosted decision trees. This intuitive approach lets you build highly accurate ML models by combining many simple ones. Boosting is one of the most practical methods in ML: it's widely used in industry, can handle a wide variety of data types, and can be implemented at scale.
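For concreteness, here is a minimal sketch of Rosenblatt's perceptron update rule in Python with NumPy; the function name and toy interface are illustrative, not taken from the article.

```python
import numpy as np

def perceptron_train(X, y, epochs=100, lr=1.0):
    """Train a perceptron on features X of shape (n, d) and labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            # Classic Rosenblatt rule: update only on misclassified points.
            if yi * (np.dot(w, xi) + b) <= 0:
                w += lr * yi * xi
                b += lr * yi
                errors += 1
        if errors == 0:
            break  # converged: the data is linearly separable
    return w, b
```

The convergence proof mentioned above guarantees that if the data is linearly separable with margin, this loop terminates after a bounded number of updates. For the boosting side, off-the-shelf implementations such as scikit-learn's GradientBoostingClassifier provide the "combine many simple models" recipe without any custom code.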


Reinforcement Learning and DQN, learning to play from pixels - Ruben Fiszel's website

#artificialintelligence

My two-month summer internship at Skymind (the company behind the open source deep learning library DL4J) is coming to an end, and this post summarizes what I have been working on: building a deep reinforcement learning library for DL4J: … (drum roll) … RL4J! This post begins with an introduction to reinforcement learning, followed by a detailed explanation of DQN (Deep Q-Network) for pixel inputs, and concludes with an RL4J example. I will assume some familiarity with neural networks on the reader's part. But first, let's talk about the core concepts of reinforcement learning. A "simple aspect of science" may be defined as one which, through good fortune, I happen to understand. Reinforcement learning is an exciting area of machine learning. At its core, it is the learning of an efficient strategy in a given environment. Informally, this is very similar to Pavlovian conditioning: you assign a reward for a given behavior, and over time the agent learns to reproduce that behavior in order to receive more rewards. It is an iterative trial-and-error process. Formally, the environment is defined as a Markov Decision Process (MDP). The Markov property means the process is memoryless: the next state depends only on the current state and action, not on the full history.
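To make the reward-driven trial-and-error loop concrete, here is a minimal sketch of tabular Q-learning, the precursor of DQN (which replaces the table with a neural network). The env interface (reset/step returning next state, reward, and a done flag) mirrors the common Gym convention and is an assumption here, not RL4J's actual API.

```python
import numpy as np

def q_learning(env, n_states, n_actions, episodes=500,
               alpha=0.1, gamma=0.99, epsilon=0.1):
    """Learn a state-action value table Q for a small discrete MDP."""
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s = env.reset()          # assumed: returns an integer state id
        done = False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if np.random.rand() < epsilon:
                a = np.random.randint(n_actions)
            else:
                a = int(np.argmax(Q[s]))
            s_next, r, done = env.step(a)  # assumed Gym-style step
            # Bellman update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
            target = r + gamma * np.max(Q[s_next]) * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
    return Q
```

DQN applies the same Bellman update, but since pixel inputs make the state space far too large for a table, a convolutional network approximates Q(s, a) instead.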


What Is Computer Vision?

#artificialintelligence

An introduction to the field of computer vision and image recognition, and how Deep Learning is fueling the fire of this hot topic. Computer Vision is an interdisciplinary field that focuses on how machines can emulate the way the human brain and eyes work together to visually process the world. Research on Computer Vision can be traced back to the 1960s. The 1970s laid the foundations for many of the computer vision algorithms used today, such as the shift from basic digital image processing toward understanding the 3D structure of scenes, edge extraction, and line labelling. Over the years, computer vision has developed many applications: 3D imaging, facial recognition, autonomous driving, drone technology, and medical diagnostics, to name a few.
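Edge extraction, one of those 1970s foundations, is still easy to demonstrate from scratch. Below is a minimal sketch using Sobel kernels in Python with NumPy; the function name and naive convolution loop are illustrative, not from the article.

```python
import numpy as np

def sobel_edges(img):
    """Approximate the gradient magnitude of a 2D grayscale image."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                 # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(kx * patch)
            gy = np.sum(ky * patch)
            out[i, j] = np.hypot(gx, gy)  # edge strength at this pixel
    return out
```

Hand-designed filters like these were the building blocks of classical vision pipelines; deep learning's contribution is largely that convolutional networks now learn such filters directly from data.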


Apple Reportedly Acquires AI-Based Facial Recognition Startup RealFace

#artificialintelligence

In a bid to boost its prospects in the world of artificial intelligence (AI), Apple has acquired Israel-based startup RealFace, which develops deep learning-based face authentication technology, media reported on Monday. According to Calcalist, the acquisition is worth roughly $2 million (roughly Rs. 13.39 crores). A Times of Israel report cites Startup Nation Central to note that RealFace had raised $1 million in funding thus far, employed about 10 people, and had sales operations in China, Europe, Israel, and the US. Set up in 2014 by Adi Eckhouse Barzilai and Aviv Mader, RealFace has developed facial recognition software that offers users a smart biometric login, aiming to make passwords redundant when accessing mobile devices or PCs. The firm's first app, Pickeez, selects the best photos from the user's album.


Deep Learning in Neural Networks: An Overview

#artificialintelligence

What a wonderful treasure trove this paper is! Schmidhuber provides all the background you need to gain an overview of deep learning (as of 2014) and how we got there through the preceding decades. As he puts it: "Starting from recent DL results, I tried to trace back the origins of relevant ideas through the past half century and beyond." The main part of the paper runs to 35 pages, and then there are 53 pages of references. Now, I know that many of you think I read a lot of papers – just over 200 a year on this blog – but if I did nothing but review these key works in the development of deep learning, it would take me about 4.5 years to get through them at that rate! And when I'd finished, I'd still be about 6 years behind the then-current state of the art!