If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Artificial intelligence has been shaking up the marketing world for the last few years, helping to automate menial, repetitive tasks, inform better creative decisions, and project revenue. But many of the tools available for that latter category, namely, predictive analytics, have a less-than-stellar accuracy rate. South Africa-based Xineoh has developed a platform for predicting customer behavior with AI. The company claims its technology is more accurate than any other solution available. Xineoh was founded in 2010 and raised $2 million from U.S. and Canadian investors in June 2017.
I was introduced to deep learning as part of Udacity's Self-Driving Car Nanodegree (SDCND) program, which I started in November. Some of our projects required building deep neural networks for tasks such as classifying traffic signs, and using behavior cloning to train a car to drive autonomously in a simulator. However, my MacBook Pro was not up to the task of training neural networks. I used AWS for my first deep learning project, and while it's a viable option, I decided to build my own machine for greater flexibility and convenience. I also plan to do a lot of deep learning outside of the nanodegree, such as Kaggle competitions and side projects, so it should end up being the more cost-effective option as well.
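To give a feel for the kind of classifier these projects involve: real traffic-sign work uses deep convolutional networks on image datasets, but the core training loop can be illustrated with a toy NumPy-only softmax classifier on synthetic data. Everything here (the class count, feature size, and data) is invented for illustration, not taken from the nanodegree projects.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_features = 3, 16          # e.g. 3 sign types, flattened 4x4 "images"

# Synthetic data: each class is a noisy cluster around its own template.
templates = rng.normal(size=(n_classes, n_features))
labels = rng.integers(0, n_classes, size=300)
X = templates[labels] + 0.1 * rng.normal(size=(300, n_features))

W = np.zeros((n_features, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

for _ in range(200):                       # plain gradient descent on cross-entropy
    probs = softmax(X @ W + b)
    onehot = np.eye(n_classes)[labels]
    W -= 0.5 * (X.T @ (probs - onehot) / len(X))
    b -= 0.5 * (probs - onehot).mean(axis=0)

accuracy = (softmax(X @ W + b).argmax(axis=1) == labels).mean()
```

A real sign classifier replaces the linear layer with stacked convolutions and trains on photographs, but the loop (forward pass, gradient, update) is the same shape — and it is exactly this loop, repeated over large datasets, that makes GPU hardware worth having.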
Call it an a-MAZE-ing development: A U.K.-based team of researchers has developed an artificial intelligence program that can learn to take shortcuts through a labyrinth to reach its goal. In the process, the program developed structures akin to those in the human brain. The emergence of these computational "grid cells," described in the journal Nature, could help scientists design better navigational software for future robots and even offer a new window through which to probe the mysteries of the mammalian brain. In recent years, AI researchers have developed and fine-tuned deep-learning networks -- layered programs that can come up with novel solutions to achieve their assigned goal. For example, a deep-learning network can be told which face to identify in a series of different photos, and through several rounds of training, can tune its algorithms until it spots the right face virtually every time.
There are lots of small best practices, ranging from simple tricks like weight initialization and regularization to slightly more complex techniques like cyclic learning rates, that can make training and debugging neural nets easier and more efficient. This inspired me to write this series of blogs, where I will cover as many nuances as I can to make implementing deep learning simpler for you. While writing this blog, the assumption is that you have a basic idea of how neural networks are trained. An understanding of weights, biases, hidden layers, activations and activation functions will make the content clearer. I would recommend this course if you wish to build a basic foundation of deep learning.
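As a taste of one of these tricks: a triangular cyclic learning rate schedule raises the learning rate linearly from a base value to a maximum over a fixed number of steps, then lowers it back, and repeats. The parameter values below are illustrative defaults, not recommendations for any particular model.

```python
def cyclic_lr(step, base_lr=1e-4, max_lr=1e-2, step_size=100):
    """Triangular cyclic schedule: the rate climbs linearly from base_lr
    to max_lr over `step_size` steps, falls back over the next
    `step_size` steps, and then the cycle repeats."""
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)     # position in cycle, in [0, 1]
    return base_lr + (max_lr - base_lr) * (1 - x)

# The rate starts at base_lr, peaks at max_lr mid-cycle, and returns:
lrs = [cyclic_lr(s) for s in range(201)]
```

In practice you would call `cyclic_lr(step)` once per training batch and feed the result to your optimizer; the periodic high-rate phases help the model escape sharp minima and saddle points.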
Developed by Google, TensorFlow is one of the most advanced Python frameworks for machine learning, implementing deep learning algorithms. It is a second-generation, open-source system whose predecessor was Google's earlier, less flexible DistBelief. Despite its steep learning curve, the product provides developers with a wide range of capabilities (alternatively, you can choose from other popular machine learning frameworks, such as Theano). In particular, TensorFlow offers tools for analyzing input data both against reference datasets and against data previously gathered from interactions with particular users (supervised learning). Although TensorFlow's final results are characterized by a high level of precision, developers usually prefer not to use it in scientific software development.
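A defining design choice in TensorFlow (especially its original, graph-mode API) is that computation is described as a dataflow graph that is built first and executed later. The sketch below mimics that deferred-execution idea in plain Python; it is a conceptual illustration, not the TensorFlow API itself.

```python
class Node:
    """A graph node: a function plus the input nodes it depends on."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs

    def run(self):
        # Recursively evaluate inputs, then apply this node's function —
        # analogous to executing a TensorFlow graph in a session.
        return self.fn(*(i.run() for i in self.inputs))

def const(v):
    return Node(lambda: v)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Build the graph first — nothing is computed at this point ...
graph = mul(add(const(2), const(3)), const(4))
# ... then execute it on demand.
result = graph.run()   # (2 + 3) * 4
```

Separating graph construction from execution is what lets a framework like TensorFlow optimize the graph and dispatch it to GPUs or distributed workers before any numbers flow through it.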
This 7-week course is designed for anyone with at least a year of coding experience and some memory of high-school math. You will start with step one--learning how to get a GPU server online suitable for deep learning--and go all the way through to creating state-of-the-art, highly practical models for computer vision, natural language processing, and recommendation systems. There are around 20 hours of lessons, and you should plan to spend around 10 hours a week for 7 weeks to complete the material. The course is based on lessons recorded during the first certificate course at The Data Institute at USF.
AI knows when you're going to die. But unlike in sci-fi movies, that information could end up saving lives. A new paper published in Nature suggests that feeding electronic health record data to a deep learning model could substantially improve the accuracy of projected outcomes. In trials using data from two U.S. hospitals, researchers were able to show that these algorithms could predict not only a patient's length of stay and time of discharge, but also their time of death. The neural network described in the study uses an immense amount of data, such as a patient's vitals and medical history, to make its predictions.
Nvidia wants to help you make awesome slow-mo videos. Nvidia wants to help you turn any old video shot on your phone into a blur-free, slow-motion masterpiece, and it's using artificial intelligence to do it. Researchers at the company have developed a new deep-learning system that can convert standard video into slow-mo by adding additional frames after the video has been shot. The result would turn a video shot at 30 frames per second (standard for a phone shooting a regular video) into something that appears as a 240 fps video. To create the slow-mo AI, researchers used 11,000 videos of sports and everyday activities shot at 240 fps to train a neural network, which learned to predict the extra frames.
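Nvidia's system uses a trained network to synthesize the in-between frames; as a point of comparison, the simplest non-learned baseline is to linearly blend each pair of neighboring frames. The NumPy sketch below inserts 7 blended frames between each original pair (the 30 fps → 240 fps ratio from the article); it is a naive baseline for intuition, not Nvidia's method, and real interpolation networks predict motion rather than cross-fading.

```python
import numpy as np

def interpolate(frames, factor=8):
    """Insert factor-1 linearly blended frames between each pair of
    originals, multiplying the frame rate by `factor`."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for k in range(factor):
            t = k / factor                 # blend weight from 0 toward 1
            out.append((1 - t) * a + t * b)
    out.append(frames[-1])                 # keep the final original frame
    return out

# Two tiny synthetic grayscale "frames": all-0.0 and all-8.0 pixels.
clip = [np.full((4, 4), 0.0), np.full((4, 4), 8.0)]
slow = interpolate(clip)                   # 9 frames stepping 0, 1, ..., 8
```

Cross-fading like this produces ghosting on fast motion, which is exactly why a network trained on real 240 fps footage, one that learns where pixels move between frames, gives far better results.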
For the past five years, the hottest thing in artificial intelligence has been a branch known as deep learning. The grandly named statistical technique, put simply, gives computers a way to learn by processing vast amounts of data. Thanks to deep learning, computers can easily identify faces and recognize spoken words, making other forms of humanlike intelligence suddenly seem within reach. Companies like Google, Facebook and Microsoft have poured money into deep learning. And the technology's perception and pattern-matching abilities are being applied to improve progress in fields such as drug discovery and self-driving cars.
This is a collection of some of my natural language processing (NLP) posts from the past year or so. They start from zero and progress accordingly, and are suitable for individuals looking to ease into NLP and pick up some of the basic ideas, before hopefully branching out further (see the final 2 resources listed below for more on that). Though not originally intended to be in any particular order, if you are inclined to read them all, they are best approached in the order they are presented. At the intersection of computational linguistics and artificial intelligence is where we find natural language processing. Very broadly, natural language processing (NLP) is a discipline which is interested in how human languages, and, to some extent, the humans who speak them, interact with technology.
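One of the very first "basic ideas" such a from-zero NLP sequence covers is tokenization and word counting. A minimal standard-library sketch (the sample sentence is invented for illustration):

```python
import re
from collections import Counter

def tokenize(text):
    """Lowercase the text and extract runs of letters/apostrophes."""
    return re.findall(r"[a-z']+", text.lower())

counts = Counter(tokenize("The cat sat on the mat. The mat was flat."))
most_common = counts.most_common(2)   # the two most frequent tokens
```

Almost everything downstream in NLP — bag-of-words models, TF-IDF, even the vocabularies behind word embeddings — starts from a tokenization step like this one.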