If you are looking for an answer to the question "What is Artificial Intelligence?" and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Even more concerning, researchers have shown that CNNs can misclassify completely random nonsense images, with very high confidence, as objects recognizable to humans, even though a human would clearly see that there was no meaningful image there at all. If a system's observations are intentionally tainted with noise designed to defeat the CNN's recognition, the system will be trained to draw incorrect conclusions about whether a malevolent intrusion is occurring. Adversarial machine learning is an emerging area of deep neural network (DNN) research. The current state of AI has advanced to general image, text, and speech recognition, and to tasks like steering a car or winning a game of chess.
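To make the idea of such targeted noise concrete, here is a minimal toy sketch. A simple logistic-regression "recognizer" stands in for the CNN, and a tiny perturbation aligned against the model's weights is enough to collapse a confident detection; all weights, dimensions, and the epsilon value are illustrative assumptions, not anything from the systems described above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recognizer": logistic regression with fixed weights (a stand-in
# for a trained CNN classifier).
w = rng.normal(size=100)
b = 0.0

def predict(x):
    """Model's probability that x contains the 'object' class."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# An input the model classifies with some confidence as the object.
x = 0.1 * w / np.linalg.norm(w)   # mildly aligned with the weights
p_clean = predict(x)

# Adversarial perturbation in the spirit of the fast gradient sign method:
# the gradient of the logit (w @ x + b) w.r.t. x is just w, so stepping
# each input against sign(w) pushes the logit down as fast as possible
# for a given per-pixel budget eps.
eps = 0.05
x_adv = x - eps * np.sign(w)
p_adv = predict(x_adv)

print(p_clean, p_adv)  # confidence collapses after the tiny perturbation
```

Even though `x_adv` differs from `x` by at most 0.05 per component, the model's confidence drops sharply, which is the mechanism behind the tainted-observation attacks described above.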
My first recollection of an effective deep learning system that used feedback loops was "Ladder Networks". In an architecture from Stanford called "Feedback Networks", the researchers explored a different kind of network that feeds back into itself and develops its internal representation incrementally. In even more recent research (March 2017), a team from UC Berkeley created astonishingly capable image-to-image translations using GANs and a novel kind of regularization. A major difficulty of training deep learning systems has been the lack of labeled data. So the next time you see some mind-boggling deep learning results, seek out the strange loops embedded in the method.
It's New Year's 2017, so time to make predictions. Portfolio diversification has never been my style, so I'll make just one. Generative Adversarial Networks -- GANs for short -- will be the next big thing in deep learning, and GANs will change the way we look at the world. Specifically, adversarial training will change how we think about teaching AIs complex tasks. In a sense, they are learning how to imitate an expert.
A superhero who was able to see two seconds into the future wouldn't be invincible, but she'd have a leg up on mere mortals. On Monday, the Massachusetts Institute of Technology announced its new artificial intelligence, and it's a prototype of such a being. Based on a photograph alone, it can predict what'll happen next, then spit out a one-and-a-half second video clip depicting that possible future. The breakthrough could yield smarter autonomous cars or security systems. MIT researchers trained the A.I. by feeding over two million videos into its two-pronged deep-learning system.
Starting this week, I'll be doing a new series called Deep Learning Research Review. Every couple weeks or so, I'll be summarizing and explaining research papers in specific subfields of deep learning. This week I'll begin with Generative Adversarial Networks. According to Yann LeCun, "adversarial training is the coolest thing since sliced bread". I'm inclined to believe so because I don't think sliced bread ever created this much buzz and excitement within the deep learning community.
In a blog post published Monday, for example, Facebook director of AI Yann LeCun and research engineer Soumith Chintala describe efforts at unsupervised machine learning through a technique called adversarial training. LeCun thinks such predictive abilities could enhance Facebook's ability to engage users, using the common sense the site has developed to essentially make educated guesses about them. Improved predictive capabilities could likewise help improve the Facebook M virtual assistant, which faces growing competition from Apple's Siri, Google's upcoming Google Assistant, Amazon's Alexa and Microsoft's Cortana. Yoshua Bengio, who was not involved in the Facebook AI research, addressed deep learning's progress in the June 2016 Scientific American article titled "Machines Who Learn."
A detailed article on a project to use artificial neural networks to build films, by training them on individual frames and then getting them to reconstruct missing bits. "In the past 12 months, interest in -- and the development of -- artificial neural networks for the generation of text, images and sound has exploded. In particular, methods for the generation of images have advanced remarkably in recent months. In November 2015, Radford et al. blew away the machine learning community with an approach of using a deep neural network to generate realistic images of bedrooms and faces using an adversarial training method, in which a generator network generates random samples and a discriminator network tries to determine which images are generated and which are real. Over time the generator becomes very good at producing realistic images that can fool the discriminator."
In a previous post, we looked at a generative algorithm that can produce images of digits at arbitrarily high resolutions while training on a set of low-resolution images, such as MNIST or CIFAR-10. This post explores several changes to the previous model to produce more interesting results. Specifically, we removed the pixel-by-pixel reconstruction loss in the Variational Autoencoder. The discriminator network used to detect fake images is replaced by a classifier network. The generator network used previously was a relatively large network consisting of 4 layers of 128 fully connected nodes, and we explore replacing it with a much deeper network of 96 layers with only 6 nodes in each layer.
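It may help to see how differently these two generator shapes spend their parameter budget. The post doesn't state the input and output sizes, so a 32-dimensional latent input and a 784-pixel (MNIST-sized) output are purely illustrative assumptions here; the point is that the deep, narrow net is far smaller despite having many more layers.

```python
# Rough parameter counts (weights + biases) for the two generator shapes.
# Latent dim 32 and a 784-pixel output are assumed for illustration only.

def mlp_params(sizes):
    """Total weights + biases of a fully connected net with these layer sizes."""
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

wide_shallow = [32] + [128] * 4 + [784]    # 4 hidden layers of 128 nodes
deep_narrow  = [32] + [6] * 96 + [784]     # 96 hidden layers of 6 nodes

print(mlp_params(wide_shallow))  # 154896
print(mlp_params(deep_narrow))   # 9676
```

Under these assumed sizes the 96-layer network has roughly 16x fewer parameters than the 4-layer one, since most of the wide network's weights sit in its 128x128 hidden layers.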
In the past 12 months, interest in -- and the development of -- artificial neural networks for the generation of text, images and sound has exploded. In particular, methods for the generation of images have advanced remarkably in recent months. In November 2015, Radford et al. blew away the machine learning community with an approach of using a deep neural network to generate realistic images of bedrooms and faces using an adversarial training method, in which a generator network generates random samples and a discriminator network tries to determine which images are generated and which are real. Over time the generator becomes very good at producing realistic images that can fool the discriminator. The adversarial method was first proposed by Goodfellow et al. in 2014, but until Radford et al.'s paper, it hadn't been possible to generate coherent and realistic natural images using neural nets.
The Generator is a deconvolutional network and the Discriminator is a convolutional neural network (CNN). A CNN encodes the hundreds of pixels of an image into a small-dimensional vector (z) that summarizes the image. Given an image, the Discriminator should output 1 if the image is real and 0 if it was produced by the Generator. In contrast, the Generator produces an image from z, which is drawn from a Gaussian distribution, and in doing so tries to capture the distribution of the real images.
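The adversarial loop described above can be sketched end to end on a toy problem. Instead of convolutional networks and images, this illustration uses 1-D data: a one-parameter generator (a learned shift applied to Gaussian noise z) and a logistic discriminator, with gradients derived by hand. All hyperparameters are illustrative assumptions, not values from any of the systems above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy GAN on 1-D data: real samples come from N(4, 1).
# Generator: g(z) = z + b, with z ~ N(0, 1); only the shift b is learned.
# Discriminator: D(x) = sigmoid(w * x + c), outputs P(x is real).

b = 0.0            # generator parameter
w, c = 0.0, 0.0    # discriminator parameters
lr, batch = 0.05, 64

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    fake = rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: ascend  mean log D(real) + mean log(1 - D(fake)).
    p_real = sigmoid(w * real + c)
    p_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    c += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend  mean log D(fake)  (non-saturating loss).
    fake = rng.normal(0.0, 1.0, batch) + b
    p_fake = sigmoid(w * fake + c)
    b += lr * np.mean((1 - p_fake) * w)   # d log D(g(z)) / db

print(b)  # the generator's shift drifts toward the real mean of 4
```

The same dynamic plays out here as with images: whenever the fake distribution sits below the real one, the discriminator's weight turns positive and its gradient pushes the generator's shift up, and vice versa, so the generator settles near the real data.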