If you are looking for an answer to the question What is Artificial Intelligence? and you only have a minute, then here's the definition the Association for the Advancement of Artificial Intelligence offers on its home page: "the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines."
However, if you are fortunate enough to have more than a minute, then please get ready to embark upon an exciting journey exploring AI (but beware, it could last a lifetime) …
Generative models create data similar to what they were trained on. Training these models is very hard. One approach is to use Generative Adversarial Networks (GANs); Facebook's Yann LeCun considers them "the most interesting idea in the last 10 years in ML." What are the differences between Discriminative and Generative models? A discriminative model learns a function that maps the input data (x) to some desired output class label (y). In probabilistic terms, it directly learns the conditional distribution P(y|x). A generative model tries to learn the joint probability of the input data and labels simultaneously, i.e. P(x, y).
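The distinction between P(y|x) and P(x, y) can be made concrete with a tiny discrete example. The numbers below are purely illustrative, and the product rule lets us recover the conditional a discriminative model would learn from the joint a generative model would learn:

```python
import numpy as np

# Hypothetical toy joint distribution P(x, y) over 3 input values and
# 2 class labels (all numbers here are made up for illustration).
p_xy = np.array([[0.10, 0.20],   # P(x=0, y=0), P(x=0, y=1)
                 [0.25, 0.05],   # P(x=1, y=0), P(x=1, y=1)
                 [0.15, 0.25]])  # P(x=2, y=0), P(x=2, y=1)
assert np.isclose(p_xy.sum(), 1.0)

# A discriminative model learns P(y | x) directly; here we derive it
# from the joint via the product rule: P(y | x) = P(x, y) / P(x).
p_x = p_xy.sum(axis=1, keepdims=True)   # marginal P(x)
p_y_given_x = p_xy / p_x                # conditional P(y | x), rows sum to 1

# A generative model learns the joint P(x, y) itself, which also lets it
# generate data: sample x ~ P(x), then y ~ P(y | x).
print(p_y_given_x)
```

The asymmetry runs one way: the joint determines the conditional, but a model that only learned P(y|x) cannot generate x's.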
Recent methods in artificial intelligence enable AI software to produce rich and creative digital artifacts such as text and images painted from scratch. One technique used in creating these artifacts is generative adversarial networks (GANs). Generative adversarial networks are a recent breakthrough in machine learning. Initially proposed by Ian Goodfellow and colleagues at the University of Montreal at NIPS 2014, the GAN approach enables the specification and training of rich probabilistic deep learning models using standard deep learning technology. Allowing for flexible probabilistic models is important in order to capture rich phenomena present in complex data.
Last week, the credit reporting agency Equifax announced that malicious hackers had leaked the personal information of 143 million people in its system. That's reason for concern, of course, but if a hacker wants to access your online data by simply guessing your password, you're probably toast in less than an hour. Now, there's more bad news: Scientists have harnessed the power of artificial intelligence (AI) to create a program that, combined with existing tools, figured out more than a quarter of the passwords from a set of more than 43 million LinkedIn profiles. Yet the researchers say the technology may also be used to beat baddies at their own game. The work could help average users and companies measure the strength of passwords, says Thomas Ristenpart, a computer scientist who studies computer security at Cornell Tech in New York City but was not involved with the study.
There is a prevalent but questionable assumption that Deep Learning is a form of probabilistic or statistical induction. We see this in DARPA's presentation of the three waves of AI. Statistical Learning -- where programmers create statistical models for specific problem domains and train them on big data. This is a broad category that includes Bayesian methods, template-based methods (e.g. SVMs), tree-based predictors, mathematical programming, and Deep Learning.
Generative Adversarial Networks (GANs) are one of the most promising recent developments in Deep Learning. The GAN, introduced by Ian Goodfellow in 2014, attacks the problem of unsupervised learning by training two deep networks, called the Generator and the Discriminator, that compete and cooperate with each other. In the course of training, both networks eventually learn how to perform their tasks. The GAN is almost always explained with the analogy of a counterfeiter (Generator) and the police (Discriminator). Initially, the counterfeiter will show the police fake money.
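The counterfeiter/police game can be sketched in a few lines of plain NumPy. This is a deliberately tiny stand-in, not a real GAN: the "Generator" g(z) = w_g·z + b_g is a single linear unit that reshapes Gaussian noise, the "Discriminator" D(x) = sigmoid(w_d·x + b_d) is a single logistic unit, the data is a 1-D Gaussian, and the gradient-ascent updates are written out by hand (non-saturating generator objective):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

w_g, b_g = 1.0, 0.0          # Generator parameters: g(z) = w_g*z + b_g
w_d, b_d = 0.0, 0.0          # Discriminator parameters: D(x) = sigmoid(w_d*x + b_d)
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(4.0, 0.5, batch)   # "genuine money": samples from N(4, 0.5)
    z = rng.normal(0.0, 1.0, batch)      # noise fed to the counterfeiter
    fake = w_g * z + b_g                 # counterfeit samples

    # Police step: ascend log D(real) + log(1 - D(fake))
    s_r = sigmoid(w_d * real + b_d)
    s_f = sigmoid(w_d * fake + b_d)
    w_d += lr * np.mean((1 - s_r) * real - s_f * fake)
    b_d += lr * np.mean((1 - s_r) - s_f)

    # Counterfeiter step: ascend log D(fake), i.e. fool the updated police
    s_f = sigmoid(w_d * fake + b_d)
    w_g += lr * np.mean((1 - s_f) * w_d * z)
    b_g += lr * np.mean((1 - s_f) * w_d)

# After training, b_g should have drifted from 0 toward the real mean of 4,
# purely from the discriminator's feedback — no sample ever tells the
# generator the target mean directly.
```

The two players never share a loss in the usual sense: the generator improves only through the gradient that flows back through the discriminator's opinion of its fakes, which is exactly the counterfeiter learning from what the police reject.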
Artificial intelligence has profound implications for society, and for the data centers that will power it. The rapid growth of AI is contributing to the building of new services, as well as enhancing products already on the market. And the growing popularity of machine learning as a business is also boosting demand for powerful high performance computing hardware. The emergence of AI is a key theme here at Data Center Frontier. The rise of AI applications will drive demand for data center space, and have design implications for how high-density racks are powered and cooled.
In Part I the original GAN paper was presented. Part II gave an overview of DCGAN, which greatly improved the performance and stability of GANs. This final part explores the contributions of InfoGAN, which applies concepts from Information Theory to transform some of the noise terms into latent codes that have systematic, predictable effects on the outcome. As seen in the examples of Part II, one can do interesting and impressive things with arithmetic on the generator's noise vector. In the example below from the DCGAN paper, the input noise vectors of men with glasses are manipulated to give vectors that, once fed into the generator, result in women with sunglasses.
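The arithmetic itself is simple enough to sketch. In the DCGAN experiment, each concept's vector is the average of the noise vectors behind several generated faces; below, the trained generator is replaced by an arbitrary fixed linear map (purely a stand-in so the sketch runs end to end), and all latent vectors are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 100   # DCGAN draws its noise vector from a 100-dimensional space

# Hypothetical per-concept latent vectors, each the mean of three samples
# (the DCGAN figure averages three exemplars per concept).
men_glasses = rng.standard_normal((3, dim)).mean(axis=0)
men_plain   = rng.standard_normal((3, dim)).mean(axis=0)
women_plain = rng.standard_normal((3, dim)).mean(axis=0)

# "men with glasses" - "men" + "women"  ~  "women with glasses":
# subtracting removes the shared "man" content, adding swaps in "woman".
query = men_glasses - men_plain + women_plain

# Stand-in for a trained generator: any fixed map from latent space to
# pixels; a real DCGAN generator is a deep transposed-conv network.
generator = rng.standard_normal((dim, 64 * 64))
image = np.tanh(query @ generator)   # flattened fake "image" in [-1, 1]
```

The interesting empirical fact is not the arithmetic but that a trained generator's latent space is smooth enough for it to work: directions in noise space line up with semantic attributes, which is exactly the structure InfoGAN tries to make explicit.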
StarGAN can flexibly translate an input image to any desired target domain using only a single generator and a single discriminator. The images are generated by StarGAN trained on the CelebA dataset. The images are generated by StarGAN trained on the RaFD dataset. The images are generated by StarGAN trained on both the CelebA and RaFD datasets. Overview of StarGAN, consisting of two modules, a discriminator D and a generator G. (a) D learns to distinguish between real and fake images and to classify the real images into their corresponding domains.
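A shape-level sketch can show how one generator serves every domain. The key interface (as described in the StarGAN paper) is that G conditions on the target domain by concatenating a spatially replicated one-hot label with the input image, while D returns both a real/fake score and a domain prediction; the network bodies below are placeholder stand-ins, not the real convolutional architectures:

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 128          # StarGAN operates on 128x128 images
N_DOMAINS = 5        # e.g. five CelebA attribute domains (illustrative)

def G(image, target_domain):
    """Stand-in generator: builds the conditioned input the real G sees."""
    label_map = np.zeros((N_DOMAINS, H, W))
    label_map[target_domain] = 1.0                 # one-hot, tiled spatially
    conditioned = np.concatenate([image, label_map], axis=0)  # (3+N, H, W)
    # A conv net would map `conditioned` back to a 3-channel image;
    # slicing is just a runnable placeholder for that body.
    return conditioned[:3]

def D(image):
    """Stand-in discriminator: real/fake score plus domain logits."""
    real_fake = float(image.mean())                # placeholder score
    domain_logits = rng.standard_normal(N_DOMAINS) # placeholder classifier head
    return real_fake, domain_logits

x = rng.standard_normal((3, H, W))     # channels-first RGB input
fake = G(x, target_domain=2)           # translate x to (hypothetical) domain 2
score, domain_logits = D(fake)
```

Because the domain is an input rather than baked into the weights, translating among N domains needs one (G, D) pair instead of the N·(N-1) generators a pairwise approach would require.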
Since I found out about generative adversarial networks (GANs), I've been fascinated by them. A GAN is a type of neural network that is able to generate new data from scratch. You can feed it a little bit of random noise as input, and it can produce realistic images of bedrooms, or birds, or whatever it is trained to generate. One thing all scientists can agree on is that we need more data. GANs, which can be used to produce new data in data-limited situations, can prove to be really useful.
The purpose of this article series is to provide an overview of GAN research and explain the nature of the contributions. I'm new to this area myself, so this will surely be incomplete, but hopefully it can provide some quick context to other newbies. For Part I we'll introduce GANs at a high level and summarize the original paper. Feel free to skip to Part II if you're already familiar with the basics. It's assumed you're familiar with the basics of neural networks.