Forget The Future, AI Will Take Us Back To The Past

#artificialintelligence

AI is the future, and it's also the past. Not just in the sense of having been developed over previous years and decades, but also in the sense of being capable of helping to recreate human history. This power was highlighted vividly by a study published at the end of August by researchers from University College London and Duke University, who used artificial intelligence to create separate representations of two images that had been painted on the two sides of a single panel. More specifically, they used X-ray imaging techniques to produce a combined representation of the outer panels of the famous 15th-century Ghent Altarpiece; because the resulting image was a superimposition of the two paintings, it had previously been hard to analyze.


A Deep Learning Tutorial: From Perceptrons to Deep Networks

#artificialintelligence

This setting is incredibly general: your data could be symptoms and your labels illnesses; or your data could be images of handwritten characters and your labels the actual characters they represent. One of the earliest supervised training algorithms is that of the perceptron, a basic neural network building block. Say we have n points in the plane, labeled '0' and '1'. We're given a new point and we want to guess its label (this is akin to the "Dog" and "Not dog" scenario above).
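
To make the setup concrete, here is a minimal sketch of the classic perceptron learning rule on 2-D points labeled '0' and '1'; the synthetic data, learning rate, and epoch count are illustrative assumptions, not details from the tutorial.

```python
import numpy as np

# Toy data: n points in the plane, each labeled 0 or 1
# (assumed roughly linearly separable for this sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Perceptron: weights w and bias b define a separating line.
w = np.zeros(2)
b = 0.0
lr = 1.0  # learning rate (arbitrary choice)

for epoch in range(10):
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)   # predicted label for this point
        if pred != yi:               # on a mistake, nudge the boundary
            update = lr * (yi - pred)
            w += update * xi
            b += update

# Guess the label of a new point.
new_point = np.array([0.5, -0.2])
print(int(w @ new_point + b > 0))
```

The update only fires on misclassified points, which is what makes the perceptron such a simple building block: each mistake moves the decision boundary toward the offending example.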


Domain Separation Networks

Neural Information Processing Systems

The cost of large-scale data collection and annotation often makes the application of machine learning algorithms to new tasks or datasets prohibitively expensive. One approach to circumventing this cost is to train models on synthetic data, where annotations are provided automatically. Despite their appeal, such models often fail to generalize from synthetic to real images, necessitating domain adaptation algorithms that manipulate these models before they can be successfully applied. Existing approaches focus either on mapping representations from one domain to the other, or on learning to extract features that are invariant to the domain from which they were extracted. However, by focusing only on creating a mapping or shared representation between the two domains, they ignore the individual characteristics of each domain.
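
As a rough illustration of that shared-versus-private distinction, the PyTorch sketch below pairs a shared encoder (intended to capture domain-invariant features) with per-domain private encoders; the layer sizes, activations, and heads are assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinySeparationNet(nn.Module):
    """Illustrative sketch: shared features plus per-domain private features."""

    def __init__(self, in_dim=64, hid=32, n_classes=10):
        super().__init__()
        self.shared = nn.Linear(in_dim, hid)          # features meant to be domain-invariant
        self.private_source = nn.Linear(in_dim, hid)  # source-only characteristics
        self.private_target = nn.Linear(in_dim, hid)  # target-only characteristics
        self.decoder = nn.Linear(hid, in_dim)         # reconstruct input from shared + private
        self.classifier = nn.Linear(hid, n_classes)   # task head uses shared features only

    def forward(self, x, domain):
        shared = torch.relu(self.shared(x))
        private = torch.relu(
            self.private_source(x) if domain == "source" else self.private_target(x)
        )
        recon = self.decoder(shared + private)
        logits = self.classifier(shared)
        return shared, private, recon, logits
```

In a full model these components would be trained with a combination of task, reconstruction, and domain-separation objectives; the point of the sketch is only that each domain keeps its own private representation alongside the shared one.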


IEC blog » What's the difference between machine learning and deep learning?

#artificialintelligence

Artificial intelligence refers to a variety of software and hardware technologies that can be applied in numerous ways for different applications. The terms 'machine learning' and 'deep learning' are often used interchangeably in the media, but they are not the same thing. In machine learning, the machine builds up the knowledge to complete specific actions based on training data covering multiple datasets. There are many examples of machine learning in our daily lives. The performance of machine learning algorithms is directly related to the available information, which is referred to as 'representation'.


Technical Perspective: What Led Computer Vision to Deep Learning?

Communications of the ACM

We are in the middle of the third wave of interest in artificial neural networks as the leading paradigm for machine learning. The following paper by Krizhevsky, Sutskever and Hinton (henceforth KSH) is the paper most responsible for this third wave. The current wave has been called "deep learning" because of the emphasis on having multiple layers of neurons between the input and the output of the neural network; the main architectural design features, however, remain the same as in the second wave, in the 1980s. Central to that era was the publication of the back-propagation algorithm for training multilayer perceptrons by Rumelhart, Hinton and Williams [7]. This algorithm, a consequence of the chain rule of calculus, had been noted before, for example, by Werbos [8].
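
Since back-propagation is described here as a consequence of the chain rule, a small worked sketch may help; the two-layer network, tanh activation, and squared-error loss below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Minimal sketch of back-propagation as repeated application of the chain rule
# for a two-layer network with a squared-error loss. Shapes and values are
# arbitrary illustrative choices.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))        # input
t = np.array([1.0])              # target
W1 = rng.normal(size=(3, 4))     # first-layer weights
W2 = rng.normal(size=(1, 3))     # second-layer weights

# Forward pass.
h_pre = W1 @ x                   # hidden pre-activation
h = np.tanh(h_pre)               # hidden activation
y = W2 @ h                       # network output
loss = 0.5 * np.sum((y - t) ** 2)

# Backward pass: each local derivative is chained with the gradient flowing
# back from the layer above.
dL_dy = y - t                          # dL/dy from the squared-error loss
dL_dW2 = np.outer(dL_dy, h)            # dL/dW2 = dL/dy * dy/dW2
dL_dh = W2.T @ dL_dy                   # chain rule through y = W2 h
dL_dhpre = dL_dh * (1 - h ** 2)        # chain rule through tanh
dL_dW1 = np.outer(dL_dhpre, x)         # dL/dW1 = dL/dh_pre * dh_pre/dW1

print(loss, dL_dW1.shape, dL_dW2.shape)
```

Adding more layers simply extends the same chain of local derivatives, which is why the idea scales to the deep networks the current wave is named for.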