"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).
Rats use brain cells called grid cells to help them navigate, and this ability has been recreated by an AI program. Scientists have used artificial intelligence (AI) to recreate the complex neural codes that the brain uses to navigate through space. The feat demonstrates how powerful AI algorithms can assist conventional neuroscience research in testing theories about the brain's workings -- but the approach is not going to put neuroscientists out of work just yet, say the researchers. The computer program, details of which were published in Nature on 9 May, was developed by neuroscientists at University College London (UCL) and AI researchers at the London-based Google company DeepMind. It used a technique called deep learning -- a type of AI inspired by the structures in the brain -- to train a computer-simulated rat to track its position in a virtual environment.
Since complex diseases such as cancer and diabetes pose a serious threat to human health, they have been studied extensively over the past decades. However, the underlying pathogenesis of complex diseases is still not clearly understood. With the rapid development of genomics technologies, large-scale data on DNA-level variation, such as SNPs (single-nucleotide polymorphisms) and CNVs (copy number variations), allow comprehensive characterization of complex diseases and provide potential biomarkers for predicting disease status. Owing to the 'missing heritability' problem and a lack of reproducibility, the exploration of relationships between SNPs and complex diseases has shifted from single variants to interactions among biomarkers, which are defined as epistasis. First, as the number of variants increases, the combination space expands exponentially, resulting in the 'curse of dimensionality' problem.
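The exponential blow-up behind the 'curse of dimensionality' is easy to make concrete: the number of k-way interactions among n variants is the binomial coefficient C(n, k). A short sketch (the SNP counts are purely illustrative; real GWAS panels often contain hundreds of thousands of variants):

```python
from math import comb

# Illustrative variant counts -- not from any particular study.
for n in (100, 10_000, 500_000):
    pairs = comb(n, 2)      # candidate two-way (pairwise) interactions
    triples = comb(n, 3)    # candidate three-way interactions
    print(f"{n:>7} SNPs -> {pairs:.2e} pairs, {triples:.2e} triples")
```

Even at 10,000 SNPs there are already tens of millions of pairs to test, which is why exhaustive interaction searches quickly become infeasible.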
Deep learning is a part of AI and machine learning that is "based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, semi-supervised or unsupervised," according to Wikipedia. Deep learning, rather than following rigid hierarchies, is modeled on the neurons of the brain. Are our systems ready to learn? In a world that is just getting started with AI, deep learning is another leap in sophistication.
The ability to take one person's face or expression and superimpose it onto a video of another person has recently become possible. In particular, pornographic videos called "deepfakes" have emerged on websites such as Reddit and 4chan showing famous individuals' faces superimposed onto the bodies of actors. This phenomenon has significant implications: at the very least, it has the potential to undermine the reputation of people who are victims of this kind of forgery, and it poses problems for biometric ID systems.
If conventional psychology isn't up to the task, perhaps we should step back and consider a tantalizing sci-fi alternative -- that Trump doesn't operate within conventional human cognitive constraints, but rather is a new life form, a rudimentary artificial intelligence-based learning machine. When we strip away all moral, ethical and ideological considerations from his decisions and see them strictly in the light of machine learning, his behavior makes perfect sense. Consider how deep learning occurs in systems such as Google's DeepMind or IBM's Deep Blue and Watson. The goal of DNA is self-reproduction; the sole intent of DeepMind or Watson is to win.
A look under the hood of any major search, commerce, or social-networking site today will reveal a profusion of "deep-learning" algorithms. Over the past decade, these powerful artificial intelligence (AI) tools have been increasingly and successfully applied to image analysis, speech recognition, translation, and many other tasks. Indeed, the computational and power requirements of these algorithms now constitute a major and still-growing fraction of datacenter demand. Designers often offload much of the highly parallel calculations to commercial hardware, especially graphics-processing units (GPUs) originally developed for rapid image rendering. These chips are especially well-suited to the computationally intensive "training" phase, which tunes system parameters using many validated examples.
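The "training" phase described above reduces largely to dense linear algebra, which is why it maps so well onto GPUs. A minimal sketch of one parameter-tuning loop for a single linear layer (toy sizes, plain gradient descent, all names and dimensions illustrative):

```python
import numpy as np

# Toy version of the "training" phase: tuning the parameters of one
# dense layer against a batch of validated examples. The work is
# dominated by large matrix multiplications -- exactly the kind of
# highly parallel computation GPUs were built for.
rng = np.random.default_rng(0)
batch, d_in, d_out = 256, 64, 32
X = rng.standard_normal((batch, d_in))   # input examples
Y = rng.standard_normal((batch, d_out))  # validated target outputs
W = np.zeros((d_in, d_out))              # parameters to tune

lr = 0.1
loss0 = float(((X @ W - Y) ** 2).mean())
for _ in range(20):
    pred = X @ W                         # forward pass: one big matmul
    grad = 2 * X.T @ (pred - Y) / batch  # backward pass: another matmul
    W -= lr * grad                       # gradient-descent update
loss1 = float(((X @ W - Y) ** 2).mean())
print(f"mean squared error: {loss0:.3f} -> {loss1:.3f}")
```

In a real network this loop runs over millions of examples and millions of parameters, so the matrix multiplications dominate the compute and power budget.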
Artificial Intelligence (AI) is solving problems that seemed well beyond our reach just a few years back. Using deep learning, the fastest-growing segment of AI, computers are now able to learn and recognize patterns from data that were considered too complex for expert-written software. Today, deep learning is transforming every industry, including automotive, healthcare, retail and financial services. Enterprises, and their leaders, looking to get started should first get familiar with the fundamentals of deep learning, as well as understand the current challenges and how to address them. This crash course provides a starting point, as well as practical guidance on next steps.
The difference between the two pictures is that the one on the right has been tweaked a bit by an algorithm to make it difficult for a type of computer model called a convolutional neural network (CNN) to tell what it really is. In this case, the CNN thinks it's looking at a dog rather than a cat, but what's remarkable is that most people think the same thing. This is an example of what's called an adversarial image: an image specifically designed to fool neural networks into making an incorrect determination about what they're looking at. Researchers at Google Brain decided to try to figure out whether the same techniques that fool artificial neural networks can also fool the biological neural networks inside our heads, by developing adversarial images capable of making both computers and humans think that they're looking at something they aren't. Visual classification algorithms powered by convolutional neural networks are commonly used to recognize objects in images.
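One classic recipe for the kind of "tweak" described above is the fast gradient sign method (FGSM): nudge every input feature a small, bounded amount in the direction that increases the model's loss. A sketch of the mechanics on a tiny logistic classifier rather than a CNN (the weights and input here are random stand-ins, not real image data, and this is not necessarily the exact method the Google Brain team used):

```python
import numpy as np

# FGSM-style adversarial perturbation on a toy linear classifier.
# The idea carries over to CNNs: compute the gradient of the loss with
# respect to the *input*, then step each feature by +/- eps accordingly.
rng = np.random.default_rng(42)
w = rng.standard_normal(16)       # stand-in "model" weights
x = rng.standard_normal(16)       # stand-in "image" features
y = 1.0                           # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p_clean = sigmoid(w @ x)          # model confidence in the true class
# For logistic loss, the gradient w.r.t. the input x is (p - y) * w.
grad_x = (p_clean - y) * w
eps = 0.25                        # perturbation budget per feature
x_adv = x + eps * np.sign(grad_x) # small bounded tweak, per FGSM
p_adv = sigmoid(w @ x_adv)

print(f"confidence on clean input:     {p_clean:.3f}")
print(f"confidence on perturbed input: {p_adv:.3f}")
```

Because each feature moves by at most eps, the perturbed input stays visually close to the original, yet the model's confidence in the true class drops; that bounded-but-targeted property is what makes adversarial images effective.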