neural network


Creepy AI can now create '100 per cent lifelike' human faces from scratch

Daily Mail

Can you tell who is real and who is not? Artificial intelligence is now able to create lifelike human faces from scratch. Researchers at NVIDIA have been working for years on creating realistic-looking human faces from only a few source photos. For many people it is difficult to tell the difference between one of the faces generated below and an actual human face. Can you spot which is which? The source images in the top row are the only legitimate photographs of real people; the rest have been computer generated.


Artificial fly brain can tell who's who

#artificialintelligence

In an interdisciplinary project funded by a Canadian Institute for Advanced Research (CIFAR) Catalyst grant, researchers at the University of Guelph and the University of Toronto Mississauga combined expertise in fruit fly biology with machine learning. They built a biologically based algorithm that churns through low-resolution videos of fruit flies in order to test whether it is physically possible for a system with such constraints to accomplish such a difficult task. Fruit flies have small compound eyes that take in a limited amount of visual information, an estimated 29 units squared. The traditional view has been that once the image is processed by a fruit fly, it is only able to distinguish very broad features. But a recent discovery that fruit flies can boost their effective resolution with subtle biological tricks has led researchers to believe that vision could contribute significantly to the social lives of flies. This, combined with the discovery that the structure of their visual system looks a lot like a Deep Convolutional Network (DCN), led the team to ask: "Can we model a fly brain that can identify individuals?"
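
The article likens the fly's visual system to a Deep Convolutional Network. As a rough illustration of that idea (not the authors' actual model), the minimal sketch below classifies individual flies from low-resolution grayscale frames with a small CNN; the frame size (29x29) and the number of individuals (10) are assumptions chosen for the example.

```python
# Illustrative sketch only: a small CNN that assigns low-resolution frames to
# individual fly identities. Frame size and class count are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

num_flies = 10             # hypothetical number of individuals to distinguish
frame_shape = (29, 29, 1)  # assumed low-resolution single-channel frames

model = models.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=frame_shape),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_flies, activation="softmax"),  # one class per fly
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(frames, fly_ids, epochs=10)  # frames: (N, 29, 29, 1), fly_ids: (N,)
```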


How a Fascination With Machinery Led Irina Nicolae to AI Research

#artificialintelligence

Machine learning researcher Irina Nicolae is here to dispel a common misconception: You don't have to be a math whiz to end up working in technology. Growing up in Bucharest, Romania, Irina had relatively little interest in numerics. She was, however, captivated by machinery and how different parts fit together to perform a task. It was this fascination that eventually led her to programming. Today, Irina is turning her longtime passion into action in her role as a research scientist at IBM Research – Ireland.


These faces show how far AI image generation has advanced in just four years

#artificialintelligence

Developments in artificial intelligence move at a startling pace -- so much so that it's often difficult to keep track. But one area where progress is as plain as the nose on your AI-generated face is the use of neural networks to create fake images. In the image above you can see what four years of progress in AI image generation looks like. The crude black-and-white faces on the left are from 2014, published as part of a landmark paper that introduced the AI tool known as the generative adversarial network (GAN). The color faces on the right come from a paper published earlier this month, which uses the same basic method but is clearly a world apart in terms of image quality.


New machine learning algorithm breaks text CAPTCHAs easier than ever

ZDNet

Academics from the UK and China have developed a new machine learning algorithm that can break text-based CAPTCHA systems with less effort, faster, and with higher accuracy than all previous methods. This new algorithm, developed by scientists from Lancaster University (UK), Northwest University (China), and Peking University (China), is based on the concept of a GAN, which stands for "Generative Adversarial Network." GANs are a special class of artificial intelligence algorithms that are useful in scenarios where the algorithm doesn't have access to large quantities of training data. Classic machine learning algorithms usually require millions of data points to train a model to perform a task with the desired degree of accuracy. A GAN has the advantage that it can work with a much smaller batch of initial data points.
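
For readers unfamiliar with the GAN idea the excerpt leans on, here is a minimal, generic training-step sketch: a generator tries to produce convincing samples while a discriminator tries to tell them from real ones. This is not the researchers' CAPTCHA solver; image shapes, layer sizes and learning rates are arbitrary assumptions.

```python
# Generic GAN sketch (illustration only, not the CAPTCHA-breaking system).
import tensorflow as tf
from tensorflow.keras import layers, models

latent_dim = 64

generator = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28, 1)),
])

discriminator = models.Sequential([
    layers.Flatten(input_shape=(28, 28, 1)),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability the input is real
])

d_opt = tf.keras.optimizers.Adam(1e-4)
g_opt = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(real_images):
    batch = tf.shape(real_images)[0]
    noise = tf.random.normal((batch, latent_dim))
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_images = generator(noise, training=True)
        real_pred = discriminator(real_images, training=True)
        fake_pred = discriminator(fake_images, training=True)
        # Discriminator: push real toward 1 and fake toward 0.
        d_loss = bce(tf.ones_like(real_pred), real_pred) + \
                 bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator: try to make the discriminator call fakes real.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss
```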


Listen to the 'perfect Christmas song' created by AI

Daily Mail

Catchy Christmas songs can now be created by a special songwriting AI, taught by studying existing festive tunes. The system came up with catchy jingles with names like 'Syllabub Chocolatebell', 'Peaches Twinkleleaves' and 'Cocoa Jollyfluff'. Researchers from Made by AI trained a neural network by inputting one hundred Christmas tunes in the form of Musical Instrument Digital Interface (MIDI) files. It then picked out recurring themes, motifs, instruments and rhythms to generate its own hits.


Scalable multi-node training with TensorFlow Amazon Web Services

#artificialintelligence

We've heard from customers that scaling TensorFlow training jobs to multiple nodes and GPUs successfully is hard. TensorFlow has distributed training built in, but it can be difficult to use. Recently, we made optimizations to TensorFlow and Horovod to help AWS customers scale TensorFlow training jobs to multiple nodes and GPUs. With these improvements, any AWS customer can use an AWS Deep Learning AMI to train ResNet-50 on ImageNet in just under 15 minutes. To achieve this, 32 Amazon EC2 instances, each with 8 GPUs, for a total of 256 GPUs, were harnessed with TensorFlow. All of the required software and tools for this solution ship with the latest Deep Learning AMIs (DLAMIs), so you can try it out yourself. You can train faster, implement your models faster, and get results faster than ever before. This blog post describes our results and shows you how to try out this easier and faster way to run distributed training with TensorFlow.

Figure A: ResNet-50 ImageNet model training with the latest optimized TensorFlow with Horovod on a Deep Learning AMI takes 15 minutes on 256 GPUs.
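
As a minimal sketch of the Horovod-with-TensorFlow pattern the post describes, the example below shows the usual steps: one process per GPU, a learning rate scaled by the number of workers, and a wrapped optimizer that averages gradients across all GPUs. The model, hyperparameters and dataset are placeholders, not the exact configuration behind the 15-minute ResNet-50 result.

```python
# Minimal Horovod + TensorFlow (Keras) sketch; launched with horovodrun/mpirun,
# one process per GPU across the nodes. Hyperparameters are placeholders.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each worker process to a single local GPU.
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.applications.ResNet50(weights=None, classes=1000)

# Scale the learning rate with the worker count, then wrap the optimizer so
# gradients are averaged across all GPUs via ring-allreduce.
opt = tf.keras.optimizers.SGD(learning_rate=0.1 * hvd.size(), momentum=0.9)
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    # Ensure every worker starts from the same initial weights.
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
]

# model.fit(train_dataset, epochs=90, callbacks=callbacks,
#           verbose=1 if hvd.rank() == 0 else 0)
```

A run like the one described above would start this script once per GPU across the fleet, for example with `horovodrun -np 256` over the 32 instances.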


Artificial intelligence, machine learning momentum continues to build

ZDNet

A group of AI experts have published the Artificial Intelligence Index 2018 annual report, detailing the growth in AI academic research, use by industry, mentions by government, patents, and technical performance in computer vision and natural language processing. While the first report last year focused on North American AI activities, this year's report includes efforts in Europe, China, South Korea and Japan. One measure of AI activity across regions was the output of academic papers. On this count, Europe was leading, accounting for 28 percent of AI papers last year, followed by China, which accounted for 25 percent, and the US with 17 percent. The most widely covered topics were machine learning and probabilistic learning, neural networks, and computer vision.


Artificial Intelligence Composes New Christmas Songs

#artificialintelligence

One of the most common uses of neural networks is the generation of new content, given certain constraints. A neural network is created, then trained on source content – ideally with as much reference material as possible. Then, the model is asked to generate original content in the same vein. This generally has mixed, but occasionally amusing, results. The team at [Made by AI] had a go at generating Christmas songs using this very technique.
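
To make the "train on source material, then sample new content" recipe concrete, here is a minimal sketch. It is not the Made by AI system: it assumes the songs have already been converted into sequences of integer note tokens (for example, pitches extracted from MIDI), trains a small recurrent model to predict the next note, and then samples new notes one at a time.

```python
# Illustrative next-note model and sampler; token encoding and sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size = 128   # e.g. MIDI pitch numbers 0-127 (assumption)
seq_len = 32       # length of the context window fed to the model

model = models.Sequential([
    layers.Embedding(vocab_size, 64),
    layers.LSTM(128),
    layers.Dense(vocab_size, activation="softmax"),  # distribution over the next note
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(note_windows, next_notes, epochs=50)  # sliding windows from the corpus

def generate(seed, length=64, temperature=1.0):
    """Sample `length` new notes; `seed` should contain at least seq_len tokens."""
    notes = list(seed)
    for _ in range(length):
        window = np.array(notes[-seq_len:])[None, :]
        probs = model.predict(window, verbose=0)[0]
        logits = np.log(probs + 1e-9) / temperature   # temperature-scaled sampling
        probs = np.exp(logits) / np.sum(np.exp(logits))
        notes.append(int(np.random.choice(vocab_size, p=probs)))
    return notes
```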


Introduction to Regularization to Reduce Overfitting of Deep Learning Neural Networks

#artificialintelligence

The objective of a neural network is to have a final model that performs well both on the data we used to train it (i.e., the training dataset) and on the new data on which the model will be used to make predictions. The central challenge in machine learning is that we must perform well on new, previously unseen inputs -- not just those on which our model was trained. The ability to perform well on previously unobserved inputs is called generalization.
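
As a quick illustration of the regularization techniques the article goes on to introduce, the sketch below adds two of the most common ones to a small Keras model: an L2 penalty on the weights and dropout. The architecture, input size and penalty strength are arbitrary choices for the example, not recommendations from the article.

```python
# Minimal sketch of L2 weight penalties and dropout as regularizers.
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(1e-4)),  # penalize large weights
    layers.Dropout(0.5),                                     # randomly drop units during training
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# A shrinking gap between training and validation accuracy is the practical sign
# that regularization is improving generalization rather than just slowing training.
```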