Deep Learning


What is it like to be a machine learning engineer in 2018?

#artificialintelligence

There are so many tools, platforms and resources available that MLEs can focus their time on solving problems critical to their field or company instead of worrying about building platforms and hand-rolling numerical algorithms. Google Cloud offers easy ways to build and deploy TensorFlow models, including its new TPU support in beta; AWS has an ever-evolving suite of deep learning AMIs; and Nvidia has a great deep learning SDK. In parallel, Apple's Core ML and Android's NN API make it simpler and faster to deploy models on phones; this will continue to push the boundary for developing and releasing ML apps. With all of the above, there is healthy competition among the big players in the cloud space, pushing the whole ecosystem forward. And yet, most of them are finding ways to collaborate on open standards like ONNX.


Experts Bet on First Deepfakes Political Scandal

IEEE Spectrum Robotics Channel

A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They're betting on whether someone will create a so-called Deepfake video about a political candidate that receives more than 2 million views before getting debunked by the end of 2018. The actual stakes in the bet are fairly small: Manhattan cocktails as a reward for the "yes" camp and tropical tiki drinks for the "no" camp. But the implications of the technology behind the bet's premise could potentially reshape governments and undermine societal trust in the idea of having shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened in real life.


AI and Carbon Nanotubes Are Now Being Used to Improve the World's... Keyboards?

#artificialintelligence

When it comes to groundbreaking research, there are two fields that seem to occupy the news cycle: carbon nanotubes and artificial intelligence. The potential combination of those two fields of study seems like it could radically change the world as we know it, or, as South Korean scientists have discovered, at least change how we type. The carbon atom, one of the building blocks of life, gains radical new abilities when assembled into long, thin chains, known as carbon nanotubes. Think ultra-flexible films that are better at stopping bullets than Kevlar vests, or bio-engineered plants that can detect land mines and explosives. And AI, trained using deep learning techniques, is soon going to make it almost impossible to discern fake videos from real ones.


NVIDIA Opening AI Research Lab in Toronto

NVIDIA Blogs

#artificialintelligence

Toronto is a thriving hub for AI experts, thanks in part to foundational work out of the University of Toronto and government-supported research organizations like the Vector Institute. We're tapping further into this expertise by investing in a new AI research lab -- led by leading computer scientist Sanja Fidler -- that will become the focal point of our presence in the city. NVIDIA's Toronto office opened in 2015, leveraging our acquisition of TransGaming, a game-technology company, and currently numbers about 50. With the new lab, our goal is to triple the number of AI and deep learning researchers working there by year's end. It will be a state-of-the-art facility for AI talent to work in and will expand the footprint of our office by about half to accommodate the influx of talent.


UC Irvine Deep Learning Machine Teaches Itself To Solve A Rubik's Cube

#artificialintelligence

Anyone who has lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube, and to accomplish the feat without peeling the stickers off and rearranging them. Apparently the six-sided contraption presents a special kind of challenge to modern deep learning techniques that makes it more difficult than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they came up with is very different from an algorithm designed to solve the toy from any position.


Light on Math Machine Learning: Intuitive Guide to Convolution Neural Networks

#artificialintelligence

This is the second article in my series introducing machine learning concepts while stepping very lightly on mathematics. If you missed the previous article, you can find it here (on KL divergence). Fun fact: I'm going to make this an interesting adventure by introducing a machine learning concept for every letter in the alphabet (this would be for the letter C). Convolution neural networks (CNNs) are a family of deep networks that can exploit the spatial structure of data (e.g., images). Think of a problem where we want to identify whether there is a person in a given image.
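The spatial-structure idea the summary mentions can be made concrete with a minimal sketch of the 2D convolution at the heart of a CNN layer, written here in plain NumPy purely for illustration (real CNN frameworks use optimized, batched kernels):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid (no-padding) 2D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output value depends only on a local patch of the input;
            # this locality is how CNNs exploit spatial structure.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0]])  # toy horizontal-difference filter
result = conv2d(image, edge_kernel)
print(result.shape)  # (4, 3)
```

The same small kernel slides over every position, so the layer learns far fewer parameters than a fully connected layer would need for the same input.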


Pediatric Bone Age Assessment Using Deep Convolutional Neural Networks

#artificialintelligence

Skeletal bone age assessment is a common clinical practice to diagnose endocrine and metabolic disorders in child development. In this paper, we describe a fully automated deep learning approach to the problem of bone age assessment using data from the 2017 Pediatric Bone Age Challenge organized by the Radiological Society of North America. The dataset for this competition consists of 12.6k radiological images. Each radiograph in this dataset is an image of a left hand labeled with the bone age and the sex of a patient. Our approach utilizes several deep neural network architectures trained end-to-end.


Generative Adversarial Networks -- Explained – Towards Data Science

#artificialintelligence

Deep learning has changed the way we work and compute, and has made our lives a lot easier. As Andrej Karpathy put it, it is indeed Software 2.0, as we have taught machines to figure things out themselves. Much of its prolific success can be ascribed to existing deep learning techniques. Deep generative models, however, had made less of an impact, owing to their inability to approximate intractable probabilistic computations. Ian Goodfellow found a solution that could sidestep these difficulties faced by generative models and created an ingenious new model called the Generative Adversarial Network.
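The trick Goodfellow introduced is to replace intractable likelihood computations with a two-player game between a generator and a discriminator. The sketch below computes the two adversarial losses from that minimax game on a toy 1-D problem; the linear "networks" and their fixed weights are hypothetical stand-ins, chosen only to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w, b):
    # Toy generator: a single linear map from noise to a fake sample.
    return w * z + b

def discriminator(x, v, c):
    # Toy discriminator: logistic probability that x is "real".
    return 1.0 / (1.0 + np.exp(-(v * x + c)))

# Real data from N(4, 1); generator input noise from N(0, 1).
real = rng.normal(4.0, 1.0, size=100)
z = rng.normal(0.0, 1.0, size=100)
fake = generator(z, w=1.0, b=0.0)

# D maximizes log D(x) + log(1 - D(G(z)));
# G minimizes log(1 - D(G(z))) (shown here as the equivalent
# non-saturating form, minimizing -log D(G(z))).
d_loss = -np.mean(np.log(discriminator(real, 1.0, 0.0)) +
                  np.log(1.0 - discriminator(fake, 1.0, 0.0)))
g_loss = -np.mean(np.log(discriminator(fake, 1.0, 0.0)))
```

In a real GAN these two losses are minimized alternately by gradient descent on the two networks' parameters; here the weights are frozen just to show what each side is optimizing.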


Automatic Writing With Deep Learning - DZone AI

#artificialintelligence

X is some sort of object, e.g. an email text, an image, or a document. Y is either a single class label from a finite set of labels, like spam/no spam, a detected object, or a cluster name for this document, or some number, like next month's salary or a stock price. While such tasks can be daunting to solve (like sentiment analysis or predicting stock prices in real time), they require rather clear steps to achieve good levels of mapping accuracy. Again, I'm not discussing situations with a lack of training data to cover the modeled phenomenon, or poor feature selection. In contrast, somewhat less straightforward areas of AI are the tasks that present you with the challenge of predicting structures as fuzzy as words, sentences, or complete texts.
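The X-to-Y mapping described above can be sketched with the article's own spam/no-spam example. The keyword-lookup "classifier" here is a deliberately trivial, hypothetical stand-in for a trained model, just to make the input and output types concrete:

```python
# Toy stand-in for a learned mapping f: X (email text) -> Y (label).
# The keyword set is invented for illustration, not a real model.
SPAM_WORDS = {"winner", "prize", "free"}

def classify(email_text: str) -> str:
    words = set(email_text.lower().split())
    return "spam" if words & SPAM_WORDS else "no spam"

print(classify("You are a winner, claim your free prize"))  # spam
print(classify("Meeting moved to 3pm tomorrow"))            # no spam
```

A real system would learn the mapping from labeled examples, but the interface is the same: an object X goes in, a label Y comes out.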


NVIDIAVoice: Why Did NVIDIA Build The World's Largest GPU?

Forbes Technology

At some point in the not too distant future, the answer will seem self-evident. Like a lot of things, time changes perspective; the essential advancements we take for granted now were once deemed insurmountable. I believe we'll look back at the introduction of a 2 petaFLOPS deep learning system as essential to the evolution of AI in the enterprise. Single GPU systems once offered a seemingly limitless playground for researchers and developers on which to innovate. As deep learning model complexity and datasets grew to address increasingly exotic (but important) use cases, the standard currency of deep learning compute grew in response.