"Many researchers … speculate that the information-processing abilities of biological neural systems must follow from highly parallel processes operating on representations that are distributed over many neurons. [Artificial neural networks] capture this kind of highly parallel computation based on distributed representations"
– from Machine Learning (Section 4.1.1; page 82) by Tom M. Mitchell, McGraw Hill Companies, Inc. (1997).
There are so many tools, platforms, and resources available that MLEs can focus their time on solving problems critical to their field or company instead of worrying about building platforms and hand-rolling numerical algorithms. Google Cloud offers easy ways to build and deploy TensorFlow models, including its new TPU support in beta; AWS has an ever-evolving suite of deep learning AMIs; and Nvidia has a great deep learning SDK. In parallel, Apple's Core ML and Android's Neural Networks API make it simpler and faster to deploy models on phones, which will continue to push the boundary for developing and releasing ML apps. With all of the above, there is healthy competition among the big players in the cloud space pushing the whole ecosystem forward. And yet, most of them are finding ways to collaborate on open standards like ONNX.
A quiet wager has taken hold among researchers who study artificial intelligence techniques and the societal impacts of such technologies. They're betting on whether, by the end of 2018, someone will create a so-called deepfake video about a political candidate that receives more than 2 million views before getting debunked. The actual stakes in the bet are fairly small: Manhattan cocktails as a reward for the "yes" camp and tropical tiki drinks for the "no" camp. But the implications of the technology behind the bet's premise could potentially reshape governments and undermine societal trust in the idea of having shared facts. It all comes down to when the technology may mature enough to digitally create fake but believable videos of politicians and celebrities saying or doing things that never actually happened.
When it comes to groundbreaking research, there are two fields that seem to occupy the news cycle: carbon nanotubes and artificial intelligence. The potential combination of those two fields of study seems like it could radically change the world as we know it, or, as South Korean scientists have discovered, at least change how we type. The carbon atom, one of the building blocks of life, gains radical new abilities when assembled into long, thin chains known as carbon nanotubes. Think ultra-flexible films that are better at stopping bullets than Kevlar vests, or bio-engineered plants that can detect land mines and explosives. And AI, trained using deep learning techniques, will soon make it almost impossible to discern fake videos from real ones.
Toronto is a thriving hub for AI experts, thanks in part to foundational work out of the University of Toronto and government-supported research organizations like the Vector Institute. We're tapping further into this expertise by investing in a new AI research lab -- led by leading computer scientist Sanja Fidler -- that will become the focal point of our presence in the city. NVIDIA's Toronto office opened in 2015, leveraging our acquisition of TransGaming, a game-technology company, and currently numbers about 50 people. With the new lab, our goal is to triple the number of AI and deep learning researchers working there by year's end. It will be a state-of-the-art facility for AI talent to work in, and it will expand the footprint of our office by about half to accommodate the influx of talent.
What if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000. It's a multibillion-dollar opportunity that can help billions. The worldwide pharmaceutical market, one of the slowest monolithic industries to adapt, surpassed $1.1 trillion in 2016. In 2018, the top 10 pharmaceutical companies alone are projected to generate over $355 billion in revenue.
Anyone who has lived through the 1980s knows how maddeningly difficult it is to solve a Rubik's Cube, let alone accomplish the feat without peeling the stickers off and rearranging them. Apparently the six-sided contraption presents a special kind of challenge to modern deep learning techniques, making it more difficult than, say, learning to play chess or Go. That used to be the case, anyway. Researchers from the University of California, Irvine, have developed a new deep learning technique that can teach itself to solve the Rubik's Cube. What they came up with is very different from an algorithm designed to solve the toy from any position.
Nvidia CEO Jensen Huang has unveiled a new souped-up variant of its $3,000 Titan V GPU, which the company launched last year and billed as the most powerful PC GPU ever. Huang unveiled the 'Titan V CEO Edition' at the Computer Vision and Pattern Recognition conference in Salt Lake City, Utah, where he gave away 20 of the cards to AI researchers working on robotics and autonomous driving projects. And for now, these are the only people in the world who can get their hands on this limited edition model. The Titan V is Nvidia's most powerful PC GPU, but while gamers may drool over its power, the $3,000 board is aimed primarily at researchers and scientists.
This is the second article in my series introducing machine learning concepts while stepping very lightly on the mathematics. If you missed the previous article (on KL divergence), you can find it here. Fun fact: I'm going to make this an interesting adventure by introducing a machine learning concept for every letter of the alphabet (this one is for the letter C). Convolutional neural networks (CNNs) are a family of deep networks that can exploit the spatial structure of data such as images. Think of a problem where we want to identify whether there is a person in a given image.
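To make "exploiting spatial structure" concrete, the building block of a CNN layer can be sketched as a single 2D convolution: a small kernel slides over the image and computes a dot product at each position, so nearby pixels are processed together. Here is a minimal NumPy sketch; the toy image and the vertical-edge kernel are invented for illustration:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over the image, taking a dot product at each
    position -- the core operation of a convolutional layer (no padding,
    stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 4x4 image: dark left half, bright right half.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A hand-made vertical-edge detector; a real CNN learns such kernels.
edge_kernel = np.array([[-1, 1],
                        [-1, 1]], dtype=float)

print(conv2d(image, edge_kernel))
# The response is large only in the middle column, where the edge is.
```

Because the same small kernel is reused at every position, the layer has far fewer parameters than a fully connected layer over the raw pixels, which is exactly why CNNs scale to image tasks like person detection.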
What could you or I do without Google's legions of ace AI programmers and racks of neural network training hardware? Let's look at the ways we can make a natural language bot of our own. As you'll see, it's entirely doable. One of the first steps in engineering a solution is to break it down into smaller steps. Any conversation consists of a back-and-forth between two people, or, in our case, between a person and a chunk of silicon.
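That back-and-forth can be sketched as a loop: read one utterance, pick a reply, repeat. Here is a minimal rule-based sketch; the patterns and canned replies are invented for the example and are not from any particular library:

```python
import re

# Each rule pairs a pattern with a canned reply; first match wins.
# These rules are purely illustrative -- a real bot would have many more,
# or replace this table with a learned model.
RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), "Hello! What would you like to talk about?"),
    (re.compile(r"\bweather\b", re.I), "I can't see outside, but I hope it's sunny."),
    (re.compile(r"\bbye\b", re.I), "Goodbye!"),
]

def respond(utterance):
    """Produce the bot's half of one conversational turn."""
    for pattern, reply in RULES:
        if pattern.search(utterance):
            return reply
    return "Tell me more."  # fallback when nothing matches

# Simulate a short back-and-forth, one turn per user utterance.
for turn in ["Hi there!", "How's the weather?", "bye"]:
    print("you>", turn)
    print("bot>", respond(turn))
```

Swapping the `respond` function for something smarter, such as a retrieval model or a neural sequence model, changes the quality of the replies without changing the shape of the loop, which is why decomposing the problem this way is useful.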