Deep Learning


Novel synaptic architecture for brain-inspired computing

#artificialintelligence

The findings are an important step toward building more energy-efficient computing systems that are also capable of learning and adapting in the real world. They were published last week in a paper in the journal Nature Communications. The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications. Over the past few years, deep learning algorithms have proven highly successful at solving complex cognitive tasks such as controlling self-driving cars and understanding language. At the heart of these algorithms are artificial neural networks -- mathematical models of the neurons and synapses of the brain -- that are fed huge amounts of data so that their synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.
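
To make "autonomously adjusted synaptic strengths" concrete, here is a minimal sketch (plain NumPy, not the authors' code; the toy data, layer sizes, and learning rate are arbitrary illustrative choices) of a tiny two-layer network whose weights are tuned by gradient descent on example data:

```python
# Minimal illustration (not the paper's code): a tiny two-layer network whose
# "synaptic strengths" (weight matrices) are adjusted from data by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) from noisy samples (a stand-in for a data stream).
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# Synaptic strengths: input->hidden and hidden->output weights.
W1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)          # hidden activations
    pred = h @ W2 + b2                # network output
    err = pred - y                    # prediction error

    # Backward pass: gradients of mean squared error w.r.t. each weight.
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)

    # Adjust the synaptic strengths.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float((err ** 2).mean()))
```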


A Quick History of Modern Robotics

#artificialintelligence

General Motors deployed the first mechanical-arm robot to operate one of its assembly lines as early as 1959. Since that time, robots have been employed to perform numerous manufacturing tasks such as welding, riveting, and painting. This first generation of robots was inflexible, could not easily respond to errors, and required individual programming specific to the tasks they were designed to perform. These robots were governed and inspired by logic -- a series of programs coded into their operating systems. Now, the next wave of intelligent robotics is taking advantage of a different kind of learning, predicated on experience rather than logical instruction, to learn how to perform tasks in much the same way that a child would.


GPU computing: Accelerating the deep learning curve

ZDNet

Artificial intelligence (AI) may be what everyone's talking about, but getting involved isn't straightforward. You'll need a more than decent grasp of maths and theoretical data science, plus an understanding of neural networks and deep learning fundamentals -- not to mention a good working knowledge of the tools required to turn those theories into practical models and applications. You'll also need an abundance of processing power -- beyond that required by even the most demanding of standard applications. One way to get this is via the cloud but, because deep learning models can take days or even weeks to come up with the goods, that can be hugely expensive. In this article, therefore, we'll look at on-premises alternatives and why the once-humble graphics controller is now the must-have accessory for the would-be AI developer.
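
As a small, hypothetical illustration of why the graphics controller matters: with a framework such as PyTorch, the same training code runs on the CPU or on an NVIDIA GPU, and only the device selection changes (the model, sizes, and data below are placeholders):

```python
# Hypothetical sketch: the same training step runs on CPU or GPU; only the device changes.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("training on:", device)

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for real training data.
x = torch.randn(64, 784, device=device)
labels = torch.randint(0, 10, (64,), device=device)

loss = loss_fn(model(x), labels)
loss.backward()
optimizer.step()
print("one training step done, loss =", loss.item())
```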


Deep Learning on the Edge – Towards Data Science

#artificialintelligence

Scalable Deep Learning services are subject to several constraints. Depending on your target application, you may require low latency, enhanced security or long-term cost effectiveness. Hosting your Deep Learning model in the cloud may not be the best solution in such cases. Computing on the edge alleviates these issues and provides other benefits. Edge here refers to computation performed locally on the consumer's device.
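
One common route to edge deployment is to shrink a trained model for on-device inference. The following is a purely illustrative sketch (TensorFlow Lite; the toy model and file names are placeholders, not the article's setup):

```python
# Hypothetical sketch: convert a trained Keras model for on-device ("edge") inference.
import tensorflow as tf

# Stand-in for a model you have already trained in the cloud or on a workstation.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert with default optimizations (e.g. weight quantization) to cut size and latency.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model_edge.tflite", "wb") as f:
    f.write(tflite_model)

# On the device itself, inference runs locally -- no round trip to a cloud API.
interpreter = tf.lite.Interpreter(model_path="model_edge.tflite")
interpreter.allocate_tensors()
```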


On Neural Networks

Communications of the ACM

I am only a layman in the neural network space, so the ideas and opinions in this column are sure to be refined by comments from more knowledgeable readers. The recent successes of multilayer neural networks have made headlines. Much earlier work on what I imagine to be single-layer networks proved to have limitations. Indeed, the famous book Perceptrons, by Turing laureate Marvin Minsky and his colleague Seymour Papert, put the kibosh (that's a technical term) on further research in this space for some time. Among the most visible signs of advancement in this arena is the success of the DeepMind AlphaGo multilayer neural network that beat the international Go champion Lee Sedol four games out of five in March 2016 in Seoul.
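
The canonical example of that single-layer limitation is XOR, which no single-layer perceptron can represent but which a small multilayer network handles easily. A purely illustrative sketch (scikit-learn, not from the column):

```python
# Illustrative only: XOR is the textbook case of the single-layer limitation.
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 0]  # XOR

# A single-layer perceptron cannot separate XOR with one linear boundary.
single = Perceptron(max_iter=1000).fit(X, y)
print("single-layer accuracy:", single.score(X, y))   # < 1.0: no single line separates XOR

# One hidden layer is enough to represent it.
multi = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                      solver="lbfgs", max_iter=2000, random_state=0).fit(X, y)
print("multilayer accuracy:", multi.score(X, y))      # usually 1.0
```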


Julia – A Fresh Approach to Numerical Computing

#artificialintelligence

This post is authored by Viral B. Shah, co-creator of the Julia language and co-founder and CEO at Julia Computing, and Avik Sengupta, head of engineering at Julia Computing. The Julia language provides a fresh approach to numerical computing, where there is no longer a compromise between performance and productivity. A high-level language that makes writing natural mathematical code easy, with runtime speeds approaching raw C, Julia has been used to model economic systems at the Federal Reserve, drive autonomous cars at the University of California, Berkeley, optimize the power grid, calculate solvency requirements for large insurance firms, model the US mortgage markets and map all the stars in the sky. It should be no surprise, then, that Julia is a natural fit in many areas of machine learning, and its strengths make it an excellent language in which to implement these algorithms.


AI Drone Learns to Detect Brawls

IEEE Spectrum Robotics Channel

Drones armed with computer vision software could enable new forms of automated skyborne surveillance to watch for violence below. One glimpse of that future comes from UK and Indian researchers who demonstrated a drone surveillance system that can automatically detect small groups of people fighting each other. The seed of the idea for such a drone surveillance system was first planted in the wake of the Boston Marathon bombing that killed three and injured hundreds in 2013. It was not until the Manchester Arena bombing that killed 23 and wounded 139 -- including many children leaving an Ariana Grande concert -- that the researchers made some progress. This time, they harnessed a form of the popular artificial intelligence technique known as deep learning.


IBM And NVIDIA Reach The Summit: The World's Fastest Supercomputer

Forbes Technology

IBM, NVIDIA, and the U.S. Department of Energy (DOE) recently announced that they have completed testing the world's fastest supercomputer, Summit, at the Oak Ridge National Laboratory in Oak Ridge, Tennessee. Capable of over 200 petaflops (200 quadrillion operations per second), Summit consists of 4,600 IBM dual-socket POWER9 nodes, connected by over 185 miles of fiber optic cabling. Each node is equipped with six NVIDIA Volta Tensor Core GPUs, delivering total throughput that is 8 times faster than its predecessor, Titan, for double-precision tasks, and 100 times faster for the reduced-precision tasks common in deep learning and AI. China has held the top spot on the Top500 list for the last five years, so this brings the virtual HPC crown home to the USA. Some of the specifications are truly amazing: the system circulates water for cooling at the rate of 9 Olympic pools per day, and as an AI supercomputer, Summit has already achieved (limited) "exascale" status, delivering 3 exaflops of reduced-precision AI performance.
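
A quick back-of-envelope check of those headline figures (my own arithmetic, not from the article; the actual node and GPU counts may differ slightly from the rounded numbers quoted) lands roughly where a Volta-class GPU's advertised double-precision peak would suggest:

```python
# Back-of-envelope arithmetic on the reported figures (illustrative, not from the article).
nodes = 4600
gpus_per_node = 6
peak_fp64 = 200e15            # 200 petaflops = 2.0e17 double-precision ops/sec

total_gpus = nodes * gpus_per_node                 # 27,600 GPUs
fp64_per_gpu = peak_fp64 / total_gpus              # ~7.2 teraflops per GPU
print(f"{total_gpus} GPUs, ~{fp64_per_gpu / 1e12:.1f} TF of FP64 each")

# The "AI" figure relies on reduced precision: 3 exaflops vs. 200 petaflops of FP64.
ai_peak = 3e18
print(f"reduced-precision speedup: ~{ai_peak / peak_fp64:.0f}x")
```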


Why This Startup Created A Deep Learning Chip For Autonomous Vehicles

Forbes Technology

Israeli artificial intelligence (AI) startup Hailo Technologies has closed a $12.5 million series A from Maniv Mobility, OurCrowd, and NextGear to develop a chip for deep learning on edge devices and real-time processing of high-resolution sensory data. According to a report from Markets and Markets, edge computing will be worth $6.72 billion by 2020, and IC Insights reported that integrated circuits in cars are expected to generate global sales of $42.9 billion in 2021. In 2017, McKinsey reported in its study "Self-Driving Car Technology: When Will Robots Hit the Road?" that ADAS systems grew from 90 million units in 2014 to 140 million in 2016. "Because of the low latency required for autonomous driving and advanced driving assistance, deep learning with convolutional neural networks, running on in-vehicle hardware, is necessary," offers Tom Coughlin, IEEE Fellow and President at Coughlin Associates.
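
To illustrate the latency point in that quote, here is a small, hypothetical sketch (PyTorch/torchvision; the network and input size are placeholders, not Hailo's workload) that times local, in-vehicle-style inference for a single camera frame; a cloud-hosted alternative would add a network round trip on top of this:

```python
# Hypothetical sketch of the latency argument: time per-frame inference of a small
# convolutional network running locally (no network round trip to a remote server).
import time
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=10).eval()
frame = torch.randn(1, 3, 224, 224)   # stand-in for one camera frame

with torch.no_grad():
    model(frame)                       # warm-up run
    start = time.perf_counter()
    for _ in range(50):
        model(frame)
    per_frame_ms = (time.perf_counter() - start) / 50 * 1000

print(f"~{per_frame_ms:.1f} ms per frame on this hardware; "
      "a cloud round trip would add network latency on top")
```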


Machine Learning's Limits

#artificialintelligence

Semiconductor Engineering sat down with Rob Aitken, an Arm fellow; Raik Brinkmann, CEO of OneSpin Solutions; Patrick Soheili, vice president of business and corporate development at eSilicon; and Chris Rowen, CEO of Babblelabs. What follows are excerpts of that conversation. SE: Where are we with machine learning? What problems still have to be resolved? Aitken: We're in a state where things are changing so rapidly that it's really hard to keep up with where we are at any given instant. We've seen that machine learning has been able to take some of the things we used to think were very complicated and render them simple to do.