Intel on Tuesday is taking the wraps off the Nervana Neural Network Processor (NNP), formerly known as "Lake Crest," a chip three years in the making that's designed expressly for AI and deep learning. Along with explaining its unique architecture, Intel announced that Facebook has been a close collaborator as it prepares to bring the Nervana NNP to market. The chipmaker also laid out the beginnings of a product roadmap. While there are platforms available for deep learning applications, this is the first of its kind -- built from the ground up for AI -- that's commercially available, Naveen Rao, corporate VP of Intel's Artificial Intelligence Products Group, told ZDNet. It's rare for Intel to deliver a whole new class of products, he said, so the Nervana NNP family demonstrates Intel's commitment to the AI space.
When the AI boom came a-knocking, Intel wasn't around to answer the call. Now, the company is attempting to reassert its authority in the silicon business by unveiling a new family of chips designed especially for artificial intelligence: the Intel Nervana Neural Network Processor family, or NNP for short. The NNP family is meant as a response to the needs of machine learning, and is destined for the data center, not your PC. Intel's CPUs may still be a stalwart of server stacks (by some estimates, it has a 96 percent market share in data centers), but the workloads of contemporary AI are much better served by the graphics processors, or GPUs, coming from firms like Nvidia and ARM. Consequently, demand for these companies' chips has skyrocketed.
Intel enlisted one of the most enthusiastic users of deep learning and artificial intelligence to help out with the chip design. "We are thrilled to have Facebook in close collaboration sharing their technical insights as we bring this new generation of AI hardware to market," said Intel CEO Brian Krzanich in a statement. On top of social media, Intel is targeting healthcare, automotive and weather, among other applications. Unlike its PC chips, the Nervana NNP is an application-specific integrated circuit (ASIC) that's specially made for both training and executing deep learning algorithms. "The speed and computational efficiency of deep learning can be greatly advanced by ASICs that are customized for ... this workload," writes Intel's VP of AI, Naveen Rao.
The rise of robots could lead to 'unprecedented' change and wipe out over a third of jobs in some areas by the 2030s, a new report warns. A 'heat map' of Britain shows the areas most at risk of automation, with workers in the ex-industrial heartlands of the North and Midlands most likely to lose their jobs. The upheaval tossed up by 'supercharged' technological change over the next 15 years could make the industrial revolution pale in comparison, the study says. The report, The Impact of AI in UK Constituencies, by think-tank Future Advocacy, slams the government for failing to prepare for the rapid change looming. Researchers said the results are 'startling' and told ministers to urgently look at new education and training to help the country adapt to the challenge.
Yesterday I had the chance to meet Andreas Liebl, partner at UnternehmerTUM, who launched a programme called Applied.AI, to which people from across the world are invited to apply. If you are an engineer or a professional and wish to launch an application or gain expertise in the field of AI, this is the programme for you. The institute was founded by entrepreneur Susanne Klatten in 2002 and is supported by several corporations such as Nvidia, Mercedes-Benz and SAP, to name a few. What I found interesting about the institute is that not only do they train you, but they can also support you, should you wish to start up. The institute has a venture capital arm that can facilitate investments in startups with promising ideas.
- Blocks, a Theano framework for training neural networks.
- Caffe, a deep learning framework made with expression, speed, and modularity in mind. It can model arbitrary layer connectivity and network depth: any directed acyclic graph of layers will do. Training is done using the back-propagation algorithm.
- ConvNet, a MATLAB-based convolutional neural network toolbox -- a type of deep learning that can learn useful features from raw data by itself.
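All of these frameworks train networks via back-propagation, as noted above. As a rough illustration of what that algorithm does (not tied to any of the listed toolkits), here is a minimal hand-rolled sketch in NumPy: a two-layer network learning XOR by chaining gradients backwards through the layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR with a tiny two-layer network.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> 4 hidden units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> 1 output

losses = []
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(np.mean((out - y) ** 2))

    # Backward pass: apply the chain rule from the squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)
```

The frameworks above automate exactly this gradient bookkeeping (and much more) for arbitrary layer graphs, so you never write the backward pass by hand.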
As an Indian guy living in the US, I have a constant flow of money between home and me. If the USD is stronger in the market, the Indian rupee (INR) goes down, so a person in India pays more rupees to buy a dollar. If the dollar is weaker, you spend fewer rupees to buy the same dollar. If one can predict how much a dollar will cost tomorrow, that can guide one's decision making and be very important in minimizing risks and maximizing returns. Looking at the strengths of a neural network, especially a recurrent neural network, I came up with the idea of predicting the exchange rate between the USD and the INR.
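The core idea behind using a recurrent network here is that each day's rate is fed in one step at a time while a hidden state carries a summary of the history. A minimal sketch of that recurrence in NumPy (the rates are made-up numbers and the weights are untrained; this only illustrates the mechanics, not the author's actual model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily USD/INR closing rates (made up for illustration)
rates = np.array([64.2, 64.5, 64.1, 64.8, 65.0, 64.7, 65.2])

# A vanilla RNN cell: h_t = tanh(x_t @ W_x + h_{t-1} @ W_h)
hidden = 8
W_x = rng.normal(scale=0.1, size=(1, hidden))   # input -> hidden
W_h = rng.normal(scale=0.1, size=(hidden, hidden))  # hidden -> hidden
W_o = rng.normal(scale=0.1, size=(hidden, 1))   # hidden -> predicted rate

h = np.zeros(hidden)
for r in rates[:-1]:                 # feed the history one day at a time
    h = np.tanh(np.array([r]) @ W_x + h @ W_h)

prediction = (h @ W_o).item()        # untrained guess at tomorrow's rate
```

In practice the weights would be learned by minimizing prediction error over historical sequences (back-propagation through time), typically with an LSTM or GRU cell rather than this plain tanh recurrence.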
Last week I was fortunate enough to have attended the RE•WORK Deep Learning Summit Montreal (October 10 & 11), and was able to take in a number of quality talks and meet with other attendees. The conference was split into 2 tracks -- Research Advancements and Business Applications -- and featured a wide array of top neural networks researchers and academics, as well as business leaders. An interesting mix of both industry and academic, RE•WORK did more than enough to prove their professionalism and attention to detail, and this is without mentioning the calibre of speakers they secured for the event. What follows is a summary of some of my favorite talks from the conference, with this selection revolving around the visual reasoning & computer vision blocks which started the conference off. A full listing of the speakers and schedule can be found here.
In this step-by-step Keras tutorial, you'll learn how to build a convolutional neural network in Python! In fact, we'll be training a classifier for handwritten digits that boasts over 99% accuracy on the famous MNIST dataset. Before we begin, we should note that this guide is geared toward beginners who are interested in applied deep learning. Our goal is to introduce you to one of the most popular and powerful libraries for building neural networks in Python. That means we'll brush over much of the theory and math, but we'll also point you to great resources for learning those.
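Before diving into Keras layers, it helps to see what a convolutional layer actually computes: a small kernel slides over the image and produces a weighted sum at each position. Here is a hand-rolled 2-D convolution in NumPy on a toy 5x5 "digit" patch (illustrative only; the tutorial itself builds this with Keras `Conv2D` layers):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: the core op of a convolutional layer."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 5x5 patch with a vertical stroke, like part of a handwritten "1"
image = np.array([
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
], dtype=float)

# A vertical-edge kernel: responds strongly where intensity changes left-to-right
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=float)

feature_map = conv2d(image, kernel)   # shape (3, 3)
```

A trained network learns many such kernels per layer, each producing its own feature map; stacking these layers is what lets a CNN pick out strokes, loops, and eventually whole digits.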
With the world's biggest collection of open source data, GitHub's Data Science Team has just started exploring how we can use machine learning to make the developer experience better. I see machine learning shaping experiences around me every day, and I'm excited about what's to come in applying it to create more useful, predictive technologies. In this collection, I'll share the basics of machine learning, along with some related resources and projects for people who are getting started with it. Machine learning is the study of algorithms that use data to learn, generalize, and predict. What makes machine learning exciting is that with more data, the algorithm improves its prediction.
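That last claim -- more data improves prediction -- shows up in even the simplest learner. A minimal sketch with a one-parameter least-squares fit on synthetic data (all numbers are made up for illustration): the fitted slope gets closer to the true value as the sample size grows.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_slope(n):
    """Fit y = w*x by least squares on n noisy samples of the true y = 3x."""
    x = rng.uniform(-1, 1, n)
    y = 3.0 * x + rng.normal(scale=0.5, size=n)
    return (x @ y) / (x @ x)          # closed-form least-squares solution

def avg_error(n, trials=200):
    """Average distance of the fitted slope from the true slope (3.0)."""
    return np.mean([abs(fit_slope(n) - 3.0) for _ in range(trials)])

err_small = avg_error(10)        # slope learned from 10 samples
err_large = avg_error(10_000)    # slope learned from 10,000 samples
```

With 10 samples the slope estimate wanders noticeably; with 10,000 it sits very close to the true value of 3 -- the same data-hunger, at vastly larger scale, is what drives modern machine learning systems.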