New computational algorithms make it possible to build neural networks with many input nodes and many layers; it is this depth and scale that distinguishes the "deep learning" of these networks from previous work on artificial neural nets.
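To make "many input nodes and many layers" concrete, here is a minimal sketch of a forward pass through a small deep network in NumPy. The layer sizes and weights are illustrative, not drawn from any particular system.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Rectified linear unit, the most common hidden-layer nonlinearity.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Run x through a stack of (weights, bias) layers with ReLU between them."""
    for W, b in layers[:-1]:
        x = relu(x @ W + b)
    W, b = layers[-1]
    return x @ W + b  # linear output layer

# A "deep" network: 1000 input nodes, three hidden layers, one output node.
sizes = [1000, 256, 64, 16, 1]
layers = [(rng.standard_normal((m, n)) * 0.01, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal((4, 1000))  # a batch of 4 inputs
y = forward(x, layers)
print(y.shape)  # (4, 1): one prediction per input in the batch
```

Training such a network means adjusting every weight matrix by gradient descent; the recent algorithmic and hardware advances are what made that tractable at this depth and input width.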
With technology governing almost every aspect of our lives, industry experts are calling these modern times the "platinum age of innovation": we are on the threshold of discoveries that could change human society irreversibly, for better or worse. At the forefront of this revolution is the field of artificial intelligence (AI), a technology more vibrant than ever thanks to accelerating progress in machine learning - the process of giving computers the ability to learn without being explicitly programmed - and the realisation by big tech vendors of its potential. One major tech behemoth fuelling this fast-moving juggernaut called AI is Intel, a company that has long invested in the science and engineering of making computers more intelligent. The Californian company held an 'AI Day' in San Francisco showcasing a new strategy dedicated solely to AI, introducing AI-specific products and announcing investments in the development of AI-related technology. And Alphr were in town to hear all about it.
With growing interest in neural networks and deep learning, individuals and companies are claiming ever-increasing adoption rates of artificial intelligence in their daily workflows and product offerings. Coupled with the breakneck speed of AI research, this new wave of popularity shows a lot of promise for solving some of the harder problems out there. That said, I feel the field suffers from a gulf between appreciating these developments and subsequently deploying them to solve "real-world" tasks. A number of frameworks, tutorials and guides have popped up to democratize machine learning, but the steps they prescribe often don't align with the fuzzier problems that need to be solved. This post is a collection of questions, with some tentative (and possibly incorrect) answers, that are worth thinking about when applying machine learning in production.
DSSTNE (Deep Scalable Sparse Tensor Network Engine, pronounced "Destiny") is an open source software library, developed by Amazon, for training and deploying deep neural networks using GPUs. Amazon engineers built DSSTNE to solve deep learning problems at Amazon's scale: it is designed for production deployment of real-world deep learning applications, emphasizing speed and scale over experimental flexibility.
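DSSTNE describes networks declaratively in JSON configuration files rather than in code. The sketch below is illustrative only: the field names are recalled from the project's published examples and may not match the current schema exactly, so treat the repository's own documentation as authoritative.

```json
{
    "Version" : 0.7,
    "Name" : "ExampleNetwork",
    "Kind" : "FeedForward",
    "Layers" : [
        { "Name" : "Input",  "Kind" : "Input",  "N" : "auto", "Sparse" : true },
        { "Name" : "Hidden", "Kind" : "Hidden", "N" : 128, "Activation" : "Sigmoid" },
        { "Name" : "Output", "Kind" : "Output", "N" : "auto", "Activation" : "Sigmoid" }
    ],
    "ErrorFunction" : "ScaledMarginalCrossEntropy"
}
```

The sparse input layer is the notable part: DSSTNE's emphasis on sparse data is what lets it handle recommendation-style problems, where each input vector is huge but mostly empty, at production scale.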