Deep Learning


Don't fall for the AI hype: Here are the ingredients you need to build an actual useful thing

#artificialintelligence

Artificial intelligence these days is sold as if it were a magic trick. Data is fed into a neural net – or black box – as a stream of jumbled numbers, and voilà! It comes out the other side completely transformed, like a rabbit pulled from a hat. That's possible in a lab, or even on a personal dev machine, with carefully cleaned and tuned data. However, it takes a lot, an awful lot, of effort to scale machine-learning algorithms up to something resembling a multi-user service – something useful, in other words.


Artificial intelligence is now Intel's major focus

#artificialintelligence

At the forefront of these AI ambitions is a new platform called Nervana, which follows Intel's acquisition of deep-learning startup Nervana Systems earlier this year. Setting its sights on an area currently dominated by Nvidia's graphics processing unit (GPU) technology, one of the Nervana platform's main focuses will be deep learning and the training of neural networks – the software machinery behind machine learning, built on algorithms that attempt to model high-level abstractions in data. Google, for instance, is investing heavily in research exploring virtually all aspects of machine learning, including deep learning and more classical algorithms, something it calls "Machine Intelligence". One area where the impact is already visible is manufacturing, as intelligent computer systems replace certain human-operated jobs.
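To make "training neural networks" concrete, here is a minimal sketch using PyTorch (one framework among many; it is neither Intel's Nervana stack nor Google's internal tooling, and the data is random toy data). It shows the core loop the excerpt alludes to: stacked layers whose weights are repeatedly adjusted so the model's outputs move closer to known targets.

```python
# Minimal illustration of training a small neural network (assumed toy data).
import torch
from torch import nn, optim

# Toy data: 100 samples with 8 features each, and one scalar target per sample.
x = torch.randn(100, 8)
y = torch.randn(100, 1)

# A small feed-forward network; deeper stacks of such layers are what lets
# deep learning model higher-level abstractions in the data.
model = nn.Sequential(
    nn.Linear(8, 16),
    nn.ReLU(),
    nn.Linear(16, 1),
)

loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

for epoch in range(20):
    optimizer.zero_grad()          # clear gradients from the previous step
    loss = loss_fn(model(x), y)    # how far predictions are from the targets
    loss.backward()                # compute gradients via backpropagation
    optimizer.step()               # nudge the weights to reduce the loss
```

GPUs (and accelerators such as Nervana's) speed up exactly this kind of repeated matrix arithmetic, which is why the hardware race matters for training at scale.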


Artificial Intelligence is now Intel's major focus

#artificialintelligence

With technology governing almost every aspect of our lives, industry experts are defining these modern times as the "platinum age of innovation", verging on the threshold of discoveries that could change human society irreversibly, for better or worse. At the forefront of this revolution is the field of artificial intelligence (AI), a technology that is more vibrant than ever due to the acceleration of technological progress in machine learning – the process of giving computers the ability to learn without being explicitly programmed – as well as the realisation by big tech vendors of its potential. One major tech behemoth fuelling the fire of this fast-moving juggernaut called AI is Intel, a company that has long invested in the science and engineering of making computers more intelligent. The Californian company held an 'AI Day' in San Francisco showcasing its new strategy dedicated solely to AI, with the introduction of new AI-specific products, as well as investments for the development of specific AI-related tech. And Alphr were in town to hear all about it.
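The phrase "learning without being explicitly programmed" can be illustrated with a deliberately tiny sketch: instead of a developer hard-coding the rule that maps inputs to outputs, the rule's parameters are estimated from example data. The numbers below are made up for the illustration and have nothing to do with Intel's products.

```python
# Hand-coded rule vs. a rule whose parameters are learned from examples.
import numpy as np

# Hypothetical example data: hours of machine use vs. observed energy cost.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
cost = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Explicit programming: a developer guesses and hard-codes the relationship.
def cost_rule(h):
    return 2.0 * h  # fixed rule, never improves with new observations

# Machine learning in its simplest form: estimate the relationship
# from the examples themselves via least-squares fitting.
slope, intercept = np.polyfit(hours, cost, deg=1)

print(cost_rule(6.0))              # output of the hand-coded rule
print(slope * 6.0 + intercept)     # output of the learned model
```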


Questions To Ask When Moving Machine Learning From Practice to Production

#artificialintelligence

With growing interest in neural networks and deep learning, individuals and companies are claiming ever-increasing adoption rates of artificial intelligence into their daily workflows and product offerings. Coupled with the breakneck speed of AI research, this new wave of popularity shows a lot of promise for solving some of the harder problems out there. That said, I feel that this field suffers from a gulf between appreciating these developments and subsequently deploying them to solve "real-world" tasks. A number of frameworks, tutorials and guides have popped up to democratize machine learning, but the steps they prescribe often don't align with the fuzzier problems that need to be solved. This post is a collection of questions (with some tentative, possibly incorrect, answers) that are worth thinking about when applying machine learning in production.


Moving machine learning from practice to production

#artificialintelligence

With growing interest in neural networks and deep learning, individuals and companies are claiming ever-increasing adoption rates of artificial intelligence into their daily workflows and product offerings. Spending some time on planning your infrastructure, standardizing setup and defining workflows early on can save valuable time with each additional model that you build. After building, training and deploying your models to production, the task is still not complete unless you have monitoring systems in place. Periodically saving production statistics (data samples, predicted results, outlier specifics) has proven invaluable in performing analytics (and error postmortems) over deployments.
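As a rough sketch of that monitoring idea, every production prediction can be logged with its inputs, its output, and a simple outlier flag, so the records can be analysed later for drift checks and error postmortems. The code assumes a scikit-learn-style model with a predict method; the log path and the z-score threshold are placeholders, not prescriptions.

```python
# Sketch: log each production prediction for later analytics and postmortems.
import json
import time

LOG_PATH = "predictions.log"       # hypothetical log destination
OUTLIER_THRESHOLD = 3.0            # assumed cut-off, in standard deviations

def predict_and_log(model, features, train_mean, train_std):
    prediction = model.predict([features])[0]

    # Flag inputs that fall far outside the range seen during training.
    z_scores = [(f - m) / s for f, m, s in zip(features, train_mean, train_std)]
    is_outlier = any(abs(z) > OUTLIER_THRESHOLD for z in z_scores)

    record = {
        "timestamp": time.time(),
        "features": features,
        "prediction": float(prediction),
        "outlier": is_outlier,
    }
    with open(LOG_PATH, "a") as log_file:
        log_file.write(json.dumps(record) + "\n")

    return prediction
```

In a real deployment the records would typically go to a structured store or metrics pipeline rather than a local file, but the principle is the same: keep enough context per prediction to reconstruct what the model saw and did.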


Amazon Joins Tech Giants in Open Sourcing a Key Machine Learning Tool

#artificialintelligence

"DSSTNE (pronounced "Destiny") is an open source software library for training and deploying deep neural networks using GPUs. Amazon engineers built DSSTNE to solve deep learning problems at Amazon's scale. DSSTNE is built for production deployment of real-world deep learning applications, emphasizing speed and scale over experimental flexibility. "Deep Scalable Sparse Tensor Network Engine, (DSSTNE), pronounced "Destiny", is an Amazon developed library for building Deep Learning (DL) machine learning (ML) models.