NVIDIA Targets Next AI Frontiers: Inference And China

#artificialintelligence

NVIDIA's meteoric growth in the datacenter, where its business is now generating some $1.6B annually, has been largely driven by the demand to train deep neural networks for Machine Learning (ML) and Artificial Intelligence (AI), an area where the computational requirements are simply mind-boggling. Much of this business comes from the largest datacenters in the US, including Amazon, Google, Facebook, IBM, and Microsoft. Recently, NVIDIA announced new technology and customer initiatives at its annual Beijing GTC event to help drive revenue in the inference market for Machine Learning, as well as to solidify the company's position in the huge Chinese AI market. For those unfamiliar, inference is where the trained neural network is used to predict and classify new sample data. The inference market will likely end up larger, in terms of chip unit volumes, than the training market; after all, once you train a neural network, you probably intend to use it, and use it a lot.
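
To make the training-versus-inference distinction concrete, here is a minimal sketch (using PyTorch purely as an illustration; the toy model and random data are hypothetical, not anything NVIDIA ships). The expensive part is the training loop with backpropagation, done once on datacenter hardware; inference is a single cheap forward pass that production services then repeat at enormous volume.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier: 4 input features -> 2 classes.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# --- Training: compute-heavy, done once (typically on datacenter GPUs) ---
x_train = torch.randn(256, 4)          # made-up training data
y_train = torch.randint(0, 2, (256,))  # made-up labels
for _ in range(100):                   # many passes over the data
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), y_train)
    loss.backward()                    # backpropagation: the costly step
    optimizer.step()

# --- Inference: one forward pass per query, repeated constantly in production ---
model.eval()
with torch.no_grad():                  # no gradients needed to serve predictions
    sample = torch.randn(1, 4)
    predicted_class = model(sample).argmax(dim=1)
    print(predicted_class.item())
```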


Microsoft Cranks AI Efforts Up To 11

Forbes - Tech

Microsoft is justifiably proud of its hardware/software co-design approach to accelerating a wide range of data center workloads using Intel FPGAs. The company recently shared some progress in this area and subsequently announced its acquisition of the AI startup Bonsai to ease the on-ramp for building AI on Microsoft's platform. These advances further strengthen Microsoft's AI strategy and warrant further analysis. I believe the company is very well positioned to lead the penetration of AI into the enterprise market, where its productivity software and cloud success give it a springboard for growth. The Brainwave project uses large arrays of Intel FPGAs to accelerate deep neural network (DNN) inference processing for search, ad targeting, facial recognition, and more.


Take your machine-learning workloads to the edge? Yes, says Intel

#artificialintelligence

Sponsored: Artificial intelligence and machine learning hold out the promise of enabling businesses to work smarter and faster, by improving and streamlining operations or offering firms the chance to gain a competitive advantage over their rivals. But where is it best to host such applications: in the cloud, or locally, at the edge? Despite all the hype, it is early days for the technologies that we loosely label "AI", and many organisations lack the expertise and resources to really take advantage of them. Machine learning and deep learning often require teams of experts, for example, as well as access to large data sets for training and specialised infrastructure with a considerable amount of processing power. This is one reason many firms start in the cloud: cloud service providers have a wealth of development tools and other resources readily available, such as pre-trained deep neural networks for voice, text, image, and translation processing, according to Moor Insights & Strategy Senior Analyst Karl Freund.


AI Hardware: Harder Than It Looks

#artificialintelligence

The second AI HW Summit took place in the heart of Silicon Valley on September 17-18, with nearly fifty speakers presenting to over 500 attendees (almost twice the size of last year's inaugural audience). While I cannot possibly cover all the interesting companies on display in a short blog, there are a few observations I'd like to share. Computer architecture legend John Hennessy, Chairman of Alphabet and former President of Stanford University, set the stage for the event by describing how historical semiconductor trends, including the untimely demise of Moore's Law and Dennard scaling, led to the demand and opportunity for "Domain-Specific Architectures." This "DSA" concept applies not only to novel hardware designs but also to the new software architecture of deep neural networks. The challenge is to create and train massive neural networks and then optimize those networks to run efficiently on a DSA, be it a CPU, GPU, TPU, ASIC, FPGA, or ACAP, for "inference" processing of new input data.
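
As a concrete, hedged illustration of that last step (my own sketch, not anything presented at the summit): frameworks such as PyTorch let you take a trained model and apply post-training quantization so the heavy matrix math runs in 8-bit integer arithmetic, which is exactly the kind of transformation inference-oriented DSAs are built to exploit. The model below is a hypothetical stand-in; TPU, FPGA, and ASIC targets use analogous vendor-specific compiler and quantization flows.

```python
import torch
import torch.nn as nn

# Hypothetical trained model standing in for a "massive" network.
trained_model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
trained_model.eval()

# Post-training dynamic quantization: weights of the Linear layers are stored
# and multiplied as 8-bit integers, shrinking the model and speeding up
# inference on CPUs without retraining.
quantized_model = torch.quantization.quantize_dynamic(
    trained_model, {nn.Linear}, dtype=torch.qint8
)

# Inference with the optimized model: one cheap forward pass per query.
with torch.no_grad():
    sample = torch.randn(1, 128)
    print(quantized_model(sample).argmax(dim=1).item())
```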


Intel Unveils FPGA to Accelerate Neural Networks

#artificialintelligence

Intel today unveiled new hardware and software targeting the artificial intelligence (AI) market, which has emerged as a focus of investment for the largest data center operators. The chipmaker introduced an FPGA accelerator that offers more horsepower for companies developing new AI-powered services. The Intel Deep Learning Inference Accelerator (DLIA) combines traditional Intel CPUs with field programmable gate arrays (FPGAs), semiconductors that can be reprogrammed to perform specialized computing tasks. FPGAs allow users to tailor compute power to specific workloads or applications. The DLIA is the first hardware product emerging from Intel's $16.7 billion acquisition of Altera last year.