The rapid adoption of artificial intelligence (AI) for practical business applications has introduced uncertainties and risk factors across virtually every industry, but one fact is certain: in today's AI market, hardware is the key to solving many of the sector's biggest challenges, and chipsets are at the heart of that hardware solution. Given the widespread applicability of AI, it is almost certain that every chip in the future will have some sort of AI engine embedded, ranging from a simple AI library running on a CPU to sophisticated custom hardware. The potential of AI is best fulfilled when chipsets are optimized to provide the appropriate compute capacity at the right power budget for specific AI applications, a trend that is driving increasing specialization and diversification in AI-optimized chipsets. Over the past two years, the deep learning chipset market has undergone a dramatic period of evolution, led by NVIDIA and Intel.
Artificial intelligence (AI) and deep learning have generated a lot of excitement over the past few years, and many semiconductor startups have emerged to build chipsets optimized for AI. They are tackling compute, communication, and memory problems specific to accelerating AI algorithms, and building highly optimized architectures that promise low power and high performance. Nervana, founded in 2014, was perhaps the first company to build a chipset specifically for AI; it planned to sell cloud services based on its chipsets rather than selling the application-specific integrated circuits (ASICs) themselves.
The opportunity for AI accelerator chips is much hyped, but how big is the market, and which companies are actually selling chips today? EETimes spoke to the reports' author, Principal Analyst Lian Jye Su, to gain some insight into which companies and technologies are making inroads into this potentially lucrative market.

AI in the Cloud

The first report, "Cloud AI Chipsets: Market Landscape and Vendor Positioning," highlights how cloud AI inference and training services are growing rapidly. The resulting AI chipset market is expected to grow from US$4.2 billion in 2019 to US$10 billion in 2024. NVIDIA and Intel, the current leaders in this space, are being challenged by companies including Cambricon Technologies, Graphcore, Habana Labs, and Qualcomm.
The principal tasks of artificial intelligence (AI) are training and inferencing. The former is a data-intensive process that prepares AI models for production applications. Training an AI model ensures that it can perform its designated inferencing task, such as recognizing faces or understanding human speech, accurately and in an automated fashion. Inferencing is big business and is set to become the biggest driver of growth in AI. McKinsey has predicted that by 2025 the opportunity for AI inferencing hardware in the data center will be twice that of AI training hardware ($9 billion to $10 billion, versus $4 billion to $5 billion today).
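The training-versus-inference split described above can be made concrete with a toy sketch. The example below is hypothetical plain Python (not any vendor's framework or API): "training" iteratively fits model parameters to data and is compute-intensive, while "inference" is a single cheap application of the fixed parameters to a new input, which is why inference workloads dominate once a model is deployed at scale.

```python
# Toy illustration of the two AI workloads: training fits parameters
# (compute-heavy, done once), inference applies them (cheap, done at scale).

def train(data, lr=0.05, epochs=500):
    """Fit y = w*x + b to (x, y) pairs by gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):  # many passes over the data: the expensive phase
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(params, x):
    """Apply the trained parameters to one new input: a single cheap pass."""
    w, b = params
    return w * x + b

# Train once on examples of y = 2x + 1, then run inference on unseen input.
params = train([(0, 1), (1, 3), (2, 5), (3, 7)])
print(infer(params, 10))  # close to 21
```

In production the same asymmetry holds: a model is trained once on a large dataset, then served millions of inference requests, which is why the paragraph above projects inference hardware overtaking training hardware.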