A few months ago, I published a blog post that highlighted Qualcomm's plans to enter the data center market with the Cloud AI 100 chip sometime next year. While preparing the post, our founder and principal analyst, Patrick Moorhead, called to point out that Qualcomm, not NVIDIA, probably has the largest market share in AI chip volume thanks to its leadership in devices for smartphones. It turns out we were both right; it just depends on what you are counting. In the mobile and embedded space, Qualcomm powers hundreds of consumer and embedded devices running AI; it has shipped well over one billion Snapdragons and counting, all of which support some level of AI today. In the data center, however, NVIDIA likely has well over 90% share of the market for training.
Micron Technology introduced what it claims is the world's fastest solid state drive (SSD), and also announced the acquisition of FWDNXT (pronounced "forward next"), a startup that specializes in neural networking with a product lineup that includes a series of inference engine modules based on Xilinx FPGAs. The announcements were made at the Micron Insight conference held at Pier 27 in San Francisco. The conference focused on accelerating intelligent systems by improving the speed of data access and analysis in edge devices. Micron CEO Sanjay Mehrotra said in his opening remarks that the company shipped 6 million wafers (including DRAM, 3D XPoint, and NAND) in fiscal year 2019, which translates into roughly 3 billion solutions for game systems, mobile phones, Internet of Things applications, smart factories, and more. Sumit Sadana, executive vice president and chief commercial officer of Micron, then unveiled several heavyweight products at the conference.
Over the last few years, the computational power of mobile devices such as smartphones and tablets has grown dramatically, reaching the level of desktop computers available not long ago. While standard smartphone apps are no longer a problem for them, there is still a group of tasks that can easily challenge even high-end devices, namely running artificial intelligence algorithms. In this paper, we present a study of the current state of deep learning in the Android ecosystem and describe the available frameworks, programming models, and the limitations of running AI on smartphones. We give an overview of the hardware acceleration resources available on the four main mobile chipset platforms: Qualcomm, HiSilicon, MediaTek, and Samsung. Additionally, we present real-world performance results for different mobile SoCs collected with AI Benchmark, covering all main existing hardware configurations.
Artificial intelligence is permeating everybody's lives through the face recognition, voice recognition, image analysis and natural language processing capabilities built into their smartphones and consumer appliances. Over the next several years, most new consumer devices will run AI natively, locally and, to an increasing extent, autonomously. But there's a problem: traditional processors in most mobile devices aren't optimized for AI, which tends to consume a lot of processing, memory, data and battery on these resource-constrained devices. As a result, AI has tended to execute slowly on mobile and "internet of things" endpoints, while draining their batteries rapidly, consuming inordinate wireless bandwidth and exposing sensitive local information as data makes roundtrips to and from the cloud. That's why mass-market mobile and IoT edge devices are increasingly coming equipped with systems-on-a-chip that are optimized for local AI processing.
Not surprisingly, this year's smartphones feature faster processors than those from last year--that happens every year. But what is new this year is the predominance of machine learning features that just about every processor vendor is touting as a way of differentiating its devices. This is true for the phone vendors who design their own chips, the independent or merchant chip vendors who sell processors to phone vendors, and even the IP makers who design the cores that go into the processors themselves. First, a little background: all modern application processors include designs (often referred to as intellectual property, or IP) from other companies, notably firms like ARM, Imagination Technologies, MIPS, and Ceva. Such IP can appear in various forms--for example, ARM sells everything from a basic license for its 32-bit and 64-bit architecture, to specific cores for CPUs, graphics, image processing, etc., that chip designers can then use to create processors.
Just before the Mobile World Congress (MWC) confab, where techies meet to enjoy tapas in Barcelona while learning about all things mobile, Qualcomm announced a new software suite. These new offerings seek to enable AI capabilities on existing Snapdragon mobile platforms. While most training of deep neural networks is done in the cloud on NVIDIA GPUs, inference with these networks can often be run on a mobile CPU--especially if that CPU is more than just a CPU. For Qualcomm, this is a statement of the company's intent to compete with Apple on smarter apps and smarter phones.
Samsung has hinted that the Galaxy S9 might include more advanced face recognition, and we're now getting clues as to what's involved. SamCentral's sleuthing in the settings APK for the Galaxy Note 8's Oreo beta has uncovered a hidden Intelligent Scan feature that uses camera-based face detection and the iris scanner in tandem for "better accuracy and security" and improved results in "low or very bright" lighting. Given that the iris scanning on the S8 and Note 8 can be finicky, this could deliver a much more consistent experience when you're unlocking your phone or accessing secure info.
Samsung Electronics is expected to unveil its highly anticipated Galaxy S9 flagship phone next month. Ahead of its launch, the South Korean tech giant has announced its new Exynos 9 Series 9810 processor, which features a powerful custom CPU, a fast gigabit LTE modem and sophisticated deep learning capabilities.
When Apple CEO Tim Cook introduced the iPhone X on Tuesday, he claimed it would "set the path for technology for the next decade." Some new features are superficial: a near-borderless OLED screen and the elimination of the traditional home button. Deep inside the phone, however, is an innovation likely to become standard in future smartphones, and crucial to the long-term dreams of Apple and its competitors. That feature is the "neural engine," part of the new A11 processor that Apple developed to power the iPhone X. The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.