When Apple CEO Tim Cook introduced the iPhone X on Tuesday, he claimed it would "set the path for technology for the next decade." Some new features are superficial: a near-borderless OLED screen and the elimination of the traditional home button. Deep inside the phone, however, is an innovation likely to become standard in future smartphones, and crucial to the long-term ambitions of Apple and its competitors. That feature is the "neural engine," part of the new A11 processor that Apple developed to power the iPhone X. The engine has circuits tuned to accelerate certain kinds of artificial-intelligence software, called artificial neural networks, that are good at processing images and speech.
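At its core, the kind of artificial neural network the neural engine accelerates is just layers of weighted sums passed through simple nonlinear functions. A minimal sketch below illustrates the idea with made-up weights and a fake flattened image; the sizes, values, and class count are all invented for illustration and have nothing to do with Apple's actual models.

```python
import math
import random

def relu(x):
    # Nonlinearity: pass positives through, clip negatives to zero.
    return [max(0.0, v) for v in x]

def softmax(x):
    # Turn raw scores into probabilities that sum to 1.
    m = max(x)
    e = [math.exp(v - m) for v in x]
    s = sum(e)
    return [v / s for v in e]

def dense(weights, bias, inputs):
    # One fully connected layer: each output is a weighted sum of all inputs.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, bias)]

random.seed(0)
image = [random.random() for _ in range(64)]  # fake 8x8 image, flattened
w1 = [[random.gauss(0, 0.1) for _ in range(64)] for _ in range(16)]
w2 = [[random.gauss(0, 0.1) for _ in range(16)] for _ in range(10)]

hidden = relu(dense(w1, [0.0] * 16, image))
probs = softmax(dense(w2, [0.0] * 10, hidden))
print(probs)  # ten made-up "class" probabilities, summing to ~1.0
```

Dedicated hardware like the neural engine speeds up exactly these multiply-accumulate operations, which dominate the cost of image and speech models.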
A few years ago--the company won't say exactly when--some engineers at Apple began to think the iPhone's camera could be made smarter using newly powerful machine learning algorithms known as neural networks. Before long, they were talking with a lean vice president named Tim Millet. Millet leads a team of chip architects, who got to work. When the iPhone X was unveiled last fall, Apple's camera team had added a slick new portrait mode that can digitally adjust the lighting on subjects' faces and artfully blur the background. It took advantage of a new module added to the iPhone's main chip, called the neural engine, which is customized to run machine learning code.
The three new iPhones Apple unveiled at its glassy circular headquarters on Wednesday closely resemble last year's iPhone X. But while the design has barely changed, the devices' computational powers have received an invisible yet more significant upgrade: a new chip built to run the AI algorithms that help the phones understand the world around them. Apple says these improvements enable slicker camera effects and augmented reality experiences. And for the first time, the company is allowing non-Apple developers to run their own algorithms on its AI-specific hardware, which should seed the App Store with new experiences for getting things done, socializing, and creating art.
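The pattern being opened to developers here is hardware dispatch: hand the system a trained model and let it choose the best available processor. Apple's real framework for this is Core ML; the sketch below is a hypothetical, framework-agnostic illustration of the idea, and every class and function name in it is invented, not Apple's API.

```python
# Hypothetical sketch of dispatching a model to the best available
# hardware. All names here are invented for illustration; they are
# not Apple's Core ML API.

class CPUBackend:
    name = "cpu"
    def run(self, model, inputs):
        return f"{model} ran on CPU with {inputs}"

class NeuralEngineBackend:
    name = "neural_engine"
    def run(self, model, inputs):
        return f"{model} ran on neural_engine with {inputs}"

def best_backend(available):
    # Prefer the dedicated accelerator when present: AI-specific
    # silicon runs neural-network math faster and at lower power.
    for backend in available:
        if backend.name == "neural_engine":
            return backend
    return available[0]

backends = [CPUBackend(), NeuralEngineBackend()]
result = best_backend(backends).run("face_filter_model", "camera_frame")
print(result)
```

The useful property of this design is graceful fallback: the same model code runs on an older phone with no accelerator, just more slowly.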
To build and run machine learning services, you need computing power and data, and the more you have of each, the more powerful your software can be. Image recognition works particularly well on mobile devices, says Song Han, a Stanford University graduate student who works on compressing neural networks. He developed one such compressed system, which helps Facebook's augmented reality platform track objects. And Qualcomm, the leading chipmaker for Android devices, has for some time been working on hardware tricks to speed up neural networks on mobile devices.
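Compressing a neural network for a phone typically combines two standard ideas from work like Han's: pruning near-zero weights, then quantizing the survivors to a handful of bits. A minimal sketch, using a made-up list of weights rather than a real model:

```python
# Two standard compression ideas, sketched on made-up weights:
# 1) prune weights near zero, 2) quantize the rest to 8-bit integers.

weights = [0.91, -0.02, 0.44, 0.003, -0.76, 0.05, -0.33, 0.0]

# 1. Pruning: zero out weights below a magnitude threshold.
threshold = 0.1
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

# 2. Quantization: map each surviving weight to one of 256 integer
#    levels, so it can be stored in a single byte.
scale = max(abs(w) for w in pruned) / 127
quantized = [round(w / scale) for w in pruned]    # values fit in int8
restored = [q * scale for q in quantized]         # approximate floats

print(pruned)
print(quantized)
print(max(abs(a - b) for a, b in zip(pruned, restored)))  # small error
```

Zeroed weights compress well and can be skipped at inference time, and 8-bit integer math is far cheaper than 32-bit floating point, which is why compressed models fit the power and memory budgets of mobile chips.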