Ambarella, Inc., a Santa Clara, California-based AI vision silicon company, is supplying its CV25AQ CVflow AI vision processor to Great Wall Motor Co. Ltd. for the WEY Mocha crossover SUV. The system on chip (SoC) supports a variety of simultaneous multi-camera channel combinations for recording and/or in-cabin sensing, with the entire system meeting Euro NCAP 2025 safety standards. The AEC-Q100 Grade 2 automotive-qualified Ambarella CV25AQ will serve as part of Great Wall's "Coffee Intelligence" AI driving platform, which comprises Intelligent Cockpit Systems, Intelligent Drive, and Intelligent Automotive Electronic and Electrical Architecture Technology. WEY is Great Wall Motors' premium brand, named after company founder Wei Jianjun. "Ambarella and GWM have a strong history of successful collaboration, with several generations of vision systems already in production for a variety of car models," said Fermi Wang, CEO of Ambarella.
LAS VEGAS--(BUSINESS WIRE)--Ambarella, Inc. (Nasdaq: AMBA), an artificial intelligence (AI) vision silicon company, today announced that Ambarella and Amazon Web Services, Inc. (AWS) customers can now use Amazon SageMaker Neo to train machine learning (ML) models once and run them on any device equipped with an Ambarella CVflow-powered AI vision system on chip (SoC). Until now, developers had to manually optimize ML models for devices based on Ambarella AI vision SoCs, a step that could add considerable delay and error to the application development process. Ambarella and AWS collaborated to simplify the process by integrating the Ambarella toolchain with the Amazon SageMaker Neo cloud service. Now, developers can simply bring their trained models to Amazon SageMaker Neo and automatically optimize them for Ambarella CVflow-powered SoCs.
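The workflow described above can be sketched in code. This is a minimal, hedged illustration of assembling a SageMaker Neo compilation-job request that targets an Ambarella SoC: the bucket paths, IAM role ARN, and model framework are placeholders, and `"amba_cv25"` is assumed here to be one of the Ambarella target devices that SageMaker Neo's `CreateCompilationJob` API accepts.

```python
# Sketch: preparing an Amazon SageMaker Neo compilation job for an
# Ambarella CVflow SoC. All S3 URIs and the role ARN are placeholders.

def build_neo_request(job_name: str) -> dict:
    """Assemble a CreateCompilationJob request dict (not yet submitted)."""
    return {
        "CompilationJobName": job_name,
        # Placeholder IAM role that would grant Neo access to the S3 buckets.
        "RoleArn": "arn:aws:iam::123456789012:role/NeoCompilationRole",
        "InputConfig": {
            # Placeholder: trained model artifact uploaded by the developer.
            "S3Uri": "s3://example-bucket/models/model.tar.gz",
            # Input tensor shape the model expects (example: 1x3x224x224).
            "DataInputConfig": '{"data": [1, 3, 224, 224]}',
            "Framework": "ONNX",  # assumption; Neo also accepts other frameworks
        },
        "OutputConfig": {
            # Placeholder: where Neo writes the CVflow-optimized model.
            "S3OutputLocation": "s3://example-bucket/compiled/",
            "TargetDevice": "amba_cv25",  # assumed Ambarella target identifier
        },
        "StoppingCondition": {"MaxRuntimeInSeconds": 900},
    }

request = build_neo_request("cvflow-demo-job")
# To actually submit (requires AWS credentials):
#   import boto3
#   boto3.client("sagemaker").create_compilation_job(**request)
```

Once the job completes, the compiled artifact in the output location would be deployed to the Ambarella device through the vendor's runtime tooling; the exact deployment step is device-specific and not shown here.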
Ambarella has begun sampling a 10nm "CV28M" camera SoC for edge AI that runs Linux on dual 1GHz Cortex-A53 cores and offers CVflow CNN processing, a 320MP/s ISP, 4Kp30 encoding, and security features. Five years have passed since we covered a new Ambarella camera SoC, which is a shame since it's so much fun saying "Ambarella." Since the announcement of its HD-ready, Cortex-A9-based S2Lm, we have mentioned the company in passing for its part in a Linux- and Jetson-driven Teal One drone, which uses Ambarella's 4K-ready, quad-A53 CV2. Now Ambarella has launched the CV28M, the latest in its CVflow family of AI-enabled computer vision processors. Like the other Ambarella SoCs, it is available with a Linux SDK and evaluation kit.
The AI chip company Ambarella made news at CES 2022 with its CV3 chip family, the latest addition to its CVflow line of scalable, power-efficient systems-on-chip for the automotive sector. According to Ambarella, the CV3 delivers the highest AI processing performance in the family's history, with up to 500 eTOPS, a 42-fold improvement over the company's previous automotive family. With up to 16 Arm Cortex-A78AE CPU cores, the CV3 boosts CPU performance by up to 30 times over the previous generation, making it well suited to autonomous vehicle (AV) software applications. This enables robust advanced driver assistance systems (ADAS) and Level 2 through Level 4 autonomous driving (AD) systems with greater environmental awareness, for both human viewing and machine perception, in demanding lighting, weather, and driving conditions.