Ceremorphic Touts Its HPC/AI Silicon Technology as It Exits Stealth

#artificialintelligence

In a market still filling with fledgling silicon chip makers, Ceremorphic, Inc. has exited stealth and is telling the world about its patented ThreadArch multi-threaded processor technology, which is intended to help improve next-generation supercomputers. Venkat Mattela, founder and CEO of Ceremorphic, calls his latest chip design a Hierarchical Learning Processor (HLP), even though several technology analysts said they recognize it as a system-on-a-chip (SoC) design. The company's goal is to design, benchmark and market a new kind of ultra-low-power AI training chip. "What we are trying to solve is today – everybody knows how to do higher performance – you can buy an Nvidia machine," Mattela told HPCwire. "Can we have the highest performance in a reliable way? Architecture is how we achieve it," by using multiple processors and multiple logic designs and mixing and matching them all.
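Ceremorphic has not published ThreadArch's internals, but the general idea behind hardware multi-threading (hiding one thread's stalls by switching to another ready thread) can be sketched in a few lines. The following is a purely hypothetical, simplified barrel-processor-style model, not Ceremorphic's design; all names and numbers are illustrative.

```python
from collections import deque

# Hypothetical sketch of latency hiding via hardware multi-threading,
# in the spirit of a barrel processor. This is NOT Ceremorphic's
# ThreadArch, whose details are unpublished.

def run_barrel(threads, cycles):
    """Issue one instruction per cycle, round-robin over ready threads.

    `threads` maps a thread id to a list of instruction latencies;
    a latency > 1 stalls that thread while others keep the pipeline busy.
    """
    ready = deque(threads)            # round-robin queue of ready thread ids
    stalled = {}                      # thread id -> cycles until ready again
    pc = {t: 0 for t in threads}      # per-thread program counter
    issued = 0

    for _ in range(cycles):
        # Wake any threads whose stall has expired.
        for t in list(stalled):
            stalled[t] -= 1
            if stalled[t] == 0:
                del stalled[t]
                ready.append(t)
        if not ready:
            continue                  # pipeline bubble: every thread stalled
        t = ready.popleft()
        if pc[t] >= len(threads[t]):
            continue                  # this thread has finished its program
        latency = threads[t][pc[t]]
        pc[t] += 1
        issued += 1
        if latency > 1:
            stalled[t] = latency - 1  # e.g. waiting on a memory load
        else:
            ready.append(t)
    return issued

# Four threads, each alternating 1-cycle ALU ops with 4-cycle loads:
# interleaving them keeps the single pipeline close to fully utilized.
work = {t: [1, 4] * 8 for t in range(4)}
print(run_barrel(work, cycles=64), "instructions issued in 64 cycles")
```

A single thread running the same instruction mix would leave the pipeline idle for three of every five cycles; interleaving several threads fills those bubbles, which is the usual argument for multi-threaded designs in power-constrained parts.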


basicmi/AI-Chip

#artificialintelligence

At Hot Chips 2019, Intel revealed new details of its upcoming high-performance artificial intelligence (AI) accelerators: the Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging, Intel Optane DC persistent memory and chiplet technology for optical I/O.

Myriad X is the first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for running on-device deep neural network applications. Interfacing directly with other key components via the intelligent memory fabric, the Neural Compute Engine delivers industry-leading performance per watt without the data-flow bottlenecks common to other architectures.

Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), announced that it is bringing the company's AI expertise to the cloud with the Qualcomm Cloud AI 100. Built from the ground up to meet the explosive demand for AI inference processing in the cloud, the Qualcomm Cloud AI 100 draws on the company's heritage in advanced signal processing and power efficiency. Qualcomm's fourth-generation on-device AI engine acts as a personal assistant for camera, voice, XR and gaming, delivering smarter, faster and more secure experiences; utilizing all cores, it packs three times the power of its predecessor for on-device AI.

With the open-source release of NVDLA's optimizing compiler on GitHub, system architects and software teams now have a starting point with the complete source for the world's first fully open software and hardware inference platform. Turing, the next generation of NVIDIA's GPU designs, incorporates a number of new features and is rolling out this year. Nvidia launched its second-generation DGX system in March; to build the 2-petaflops half-precision DGX-2, Nvidia first had to design and build a new NVLink 2.0 switch chip, named NVSwitch.
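The DGX-2's 2-petaflops half-precision figure is straightforward to reproduce: the system connects 16 Tesla V100 GPUs through NVSwitch, and each V100 has a published peak of roughly 125 teraflops of FP16 Tensor Core throughput. A back-of-envelope check (peak spec numbers, not measured performance):

```python
# Back-of-envelope check of the DGX-2's advertised FP16 throughput.
# Figures are NVIDIA's published peaks; real workloads achieve less.
GPUS_PER_DGX2 = 16              # V100s, fully connected via NVSwitch
FP16_TFLOPS_PER_V100 = 125      # peak Tensor Core throughput per GPU

total_pflops = GPUS_PER_DGX2 * FP16_TFLOPS_PER_V100 / 1000
print(f"{total_pflops:.0f} PFLOPS FP16 peak")   # -> 2 PFLOPS
```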