basicmi/AI-Chip

#artificialintelligence

At Hot Chips 2019, Intel revealed new details of upcoming high-performance artificial intelligence (AI) accelerators: the Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory, and chiplet technology for optical I/O.

Myriad X is the first VPU to feature the Neural Compute Engine, a dedicated hardware accelerator for running on-device deep neural network applications. Interfacing directly with other key components via the intelligent memory fabric, the Neural Compute Engine delivers industry-leading performance per watt without the data-flow bottlenecks common to other architectures.

Qualcomm Technologies, Inc., a subsidiary of Qualcomm Incorporated (NASDAQ: QCOM), announced that it is bringing the company's artificial intelligence (AI) expertise to the cloud with the Qualcomm Cloud AI 100. Built from the ground up to meet the explosive demand for AI inference processing in the cloud, the Qualcomm Cloud AI 100 draws on the company's heritage in advanced signal processing and power efficiency. Qualcomm's 4th-generation on-device AI engine is pitched as the ultimate personal assistant for camera, voice, XR, and gaming, delivering smarter, faster, and more secure experiences; utilizing all cores, it packs three times the power of its predecessor for on-device AI capabilities.

With the open-source release of NVDLA's optimizing compiler on GitHub, system architects and software teams now have a starting point with the complete source for the world's first fully open software and hardware inference platform. Turing, the next generation of NVIDIA's GPU designs, incorporates a number of new features and is rolling out this year. Nvidia launched its second-generation DGX system in March. To build the 2-petaflops half-precision DGX-2, Nvidia first had to design and build a new NVLink 2.0 switch chip, named NVSwitch.
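The "2 petaflops half-precision" figure above refers to FP16 (16-bit floating point) throughput. As an illustrative sketch only (not Nvidia's implementation), the same datatype is available in NumPy as `float16`, storing each value in 2 bytes instead of float32's 4:

```python
import numpy as np

# Illustrative sketch: FP16 ("half precision") halves storage per element
# relative to FP32, which is what makes the DGX-2's FP16 throughput figure
# larger than its FP32 figure.
a = np.ones((4, 4), dtype=np.float16)
b = np.ones((4, 4), dtype=np.float16)

c = a @ b  # 4x4 matmul; every element of c is 4.0

print(c.dtype)          # float16
print(c.itemsize)       # 2 bytes per element, vs. 4 for float32
print(float(c[0, 0]))   # 4.0
```

Reduced precision trades numeric range and accuracy for throughput and memory bandwidth, which is why accelerators commonly quote separate FP16 and FP32 ratings.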


Artificial Intelligence Is Driving A Silicon Renaissance

#artificialintelligence

Bay Area startup Cerebras Systems recently unveiled the largest computer chip in history, ... purpose-built for AI. The semiconductor is the foundational technology of the digital age. It gave Silicon Valley its name. It sits at the heart of the computing revolution that has transformed every facet of society over the past half-century. The pace of improvement in computing capabilities has been breathtaking and relentless since Intel introduced the world's first microprocessor in 1971.


Intel Talks at Hot Chips gear up for "AI Everywhere" - insideHPC

#artificialintelligence

Naveen Rao is vice president and general manager of the Artificial Intelligence Products Group at Intel Corporation. Today at Hot Chips 2019, Intel revealed new details of upcoming high-performance AI accelerators: Intel Nervana neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel Optane DC persistent memory and chiplet technology for optical I/O. "To get to a future state of 'AI everywhere,' we'll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it's collected when it makes sense and making smarter use of their upstream resources," said Naveen Rao, Intel vice president and GM, Artificial Intelligence Products Group. "Data centers and the cloud need to have access to performant and scalable general-purpose computing and specialized acceleration for complex AI applications."


Back to the Edge: AI Will Force Distributed Intelligence Everywhere

#artificialintelligence

Other major firms are following suit. Microsoft has announced dedicated silicon hardware to accelerate deep learning in its Azure cloud. And in July, the firm also revealed that its augmented reality headset, the HoloLens, will have a customized chip in it to optimize machine learning applications. Apple has a long track record of designing its own silicon for specialist requirements. Earlier this year, Apple ended a relationship with Imagination Technologies, a firm that had been providing designs for GPUs in iPhones, in favor of its own GPU designs.