Artificial intelligence simulates microprocessor performance in real time

#artificialintelligence

This approach is detailed in a paper presented at MICRO-54: the 54th IEEE/ACM International Symposium on Microarchitecture. MICRO-54 is one of the top conferences in the field of computer architecture, and the paper was selected as the conference's best publication. "This is a problem that needs to be studied in depth and has traditionally relied on additional circuits to solve it," said Zhiyao Xie, lead author of the paper and a doctoral candidate in the lab of Yiran Chen, a professor of electrical and computer engineering at Duke. "But our approach runs directly on microprocessors in the background, which opens up a lot of new opportunities. I think that's why people are excited about it." In modern computer processors, computation cycles occur on the order of 3 trillion times per second. Tracking the power consumed by these extremely fast transitions is important to maintaining the performance and efficiency of the entire chip.


Research Bits: April 19

#artificialintelligence

Processor power prediction Researchers from Duke University, Arm Research, and Texas A&M University developed an AI method for predicting the power consumption of a processor, returning results more than a trillion times per second while consuming very little power itself. "This is an intensively studied problem that has traditionally relied on extra circuitry to address," said Zhiyao Xie, a PhD candidate at Duke. "But our approach runs directly on the microprocessor in the background, which opens many new opportunities. I think that's why people are excited about it." The approach, called APOLLO, uses an AI algorithm to identify and select just 100 of a processor's millions of signals that correlate most closely with its power consumption. It then builds a power-consumption model from those 100 signals and monitors them to predict the entire chip's power behavior in real time.
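The core idea — pick a tiny subset of signals that best track power, then fit a lightweight model on only those — can be illustrated with a toy sketch. This is not APOLLO's actual algorithm (the paper uses a more sophisticated pruning method on real RTL signals); here synthetic binary "signal toggles" stand in for chip activity, signals are ranked by simple correlation with a synthetic power trace, and a least-squares linear model is fit on the top 100:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-cycle signal activity: 2,000 cycles x 1,000
# binary signal toggles (a real chip has millions of signals).
n_cycles, n_signals = 2_000, 1_000
signals = rng.integers(0, 2, size=(n_cycles, n_signals)).astype(float)

# Synthetic "true" power: only 100 of the signals actually contribute.
true_idx = rng.choice(n_signals, size=100, replace=False)
true_weights = rng.uniform(1.0, 2.0, size=100)
power = signals[:, true_idx] @ true_weights + rng.normal(0, 0.1, n_cycles)

# Step 1: rank every signal by absolute correlation with measured power
# and keep the top 100 (the proxy count reported for APOLLO).
corr = np.array(
    [np.corrcoef(signals[:, j], power)[0, 1] for j in range(n_signals)]
)
top100 = np.argsort(-np.abs(corr))[:100]

# Step 2: fit a lightweight linear power model on just those signals.
X = signals[:, top100]
coef, *_ = np.linalg.lstsq(X, power, rcond=None)
pred = X @ coef

# How much of the power variance the 100-signal model explains.
r2 = 1 - np.sum((power - pred) ** 2) / np.sum((power - power.mean()) ** 2)
print(f"selected {len(top100)} signals, R^2 = {r2:.3f}")
```

Even this naive correlation filter recovers most of the explanatory signals, which is why monitoring a small, well-chosen subset can approximate whole-chip power cheaply enough to run every cycle.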


Architecture Exploration of AI/ML Applications and Processors

#artificialintelligence

Artificial Intelligence (AI) applications must take into consideration compute, storage, memory, pipelines, communication interfaces, software, and control. Further, AI application processing can be distributed across multiple cores within a processor, multiple processor boards on a PCIe backbone, computers distributed across an Ethernet network, a high-performance computer, or systems across a data center. In addition, AI processors face massive memory-size requirements, access-time limitations, distribution across analog and digital domains, and hardware-software partitioning. Architecture exploration of AI applications is complex and involves multiple studies. To start with, we can target a single problem such as memory access, or we can look at the full processor or system.
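A minimal sketch of what such an exploration loop can look like, using an illustrative roofline-style cost model: the design space is a grid of core counts and memory bandwidths, and each configuration's runtime is estimated as the slower of its compute time and its memory-transfer time. All numbers here are hypothetical placeholders, not measurements from any real processor:

```python
from itertools import product

# Hypothetical workload totals for one inference batch (illustrative only).
FLOPS_NEEDED = 2e12     # total floating-point operations
BYTES_MOVED = 4e10      # total DRAM traffic in bytes

cores_options = [8, 16, 32, 64]       # cores per processor
bw_options = [50e9, 100e9, 200e9]     # memory bandwidth, bytes/s
flops_per_core = 1e11                 # assumed peak per-core throughput

results = []
for cores, bw in product(cores_options, bw_options):
    compute_time = FLOPS_NEEDED / (cores * flops_per_core)
    memory_time = BYTES_MOVED / bw
    # The workload is bound by whichever resource is slower.
    runtime = max(compute_time, memory_time)
    results.append(((cores, bw), runtime))

best_config, best_time = min(results, key=lambda r: r[1])
print(f"best config: {best_config[0]} cores, {best_config[1] / 1e9:.0f} GB/s "
      f"-> {best_time * 1e3:.1f} ms")
```

Real exploration tools replace this two-line cost model with cycle-accurate or analytical simulators, but the structure — enumerate configurations, score each one, compare — is the same whether the study targets memory access alone or a full system.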


NXP And Kalray Team Up On High-Performance Automated Driving Compute Platform

#artificialintelligence

When the discussion turns to the computing platforms expected to power the coming wave of automated vehicles (AVs), there are basically two companies that seem to get all of the attention: Nvidia, and Intel with its Mobileye subsidiary. Most of the companies developing AVs that have acknowledged their choice of chip providers are using hardware from one of those two. But if you open up the electronic control units in today's cars and trucks, there's a good chance you will find chips from Eindhoven, Netherlands-based NXP, which is fighting to keep a place in future vehicles through a new partnership with Kalray. French chip design firm Kalray has been around for about a decade and specializes in processors that can run many calculations in parallel. Kalray's Massively Parallel Processor Array (MPPA) chips are particularly adept at handling the type of machine learning and neural network algorithms that are at the heart of most automated driving perception systems.


Brain-inspired computing boosted by new concept of completeness

#artificialintelligence

The next generation of high-performance, low-power computer systems might be inspired by the brain. However, as designers move away from conventional computer technology towards brain-inspired (neuromorphic) systems, they must also move away from the established formal hierarchy that underpins conventional machines -- that is, the abstract framework that broadly defines how software is processed by a digital computer and converted into operations that run on the machine's hardware. This hierarchy has helped enable the rapid growth in computer performance. Writing in Nature, Zhang et al.1 define a new hierarchy that formalizes the requirements of algorithms and their implementation on a range of neuromorphic systems, thereby laying the foundations for a structured approach to research in which algorithms and hardware for brain-inspired computers can be designed separately. The performance of conventional digital computers has improved over the past 50 years in accordance with Moore's law, which states that technical advances will enable integrated circuits (microchips) to double their resources approximately every 18–24 months.