What's Old Is New Again
What's old is new again. At least, it is if we are talking about analog computing. The moment you hear the phrase "analog computing," you might be forgiven for thinking we are talking about the hipsters of the technology world. The people who prefer vinyl over Spotify. The ones who want to bring back typewriters to replace word processors, or the folks who prize handwritten notes over those generated by ChatGPT.
Chip Design Shifts As Fundamental Laws Run Out Of Steam
Dennard scaling is gone, Amdahl's Law is reaching its limit, and Moore's Law is becoming difficult and expensive to follow, particularly as power and performance benefits diminish. And while none of that has reduced opportunities for much faster, lower-power chips, it has significantly shifted the dynamics of their design and manufacturing.

Rather than just choosing among process nodes and half nodes, companies developing chips -- traditional chip companies, automotive OEMs, fabless and non-fabless IDMs, and large systems companies -- are now wrestling with more options and more unique challenges as they seek optimal solutions for their specific applications. And they are all demanding more from an EDA ecosystem that is racing to keep up with these changes, including various types of advanced packaging, chiplets, and demand for integrated and customized hardware and software.

"While heterogeneous integration predates the end of Dennard scaling or the flattening of Moore's Law by several years, silicon designers and system architects are embracing this paradigm now to retain their pursuit of PPA goals -- without empirical law and its derivatives," said Saugat Sen, vice president of R&D at Cadence. "While there are many architectural and design challenges in this era, addressing thermal concerns rises to the top. Efficiency in design and implementation has been intricately linked to closed-loop integration with multi-physics analyses for a while. More-than-Moore has created a compelling case for the implementation-analyses microcosm to transcend the fabrics of system design, from silicon to package and even beyond, and more so in the systems companies that are at the bleeding edge of design innovation."
AI Accelerators -- Part II: Transistors and Pizza (or: Why Do We Need Accelerators)?
We have arrived at the key motivator of the entire series: a fundamental question often asked by venture capitalists being pitched a new startup, or by executives being pitched a new project: "Why now?" To answer that, we will take a crash course on the history of processors and the significant changes the industry has undergone in recent years. Simplistically speaking, the processor is the part of the computer system in charge of the actual computation. It receives user input data (represented as numbers) and generates new data per the user's request, i.e., as reflected by the set of arithmetic operations the user wishes to perform. The processor employs its arithmetic units to generate the computation result -- in other words, it runs the program. Processors were commoditized in personal computers in the 1980s.
A New Golden Age for Computer Architecture
We began our Turing Lecture on June 4, 2018,[11] with a review of computer architecture since the 1960s. In addition to that review, here we highlight current challenges and identify future opportunities, projecting another golden age for the field of computer architecture in the next decade, much like the 1980s, when we did the research that led to our award, delivering gains in cost, energy, and security, as well as performance.

"Those who cannot remember the past are condemned to repeat it." --George Santayana

Software talks to hardware through a vocabulary called an instruction set architecture (ISA). By the early 1960s, IBM had four incompatible lines of computers, each with its own ISA, software stack, I/O system, and market niche -- targeting small business, large business, scientific, and real time, respectively. IBM engineers, including ACM A.M. Turing Award laureate Fred Brooks, Jr., thought they could create a single ISA that would efficiently unify all four of these ISA bases. They needed a technical solution for how computers as inexpensive as those with 8-bit data paths and as fast as those with 64-bit data paths could share a single ISA.

The data paths are the "brawn" of the processor, in that they perform the arithmetic but are relatively easy to "widen" or "narrow." The greatest challenge for computer designers then and now is the "brains" of the processor -- the control hardware. Inspired by software programming, computing pioneer and Turing laureate Maurice Wilkes proposed how to simplify control. Control was specified as a two-dimensional array he called a "control store." Each column of the array corresponded to one control line, each row was a microinstruction, and writing microinstructions was called microprogramming.[39] A control store contains an ISA interpreter written using microinstructions, so execution of a conventional instruction takes several microinstructions. The control store was implemented through memory, which was much less costly than logic gates.
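The control-store idea above can be sketched in a few lines of Python. This is a hypothetical mini-ISA for illustration only -- the opcodes, control lines, and register names are invented, not System/360 microcode -- but it shows the structure Wilkes described: each row of the store is a microinstruction, each column a control line, and one ISA instruction executes as several microinstructions.

```python
# Control lines (the columns of the control store).
FETCH_A, FETCH_B, ALU_ADD, ALU_SUB, WRITE_BACK = range(5)

# Control store: each ISA opcode maps to its microprogram, a list of
# microinstructions; each microinstruction is a tuple of asserted lines.
CONTROL_STORE = {
    "ADD": [(FETCH_A,), (FETCH_B,), (ALU_ADD,), (WRITE_BACK,)],
    "SUB": [(FETCH_A,), (FETCH_B,), (ALU_SUB,), (WRITE_BACK,)],
}

def execute(opcode, regs, dst, src_a, src_b):
    """Interpret one ISA instruction by stepping through its microprogram."""
    a = b = result = None
    for micro in CONTROL_STORE[opcode]:   # one microinstruction per step
        if FETCH_A in micro:
            a = regs[src_a]
        if FETCH_B in micro:
            b = regs[src_b]
        if ALU_ADD in micro:
            result = a + b
        if ALU_SUB in micro:
            result = a - b
        if WRITE_BACK in micro:
            regs[dst] = result
    return regs

regs = execute("ADD", {"r0": 0, "r1": 5, "r2": 7}, "r0", "r1", "r2")
print(regs["r0"])  # the single ADD took four microinstruction steps
```

Because the interpreter lives in (cheap) memory rather than logic, a narrow machine and a wide machine can run different microprograms under the same ISA -- which is exactly the trick System/360 exploited.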
The table here lists four models of the new System/360 ISA IBM announced April 7, 1964. The data paths vary by a factor of 8, memory capacity by a factor of 16, clock rate by nearly 4, performance by 50, and cost by nearly 6.
GPUs to Run 1000 Times Faster by 2025 - Huang
The CEO being referred to is NVIDIA (NASDAQ:NVDA) co-founder Jensen Huang, who shared his insights on Moore's Law (and life after it) during the recently concluded Computex 2017 event held in Taipei. Moore's Law, named after Intel co-founder Gordon Moore, is based on his observation that, because transistors were shrinking so rapidly, the number that could fit per square inch on an integrated circuit had doubled every year since their invention. Moore predicted that this trend would continue into the future. And although the pace has slowed, the number of transistors per square inch did continue to increase, doubling not every year but roughly every 18 months. With this exponential growth, computers became twice as powerful with each doubling, benefiting not just consumers but device manufacturers as well. This went on for a while (as predicted).
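Taking the headline claim at face value, a quick back-of-the-envelope calculation shows how aggressive it is against the classic 18-month doubling. The 2017 baseline year is our assumption -- the article does not state Huang's reference point:

```python
# "1000x faster by 2025," claimed at Computex 2017 (baseline year assumed).
years = 2025 - 2017

# Annual growth factor implied by 1000x over that span.
implied_rate = 1000 ** (1 / years)

# Growth over the same span under the classic 18-month doubling.
moore_factor = 2 ** (years / 1.5)

print(f"implied annual growth: {implied_rate:.2f}x")           # ~2.37x/year
print(f"18-month doubling over {years} years: {moore_factor:.0f}x")  # ~40x
```

So the claim implies roughly 2.4x per year -- about 25 times more total gain over the period than an 18-month transistor-doubling cadence alone would deliver, which is why Huang framed it as life after Moore's Law.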
Exponential Laws of Computing Growth
In a forecasting exercise, Gordon Earle Moore, co-founder of Intel, plotted data on the number of components -- transistors, resistors, and capacitors -- in chips made from 1959 to 1965. He saw an approximate straight line on log paper (see Figure 1). Extrapolating the line, he speculated that the number of components would grow from 2^6 (64) in 1965 to 2^16 (65,536) in 1975, doubling every year. His 1965-1975 forecast came true. In 1975, with more data, he revised the estimate of the doubling period to two years.

In those days, doubling components also doubled chip speed, because the greater number of components could perform more powerful operations and smaller circuits allowed faster clock speeds. Later, Moore's Intel colleague David House claimed the doubling time for speed should be taken as 18 months because of increasing clock speed, whereas Moore maintained that the doubling time for components was 24 months. But clock speed stabilized around 2000 because faster speeds caused more heat dissipation than chips could withstand. Since then, faster speeds have been achieved with multi-core chips at the same clock frequency.

Moore's Law is one of the most durable technology forecasts ever made.[10,20,31,33] It is the emblem of the information age, the relentless march of the computer chip enabling a technical, economic, and social revolution never before experienced by humanity. The standard explanation for Moore's Law is that the law is not really a law at all, but only an empirical, self-fulfilling relationship driven by economic forces. This explanation is too weak, however, to explain why the law has worked for more than 50 years and why exponential growth works not only at the chip level but also at the system and market levels. Consider two prominent cases of systems evolution.
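Moore's extrapolation is easy to reproduce. A minimal sketch -- the function name and parameters are ours, for illustration:

```python
def components(year, base_year=1965, base=2**6, doubling_years=1):
    """Components per chip under exponential doubling from a baseline."""
    return base * 2 ** ((year - base_year) / doubling_years)

# Moore's 1965 forecast: 2^6 (64) components doubling every year.
print(components(1965))  # 64.0
print(components(1975))  # 65536.0, i.e., 2^16

# Moore's 1975 revision: doubling every two years from the 1975 level.
print(components(1985, base_year=1975, base=2**16, doubling_years=2))
```

Ten annual doublings turn 2^6 into exactly 2^16, which is the straight line Moore saw on log paper; the 1975 revision simply halves the slope of that line.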