Controlling AI's Growing Energy Needs

Communications of the ACM

The huge amount of energy required to train artificial intelligence (AI) is becoming a concern. Training GPT-3, the large language model (LLM) behind ChatGPT, consumed almost 1,300 megawatt-hours of electricity, according to an estimate by researchers from Google and the University of California, Berkeley; that is roughly what 130 American homes use in a year. Furthermore, an analysis by OpenAI suggests that the computing power used to train AI models has been growing exponentially since 2012, doubling roughly every 3.4 months as the models become bigger and more sophisticated. Energy production capacity is not growing nearly as fast, however, and expanding it is likely to further contribute to global warming: electricity generation remains one of the biggest contributors to climate change, because coal, oil, and gas still dominate over cleaner energy sources. "At this rate, we are running into a brick wall in terms of the ability to scale up machine learning networks," said Menachem Stern, a theoretical physicist at the AMOLF research institute in the Netherlands.
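
The arithmetic behind those figures is easy to reproduce. In the Python sketch below, the per-home consumption is an assumption (the commonly cited ~10 MWh per year for an average U.S. household), and the growth projection simply compounds the reported 3.4-month doubling time; treat both as illustrative, not as the researchers' exact inputs.

```python
# Back-of-the-envelope check of the training-energy figures above.
# Assumption: an average U.S. home uses roughly 10 MWh of electricity per year.

TRAINING_ENERGY_MWH = 1_300   # estimated energy to train GPT-3
HOME_ANNUAL_MWH = 10          # assumed per-home annual consumption

homes = TRAINING_ENERGY_MWH / HOME_ANNUAL_MWH
print(f"Equivalent to ~{homes:.0f} homes for a year")      # ~130 homes

# Compounding the reported 3.4-month doubling time in training compute:
DOUBLING_MONTHS = 3.4
for years in (1, 2, 5):
    growth = 2 ** (12 * years / DOUBLING_MONTHS)
    print(f"After {years} year(s): ~{growth:,.0f}x the compute")
```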


To Build a Better AI Supercomputer, Let There Be Light

WIRED

Most artificial intelligence experts seem to agree that taking the next big leap in the field will depend at least partly on building supercomputers on a once unimaginable scale. At an event hosted by the venture capital firm Sequoia last month, the CEO of a startup called Lightmatter pitched a technology that might well enable this hyperscale computing rethink by letting chips talk directly to one another using light. Data today generally moves around inside computers--and in the case of training AI algorithms, between chips inside a data center--via electrical signals. Sometimes parts of those interconnections are converted to fiber-optic links for greater bandwidth, but converting signals back and forth between optical and electrical creates a communications bottleneck. Instead, Lightmatter wants to directly connect hundreds of thousands or even millions of GPUs--the silicon chips that are crucial to AI training--using optical links.
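
To see why those conversions matter, here is a toy latency model in Python. Every number in it (message size, bandwidth, per-conversion cost) is a hypothetical placeholder chosen for illustration, not a measurement of Lightmatter's or anyone else's hardware.

```python
# Toy latency model for moving one 64-byte message between two chips,
# illustrating the conversion bottleneck described above. All numbers
# are hypothetical placeholders, not measurements of real hardware.

MESSAGE_BITS = 64 * 8               # one 64-byte message

def transfer_ns(bandwidth_gbps: float, conversions: int,
                conversion_ns: float = 20.0) -> float:
    """Serialization time plus a fixed cost per electrical<->optical hop."""
    serialization = MESSAGE_BITS / bandwidth_gbps   # 1 Gb/s == 1 bit/ns
    return serialization + conversions * conversion_ns

# Path with two fiber segments, each needing an electrical->optical
# conversion at one end and an optical->electrical conversion at the other:
print(f"mixed link : {transfer_ns(100, conversions=4):.1f} ns")   # ~85 ns

# Direct chip-to-chip optical link: no intermediate conversions.
print(f"direct link: {transfer_ns(100, conversions=0):.1f} ns")   # ~5 ns
```

Even in this crude model, the fixed conversion cost dominates small transfers, which is the bottleneck a direct optical interconnect is meant to remove.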


Machine Learning Research Engineer at Lightmatter - Mountain View, California, United States

#artificialintelligence

The AI age is upon us, and high-performance computing is the underlying platform powering everything from large language models (LLMs) to image synthesis from text. However, with the demise of Moore's law and Dennard scaling, we are at an inflection point. At Lightmatter, we are leading the transition of computing from traditional electronic transistors to photonic technologies that can operate at mind-blowing efficiency and throughput. In this role, you will support all the activities of the ML team as it guides the development of a new class of computing infrastructure. This includes fine-tuning LLMs, enabling new models on custom architectures, evaluating the performance of models at scale, developing abstract models of the hardware to evaluate accuracy and throughput, and helping co-design novel hardware in a new paradigm of computing.


Computing With Light

#artificialintelligence

This article looks at a technology for AI inference processing that uses light rather than electrons, developed by Lightmatter and combined with traditional CMOS, including SRAM memory. It is based on an interview with Lightmatter CEO Nick Harris. The company sees this product being useful for data center inference and perhaps eventually in some computation-intensive industrial and consumer AI applications (such as autonomous vehicles). Widely cited forecasts project accelerating information and communications technology (ICT) energy consumption through the 2020s, with a 2018 Nature article estimating that, if current trends continue, ICT will consume more than 20% of electricity demand by 2030. At several industry events I have heard talks arguing that the amount of energy consumed will be one of the important limits on data center performance.
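
Forecasts of this kind typically come from compound-growth extrapolation. The sketch below shows the shape of such a calculation; the baseline share and growth rate are hypothetical placeholders chosen for illustration, not the figures from the Nature analysis.

```python
# Illustrative compound-growth extrapolation of the kind behind such
# forecasts. The baseline share and growth rate are hypothetical
# placeholders, not the 2018 Nature article's actual inputs.

share = 0.05    # assumed ICT share of electricity demand in 2020
growth = 0.15   # assumed annual growth of that share
year = 2020

while share < 0.20:
    share *= 1 + growth
    year += 1

print(f"Share passes 20% in {year} ({share:.1%})")
```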


Developers Turn To Analog For Neural Nets

#artificialintelligence

Machine-learning (ML) solutions are proliferating across a wide variety of industries, but the overwhelming majority of commercial implementations still rely on digital logic. With the exception of in-memory computing, analog solutions mostly have been restricted to universities and attempts at neuromorphic computing. However, that's starting to change. "Everyone's looking at the fact that deep neural networks are so energy-intensive when you implement them in digital, because you've got all these multiply-and-accumulates, and they're so deep, that they can suck up enormous amounts of power," said Elias Fallon, software engineering group director for the Custom IC & PCB Group at Cadence. Some suggest we're reaching a limit with digital. "Digital architectural approaches have hit the wall to solve the deep neural network MAC (multiply-accumulate) operations," said Sumit Vishwakarma, product manager at Siemens EDA. "As the size of the DNN increases, weight access operations result in huge energy consumption." The current analog approaches aren't attempting to define an entirely new ML paradigm. "The last 50 years have all been focused on digital processing, and for good reason," said Thomas Doyle, CEO and co-founder of Aspinity.
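
The scale of the problem both quotes describe is easy to estimate. The sketch below counts the multiply-accumulates in one fully connected layer and prices them with rough, widely cited 45-nm CMOS energy figures; treat every constant as an order-of-magnitude assumption rather than a measurement.

```python
# Order-of-magnitude energy estimate for one fully connected layer,
# illustrating why MACs and weight fetches dominate. Per-operation
# energies are rough, widely cited 45-nm CMOS figures; treat them as
# order-of-magnitude assumptions.

PJ_MULT_FP32 = 3.7       # 32-bit float multiply
PJ_ADD_FP32 = 0.9        # 32-bit float add
PJ_DRAM_READ_32B = 640   # 32-bit DRAM read

def layer_energy_uj(n_in: int, n_out: int) -> float:
    macs = n_in * n_out                    # one multiply-accumulate per weight
    compute_pj = macs * (PJ_MULT_FP32 + PJ_ADD_FP32)
    weights_pj = macs * PJ_DRAM_READ_32B   # worst case: every weight from DRAM
    return (compute_pj + weights_pj) / 1e6 # picojoules -> microjoules

# A single 4096x4096 layer, weights streamed from DRAM each time:
print(f"{layer_energy_uj(4096, 4096):,.0f} uJ per forward pass")
```

Even with these crude numbers, the DRAM term exceeds the arithmetic by roughly two orders of magnitude, which is exactly the weight-access cost Vishwakarma points to.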


This Chip for AI Works Using Light, Not Electrons

WIRED

As demand for artificial intelligence grows, so does hunger for the computer power needed to keep AI running. Lightmatter, a startup born at MIT, is betting that AI's voracious hunger will spawn demand for a fundamentally different kind of computer chip--one that uses light to perform key calculations. "Either we invent new kinds of computers to continue," says Lightmatter CEO Nick Harris, "or AI slows down." Conventional computer chips work by using transistors to control the flow of electrons through a semiconductor. By reducing information to a series of 1s and 0s, these chips can perform a wide array of logical operations, and power complex software.


Photonics Processor Aimed at AI Inference

#artificialintelligence

Silicon photonics is seeing greater innovation as requirements grow for faster, lower-power chip interconnects in traditionally power-hungry applications like AI inference. With that in mind, scientists at the Massachusetts Institute of Technology launched a startup in 2017 called Lightmatter Inc. to develop silicon photonic processors. Another goal was to leverage optical computing to "decouple" AI processing from Moore's law scaling, which, according to the company's founders, produces more heat than light. Lightmatter announced an AI photonic "test chip" during this week's Hot Chips conference, positioned as an AI inference accelerator that uses light to process and transport data. The 3D module incorporates a 12-nm and a 90-nm ASIC, the latter supporting photonic processing steps such as laser monitoring and light distribution.


Photonics startup Lightmatter details its AI optical accelerator chip

#artificialintelligence

Ahead of the Hot Chips conference this week, photonics chip startup Lightmatter revealed the first technical details about its upcoming test chip. Unlike conventional processors and graphics cards, the test chip uses light to send signals, promising orders of magnitude higher performance and efficiency. The technology underpinning the test chip -- photonic integrated circuits -- stems from a 2017 paper coauthored by Lightmatter CEO and MIT alumnus Nicholas Harris that described a novel way to perform machine learning workloads using optical interference. Chips like the test chip, which is on track for a fall 2021 release, require only a limited amount of energy because light produces less heat than electricity. They also benefit from reduced latency and are less susceptible to changes in temperature, electromagnetic fields, and noise.
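
The linear algebra behind that scheme can be sketched in a few lines of NumPy: a weight matrix is factored, via singular value decomposition, into two unitary transformations (realizable as lossless interferometer meshes) and a diagonal scaling. The sizes below are arbitrary, and this simulates only the math, not Lightmatter's actual hardware.

```python
import numpy as np

# Minimal numerical sketch of the idea in the 2017 paper: factor a
# weight matrix W as W = U @ diag(S) @ Vh. The unitary factors U and Vh
# correspond to lossless interferometer meshes, and the diagonal S to
# per-channel attenuation or gain. Sizes here are arbitrary.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # weight matrix of one small layer
x = rng.normal(size=4)        # input vector (optical amplitudes)

U, S, Vh = np.linalg.svd(W)

# Light propagates through the Vh mesh, the diagonal section, then U.
y_photonic = U @ (S * (Vh @ x))

# Matches an ordinary electronic matrix-vector product:
assert np.allclose(y_photonic, W @ x)
print(y_photonic)
```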


Lightelligence releases prototype of its optical AI accelerator chip

#artificialintelligence

Accelerator chips that use light rather than electrons to carry out computations promise to supercharge AI model training and inference. In theory, they could process algorithms at the speed of light -- dramatically faster than today's speediest logic-gate circuits -- but so far, light's unpredictability has foiled most attempts to emulate transistors optically. Boston-based Lightelligence, though, claims it's achieved a measure of success with its optical AI chip, which today debuts in prototype form. It says that latency is improved up to 10,000 times compared with traditional hardware, and it estimates power consumption at "orders of magnitude" lower. The technology underpinning it has its origins in a 2017 paper coauthored by CEO Yichen Shen.