The Download: Quantum computing for health, and why the world doesn't recycle more nuclear waste

MIT Technology Review

The Download: Quantum computing for health, and why the world doesn't recycle more nuclear waste Plus: The FBI has admitted it's buying Americans' location data. In a laboratory on the outskirts of Oxford, a quantum computer built from atoms and light awaits its moment. The device is small but powerful--and also very valuable. Infleqtion, the company that owns it, is hoping its abilities will win $5 million at a competition next week. The prize will go to the quantum computer that can solve real health care problems that conventional "classical" computers are unable to solve. But there can be only one big winner--if there is a winner at all.


Can quantum computers now solve health care problems? We'll soon find out.

MIT Technology Review

I'm standing in front of a quantum computer built out of atoms and light at the UK's National Quantum Computing Centre on the outskirts of Oxford. On a laboratory table, a complex matrix of mirrors and lenses surrounds a Rubik's Cube-size cell where 100 cesium atoms are suspended in grid formation by a carefully manipulated laser beam. The cesium atom setup is so compact that I could pick it up, carry it out of the lab, and put it on the backseat of my car to take home. I'd be unlikely to get very far, though.


Pair win Turing Award for computer encryption breakthrough

BBC News

A US physicist and a Canadian computer scientist have won this year's Turing Award for their invention of a form of seemingly unbreakable encryption. Charles H Bennett and Gilles Brassard's work, which dates back to 1984, is known as quantum cryptography and has redefined secure communication and computing, the award's body said. Scientists believe their work will be central to electronic communications in a world that depends heavily on data-sharing and has spent years racing to develop more powerful quantum computers. The Turing Award, named after the mathematician and code-breaker Alan Turing, is known as the Nobel Prize of computing. It comes with a $1m (£800,000) prize.


Short-Dot: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products

Neural Information Processing Systems

Faced with the saturation of Moore's law and the increasing size and dimension of data, system designers have increasingly resorted to parallel and distributed computing to reduce the computation time of machine-learning algorithms. However, distributed computing is often bottlenecked by a small fraction of slow processors, called stragglers, that reduce the speed of computation because the fusion node has to wait for all processors to complete their processing. To combat the effect of stragglers, recent literature proposes introducing redundancy in computations across processors, e.g., using repetition-based strategies or erasure codes. The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers. In this paper, we propose a novel technique, which we call Short-Dot, to introduce redundant computations in a coding-theory-inspired fashion for computing linear transforms of long vectors. Instead of computing long dot products as required in the original linear transform, we construct a larger number of redundant and short dot products that can be computed more efficiently at individual processors. Further, only a subset of these short dot products is required at the fusion node to finish the computation successfully. We demonstrate through probabilistic analysis as well as experiments on computing clusters that Short-Dot offers significant speed-up compared to existing techniques. We also derive trade-offs between the length of the dot products and the resilience to stragglers (number of processors required to finish) for any such strategy, and compare it to that achieved by our strategy.
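The core idea of exploiting coded redundancy to ignore stragglers can be illustrated with a much simpler construction than Short-Dot itself: below is a minimal sketch of a (3, 2) coded matrix-vector multiply, where a matrix is split into two row blocks, a third "parity" block is formed as their sum, and the fusion node can finish after hearing back from any two of the three workers. The splitting, worker names, and the single-parity code are illustrative assumptions, not the paper's construction.

```python
# Sketch of coded distributed matrix-vector multiplication: a simple
# (3, 2) single-parity code, not the paper's Short-Dot construction.

def matvec(A, x):
    """One dot product per row of A."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def parity(A1, A2):
    """Coded block: elementwise sum of the two row blocks."""
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]

# Full transform y = A x, with A split into two row blocks.
A1 = [[1, 2], [3, 4]]
A2 = [[5, 6], [7, 8]]
x = [1, 1]

# Three workers: two systematic, one redundant.
w1 = matvec(A1, x)              # [3, 7]
w2 = matvec(A2, x)              # [11, 15]
w3 = matvec(parity(A1, A2), x)  # elementwise w1 + w2

# Suppose worker 2 straggles: the fusion node recovers its block
# from the two outputs it did receive, without waiting.
w2_recovered = [c - a for c, a in zip(w3, w1)]
assert w2_recovered == w2
```

Short-Dot generalizes this by trading off the *length* of each worker's dot product against the number of workers the fusion node must wait for; the single-parity code above sits at one extreme of that trade-off.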


The Download: glass chips and "AI-free" logos

MIT Technology Review

Plus: Elizabeth Warren wants answers on xAI's access to military data. Human-made glass is thousands of years old. But it's now poised to find its way into the AI chips used in the world's newest and largest data centers. This year, a South Korean company called Absolics will start producing special glass panels that make next-generation computing hardware more powerful and efficient. Other companies, including Intel, are also pushing forward in this area. If all goes well, the technology could reduce the energy demands of chips in AI data centers--and even consumer laptops and mobile devices.


Nvidia's Deal With Meta Signals a New Era in Computing Power

WIRED

The days of tech giants buying up discrete chips are over. AI companies now need GPUs, CPUs, and everything in between. Ask anyone what Nvidia makes, and they're likely to first say "GPUs." For decades, the chipmaker has been defined by advanced parallel computing, and the emergence of generative AI and the resulting surge in demand for GPUs has been a boon for the company. But Nvidia's recent moves signal that it's looking to lock in more customers at the less compute-intensive end of the AI market--customers who don't necessarily need the beefiest, most powerful GPUs to train AI models, but instead are looking for the most efficient ways to run agentic AI software.


Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework

Neural Information Processing Systems

Coded computing has emerged as a promising framework for tackling significant challenges in large-scale distributed computing, including the presence of slow, faulty, or compromised servers. In this approach, each worker node processes a combination of the data, rather than the raw data itself. The final result is then decoded from the collective outputs of the worker nodes. However, there is a significant gap between current coded computing approaches and the broader landscape of general distributed computing, particularly when it comes to machine learning workloads. To bridge this gap, we propose a novel foundation for coded computing that integrates the principles of learning theory, developing a framework that adapts seamlessly to machine learning applications. In this framework, the objective is to find the encoder and decoder functions that minimize the loss function, defined as the mean squared error between the estimated and true values. To facilitate the search for the optimal encoding and decoding functions, we show that the loss function can be upper-bounded by the sum of two terms: the generalization error of the decoding function and the training error of the encoding function. Focusing on the second-order Sobolev space, we then derive the optimal encoder and decoder.
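The abstract's setup, in which workers evaluate a (possibly nonlinear) function on encoded combinations of the data and a decoder estimates the desired values from surviving outputs, can be sketched as follows. This is a deliberately simplified illustration under stated assumptions: the worker function `f`, the grid-based encoder, and the linear-interpolation decoder are all stand-ins, not the paper's Sobolev-optimal encoder and decoder.

```python
# Illustrative sketch of learning-style coded computing: encode K
# query points onto a shared grid of N > K evaluation points, let each
# worker compute f at one grid point, then decode the wanted values by
# interpolating surviving outputs. Linear interpolation stands in for
# the paper's optimal decoder; f is an arbitrary assumed workload.

def f(z):
    return z * z  # the function each worker applies (assumption)

data = [0.3, 0.7]                   # K = 2 points we actually want
grid = [0.0, 0.25, 0.5, 0.75, 1.0]  # encoder's grid: N = 5 workers

outputs = {u: f(u) for u in grid}   # each worker returns one f(u)

del outputs[0.5]                    # one worker straggles; drop it

def decode(x, outs):
    """Estimate f(x) by interpolating the two nearest survivors."""
    pts = sorted(outs)
    lo = max(u for u in pts if u <= x)
    hi = min(u for u in pts if u >= x)
    if lo == hi:
        return outs[lo]
    t = (x - lo) / (hi - lo)
    return (1 - t) * outs[lo] + t * outs[hi]

est = [decode(x, outputs) for x in data]
# est approximates [f(0.3), f(0.7)] despite the missing worker; the
# decoding error here is the quantity the paper's MSE objective and
# generalization/training-error bound are designed to control.
```

The key contrast with exact schemes like erasure codes is that the decoder here is an *estimator*, so its quality is naturally measured by mean squared error, which is what motivates the learning-theoretic treatment.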