computing
Nvidia's Deal With Meta Signals a New Era in Computing Power
The days of tech giants buying up discrete chips are over. AI companies now need GPUs, CPUs, and everything in between.

Ask anyone what Nvidia makes, and they're likely to first say "GPUs." For decades, the chipmaker has been defined by advanced parallel computing, and the emergence of generative AI and the resulting surge in demand for GPUs have been a boon for the company. But Nvidia's recent moves signal that it's looking to lock in more customers at the less compute-intensive end of the AI market--customers who don't necessarily need the beefiest, most powerful GPUs to train AI models, but instead are looking for the most efficient ways to run agentic AI software.
How to finally get a grasp on quantum computing
If your New Year's resolution is to understand quantum computing this year, take a cue from a 9-year-old podcaster talking to some of the biggest minds in the field, says quantum columnist Karmela Padavic-Callaghan.

Quantum computing seems to pop up in the news pretty often these days. You've probably seen quantum chips gracing your feeds and their odd, steampunk-ish cooling systems in the pages of magazines and newspapers. Politicians and business leaders are peppering their announcements with the word "quantum" more frequently, too. If you're feeling a little confused about it all, this is a good year to resolve to finally figure out what quantum computing is all about. It's an ambitious goal, and the timing certainly makes sense.
Commodore 64 Ultimate Review: An Astonishing Remake
The reborn Commodore 64 is an astonishing remake--but daunting if you weren't there the first time around. Its "digital detox" approach is compelling.

It's hard to overstate just how seismic an impact the Commodore 64 had on home computing. Launched in 1982, the 8-bit machine--iconic in its beige plastic shell with integrated keyboard--went on to become the best-selling personal computer of all time. Despite this success, manufacturer Commodore International folded in 1994, with rights to the name floating around for years.
Coded Computing for Resilient Distributed Computing: A Learning-Theoretic Framework
Coded computing has emerged as a promising framework for tackling significant challenges in large-scale distributed computing, including the presence of slow, faulty, or compromised servers. In this approach, each worker node processes a combination of the data, rather than the raw data itself. The final result is then decoded from the collective outputs of the worker nodes. However, there is a significant gap between current coded computing approaches and the broader landscape of general distributed computing, particularly when it comes to machine learning workloads. To bridge this gap, we propose a novel foundation for coded computing, integrating the principles of learning theory, and developing a framework that adapts seamlessly to machine learning applications.
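To make the mechanism concrete, here is a minimal sketch of the classic setting this line of work builds on: coded matrix-vector multiplication. This is an illustration of the general idea under our own naming, not the paper's proposed framework. The data matrix is split into k blocks, encoded into n > k coded blocks with a Vandermonde (polynomial) code, and the product is decodable from any k of the n worker outputs, so up to n - k slow or faulty workers can simply be ignored.

    import numpy as np

    # MDS-coded matrix-vector multiplication: split A into k blocks, send
    # n > k coded combinations to workers, and decode A @ x from whichever
    # k partial results arrive first.

    def encode_blocks(A, n, k, evals):
        blocks = np.split(A, k, axis=0)                  # k raw data blocks
        # coded block j is the polynomial evaluation sum_i evals[j]**i * B_i
        return [sum(evals[j] ** i * B for i, B in enumerate(blocks))
                for j in range(n)]

    def decode(results, idx, evals, k):
        # results: the k partial products that arrived, from workers in idx
        V = np.vander([evals[j] for j in idx], k, increasing=True)
        block_products = np.linalg.solve(V, np.stack(results))
        return np.concatenate(list(block_products))

    n, k = 5, 3
    A, x = np.random.randn(6, 4), np.random.randn(4)
    evals = np.arange(1.0, n + 1)
    partial = [C @ x for C in encode_blocks(A, n, k, evals)]  # one per worker
    survivors = [0, 2, 4]                            # workers 1 and 3 straggle
    y = decode([partial[j] for j in survivors], survivors, evals, k)
    assert np.allclose(y, A @ x)

One reading of the gap the abstract describes: exact decoding of this kind leans on workers computing linear or polynomial functions of the data, which general machine learning workloads do not satisfy; the learning-theoretic foundation is aimed at relaxing that requirement.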
LibAMM: Empirical Insights into Approximate Computing for Accelerating Matrix Multiplication
Matrix multiplication (MM) is pivotal in fields from deep learning to scientific computing, driving the quest for improved computational efficiency. Accelerating MM encompasses strategies like complexity reduction, parallel and distributed computing, hardware acceleration, and approximate computing techniques, namely approximate matrix multiplication (AMM) algorithms. Amidst growing concerns over the resource demands of large language models (LLMs), AMM has garnered renewed focus. However, understanding of the nuances that govern AMM's effectiveness remains incomplete. This study delves into AMM by examining algorithmic strategies, operational specifics, dataset characteristics, and their application in real-world tasks.
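As a concrete instance of the AMM family under study, the sketch below implements one classic technique, norm-proportional column/row sampling (Monte Carlo matrix multiplication). It illustrates the idea only and is not LibAMM's API; all names are ours.

    import numpy as np

    # Approximate A @ B by sampling c of the shared-dimension indices with
    # probability proportional to ||A[:, i]|| * ||B[i, :]||, rescaling so
    # the estimator is unbiased.

    def sampled_matmul(A, B, c, seed=0):
        rng = np.random.default_rng(seed)
        norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
        p = norms / norms.sum()
        idx = rng.choice(A.shape[1], size=c, p=p)
        scale = 1.0 / (c * p[idx])                   # unbiasedness correction
        return (A[:, idx] * scale) @ B[idx, :]

    A, B = np.random.randn(100, 500), np.random.randn(500, 80)
    approx = sampled_matmul(A, B, c=200)
    err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
    print(f"relative Frobenius error with 200 of 500 samples: {err:.3f}")

The cost-accuracy trade-off such a study has to characterize is visible directly in c: fewer sampled indices mean proportionally fewer multiply-accumulates but higher variance in the estimate.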
Google asks UK experts to find uses for its powerful quantum tech
Google has announced plans to team up with the UK to invite researchers to come up with uses for the tech giant's state-of-the-art quantum chip Willow. It is one of several firms competing to develop a powerful quantum computer - which is seen as an exciting new frontier in the future of computing. Researchers hope they will be able to crack problems in fields such as chemistry and medicine which are impossible for current computers to solve. Professor Paul Stevenson of the University of Surrey - who had no involvement with the agreement - told the BBC it was great news for UK researchers. The collaboration between Google and the UK's national lab for quantum computing means more researchers will get access to the technology.
Mitigating Bias in Graph Hyperdimensional Computing
Yezi Liu, William Youngwoo Chung, Yang Ni, Hanning Chen, Mohsen Imani
Graph hyperdimensional computing (HDC) has emerged as a promising paradigm for cognitive tasks, emulating brain-like computation with high-dimensional vectors known as hypervectors. While HDC offers robustness and efficiency on graph-structured data, its fairness implications remain largely unexplored. In this paper, we study fairness in graph HDC, where biases in data representation and decision rules can lead to unequal treatment of different groups. We show how hypervector encoding and similarity-based classification can propagate or even amplify such biases, and we propose a fairness-aware training framework, FairGHDC, to mitigate them. FairGHDC introduces a bias correction term, derived from a gap-based demographic-parity regularizer, and converts it into a scalar fairness factor that scales the update of the class hypervector for the ground-truth label. This enables debiasing directly in the hypervector space without modifying the graph encoder or requiring backpropagation. Experimental results on six benchmark datasets demonstrate that FairGHDC substantially reduces demographic-parity and equal-opportunity gaps while maintaining accuracy comparable to standard GNNs and fairness-aware GNNs. At the same time, FairGHDC preserves the computational advantages of HDC, achieving up to about one order of magnitude ($\approx 10\times$) speedup in training time on GPU compared to GNN and fairness-aware GNN baselines.
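A rough rendering of the update rule as the abstract describes it, in our own code rather than the authors': prediction is by similarity against class hypervectors, and the only fairness-specific change is a scalar factor, derived from a demographic-parity gap, that scales the prototype update. The specific form of the factor below is our guess; the paper's exact regularizer and graph encoder are not reproduced.

    import numpy as np

    # Hypothetical fairness-scaled HDC update. Class prototypes live in a
    # D-dimensional hypervector space; classification is by cosine
    # similarity; debiasing only rescales the prototype update, so no
    # backpropagation is needed.

    D = 10_000

    def dp_gap(preds, groups):
        # demographic-parity gap: |P(yhat=1 | g=0) - P(yhat=1 | g=1)|
        preds, groups = np.asarray(preds), np.asarray(groups)
        return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

    def predict(class_hvs, h):
        sims = class_hvs @ h / (np.linalg.norm(class_hvs, axis=1)
                                * np.linalg.norm(h) + 1e-12)
        return int(np.argmax(sims))

    def fair_update(class_hvs, h, y, gap, lr=0.05, lam=2.0):
        y_hat = predict(class_hvs, h)
        factor = 1.0 / (1.0 + lam * gap)         # scalar fairness factor
        if y_hat != y:                           # perceptron-style HDC update
            class_hvs[y] += lr * factor * h      # pull true class toward h
            class_hvs[y_hat] -= lr * factor * h  # push wrong class away
        return class_hvs

Because the whole update is a scaled vector addition, the training loop stays backpropagation-free, which is consistent with the training-time advantage over GNN baselines that the abstract reports.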
The Native Spiking Microarchitecture: From Iontronic Primitives to Bit-Exact FP8 Arithmetic
The 2025 Nobel Prize in Chemistry for Metal-Organic Frameworks (MOFs) and recent breakthroughs by Huanting Wang's team at Monash University establish angstrom-scale channels as promising post-silicon substrates with native integrate-and-fire (IF) dynamics. However, utilizing these stochastic, analog materials for deterministic, bit-exact AI workloads (e.g., FP8) remains a paradox. Existing neuromorphic methods often settle for approximation, failing Transformer precision standards. To traverse the gap "from stochastic ions to deterministic floats," we propose a Native Spiking Microarchitecture. Treating noisy neurons as logic primitives, we introduce a Spatial Combinational Pipeline and a Sticky-Extra Correction mechanism. Validation across all 16,129 FP8 pairs confirms 100% bit-exact alignment with PyTorch. Crucially, our architecture reduces Linear layer latency to $O(\log N)$, yielding a $17\times$ speedup. Physical simulations further demonstrate robustness against extreme membrane leakage ($\beta \approx 0.01$), effectively immunizing the system against the stochastic nature of the hardware.
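The exhaustive-validation claim is tractable because FP8 has so few representable values that every operand pair can be enumerated and compared bit-for-bit against a PyTorch reference. The sketch below shows the shape of such a harness; dut_mul is a placeholder for the datapath under test, and the paper's specific 16,129-pair operand set is not reproduced here.

    import torch

    # Enumerate every finite float8_e4m3fn value, multiply all pairs
    # exactly in float32, round back to FP8, and compare bit patterns
    # against the reference path.

    codes = torch.arange(256, dtype=torch.uint8)
    vals = codes.view(torch.float8_e4m3fn).float()
    finite = vals[torch.isfinite(vals)]          # drop the NaN encodings

    def ref_mul(a, b):
        # reference: exact float32 product, then round-to-nearest into FP8
        return (a * b).to(torch.float8_e4m3fn)

    dut_mul = ref_mul                            # swap in real DUT outputs here

    A, B = torch.meshgrid(finite, finite, indexing="ij")
    match = dut_mul(A, B).view(torch.uint8) == ref_mul(A, B).view(torch.uint8)
    print(f"bit-exact pairs: {match.sum().item()} / {match.numel()}")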