Could AI Data Centers Be Moved to Outer Space?

WIRED

Massive data centers for generative AI are bad for the Earth. Data centers are being built at a frantic pace all over the world, driven by the AI boom. These facilities consume staggering amounts of electricity. By 2028, AI servers alone may use as much energy as 22 percent of US households.


Alliwava GH8 review: Ryzen 9 muscle in a shockingly small PC

PCWorld

When you purchase through links in our articles, we may earn a small commission. The Alliwava GH8 is a good example of how much performance is possible today in the smallest of spaces. With the Ryzen 9 8945HS, it not only offers powerful CPU performance but also added value for AI applications thanks to the improved NPU. In doing so, it leaves many competitors behind in connectivity and cooling management.


I tested Panther Lake. You're going to want this

PCWorld

PCWorld tested Intel's new Panther Lake Core Ultra X9 388H processor, which delivers gaming-laptop performance through integrated graphics comparable to Nvidia's GeForce RTX 4050. The chip achieves impressive battery life of up to 27 hours while maintaining strong performance, with AI frame generation boosting frame rates from 52 to 92 fps in titles like Cyberpunk 2077. Panther Lake faces competition from AMD's Ryzen AI Max and Qualcomm's Snapdragon X2, but Intel's early 2026 release provides a significant market advantage.


US approves sale of Nvidia's advanced AI chips to China

BBC News

The US government has given chip giant Nvidia the green light to sell its advanced artificial intelligence (AI) processors in China, the Department of Commerce said on Tuesday. The H200, Nvidia's second-most-advanced semiconductor, had been restricted by Washington over concerns that it would give China's technology industry and military an edge over the US. The Commerce Department said the chips can be shipped to China provided that there is sufficient supply of the processors in the US. President Donald Trump said last month that he would allow the chip sales to approved customers in China and collect a 25% fee. Nvidia's spokesperson told the BBC that the company welcomed the move, saying it will benefit manufacturing and jobs in the US.


Shallow-circuit Supervised Learning on a Quantum Processor

Candelori, Luca, Majumder, Swarnadeep, Mezzacapo, Antonio, Moreno, Javier Robledo, Musaelian, Kharen, Nagarajan, Santhanam, Pinnamaneni, Sunil, Sharma, Kunal, Villani, Dario

arXiv.org Machine Learning

Quantum computing has long promised transformative advances in data analysis, yet practical quantum machine learning has remained elusive due to fundamental obstacles such as a steep quantum cost for the loading of classical data and poor trainability of many quantum machine learning algorithms designed for near-term quantum hardware. In this work, we show that one can overcome these obstacles by using a linear Hamiltonian-based machine learning method which provides a compact quantum representation of classical data via ground state problems for k-local Hamiltonians. We use the recent sample-based Krylov quantum diagonalization method to compute low-energy states of the data Hamiltonians, whose parameters are trained to express classical datasets through local gradients. We demonstrate the efficacy and scalability of the methods by performing experiments on benchmark datasets using up to 50 qubits of an IBM Heron quantum processor.
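The core construction in the abstract — encoding classical data as the ground state of a trained k-local Hamiltonian — can be sketched classically at toy scale. The snippet below builds a dense 2-local Hamiltonian with random stand-in couplings (in the paper these would be trained via local gradients to express a dataset) and finds its ground state by exact diagonalization, which stands in for the sample-based Krylov quantum diagonalization run on hardware. This is an illustrative sketch only; nothing here reproduces the paper's method at scale.

```python
import numpy as np

# Single-qubit operators
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])

def kron_chain(ops):
    """Tensor product of a list of single-qubit operators."""
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def data_hamiltonian(J, h, g):
    """Dense 2-local Hamiltonian
    H = sum_{i<j} J_ij Z_i Z_j + sum_i h_i Z_i + g sum_i X_i.
    The couplings J, h are stand-ins for trained parameters."""
    n = len(h)
    dim = 2 ** n
    H = np.zeros((dim, dim))
    for i in range(n):
        H += h[i] * kron_chain([Z if k == i else I2 for k in range(n)])
        H += g * kron_chain([X if k == i else I2 for k in range(n)])
        for j in range(i + 1, n):
            H += J[i, j] * kron_chain([Z if k in (i, j) else I2 for k in range(n)])
    return H

rng = np.random.default_rng(0)
n = 4
J = rng.normal(size=(n, n))
h = rng.normal(size=n)
H = data_hamiltonian(J, h, g=0.7)

# Exact diagonalization stands in for sample-based Krylov quantum
# diagonalization, which approximates the same low-energy states on hardware.
evals, evecs = np.linalg.eigh(H)
ground_energy, ground_state = evals[0], evecs[:, 0]
```

Dense matrices cap this sketch at ~12 qubits on a laptop; the point of the quantum method is precisely to reach regimes (50 qubits in the paper's experiments) where this classical route is infeasible.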


AMD unveils Ryzen AI Halo, an uber-powerful mini PC for AI

PCWorld

PCWorld reports that AMD unveiled the Ryzen AI Halo, a powerful mini PC developer platform designed for running large AI models locally. The device features AMD's Ryzen AI Max+ processor with 128GB of unified memory, multiple operating system support, and advanced ROCm software for AI development. Expected to launch in the second quarter of this year, the Ryzen AI Halo represents AMD's strategic push into AI development tools. AMD may not be selling PCs, but it's providing a reference design to AI developers based on its Ryzen AI Max+ processor. Known as the Ryzen AI Halo, the small PC is officially an AI developer platform capable of running models with up to 200 billion parameters locally, according to AMD chief executive Lisa Su, who introduced the device in her opening keynote at the CES 2026 show in Las Vegas. Inside is 128GB of unified memory, Su said. It's not necessarily a Windows device; Su said that it will run "multiple operating systems natively," as well as open-source development tools and hundreds of AI models. It will ship with the most advanced version of AMD's ROCm software. "For all of you who are wondering, Halo is launching in the second quarter of this year, and we can't wait for folks to get their hands on them," Su said.
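A quick back-of-the-envelope calculation (mine, not AMD's) shows why 128GB of unified memory is plausibly enough for a ~200-billion-parameter model: at FP16 the weights alone would need ~400GB, but at 4-bit quantization they shrink to ~100GB, leaving headroom for activations and runtime overhead.

```python
# Rough weight-storage arithmetic; ignores activations, KV cache, and
# quantization metadata, so real requirements run somewhat higher.
def model_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

fp16_gb = model_size_gb(200, 16)  # 400.0 GB: would not fit in 128 GB
int4_gb = model_size_gb(200, 4)   # 100.0 GB: fits with room to spare
```

The unified-memory design matters here: CPU, GPU, and NPU share the same 128GB pool, so the whole quantized model can stay resident instead of being split across a small discrete-GPU VRAM and system RAM.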


AMD adds Ryzen 7 9850X3D, new AI Max chips to boost PC punch

PCWorld

When you purchase through links in our articles, we may earn a small commission. AMD is doubling down on some of its most successful processor lines. AMD is using CES 2026 as a launch vehicle to add several of its popular Ryzen AI Max+ and Ryzen 9000 X3D processors to its stack, but the real story might be the performance improvements AMD is claiming as part of its updated ROCm software instead. AMD is adding two processors to its Ryzen AI Max+ series: the Ryzen AI Max+ 392, and the Ryzen AI Max+ 388. It is also tucking the Ryzen 7 9850X3D inside its matrix of Ryzen 9000 X3D gaming processors, hopefully adding a more affordable alternative.


ComBack: A Versatile Dataset for Enhancing Compiler Backend Development Efficiency

Neural Information Processing Systems

Compiler backends are tasked with generating executable machine code for processors. With the proliferation of diverse processors, it is imperative for programmers to tailor specific compiler backends to accommodate each one. Meanwhile, compiler backend development is a laborious and time-consuming task, lacking effective automation methods. Although language models have demonstrated strong abilities in code-related tasks, the lack of appropriate datasets for compiler backend development limits the application of language models in this field. In this paper, we introduce ComBack, the first public dataset designed for improving compiler backend development capabilities of language models. ComBack includes 178 backends for mainstream compilers and three tasks including statement-level completion, next-statement suggestion and code generation, representing common development scenarios. We conducted experiments by fine-tuning six pre-trained language models with ComBack, demonstrating its effectiveness in enhancing model accuracy across the three tasks. We further evaluated the top-performing model (CodeT5+) across the three tasks for new targets, comparing its accuracy with conventional methods (Fork-Flow), ChatGPT-3.5-Turbo,
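The three task types named in the abstract can be illustrated by how one might slice a single backend function into training examples. The snippet below is a hypothetical formatting sketch — the dataset's actual schema, prompts, and code sources may differ — using a toy LLVM-style backend hook as the raw material.

```python
# Hypothetical illustration of ComBack's three task types; not the
# dataset's actual schema.
def make_examples(code_lines, idx, description):
    """Slice one backend function into the three task formats."""
    context = "\n".join(code_lines[:idx])
    target = code_lines[idx]
    partial = target[: len(target) // 2]
    return {
        # Complete the second half of a partially written statement
        "statement_completion": {"input": context + "\n" + partial,
                                 "target": target[len(partial):]},
        # Predict the entire next statement from the preceding context
        "next_statement": {"input": context, "target": target},
        # Generate the whole function from a natural-language description
        "code_generation": {"input": description,
                            "target": "\n".join(code_lines)},
    }

# Toy LLVM-style backend hook (illustrative, not taken from the dataset)
snippet = [
    "bool RISCVTargetLowering::isLegalICmpImmediate(int64_t Imm) const {",
    "  return isInt<12>(Imm);",
    "}",
]
ex = make_examples(snippet, 1,
                   "Check whether an icmp immediate is legal for RISCV.")
```

Framed this way, the three tasks form a difficulty ladder — from finishing a token span, to writing a full statement, to synthesizing a whole target-specific function — which is why per-task accuracy comparisons against Fork-Flow and ChatGPT-3.5-Turbo are informative.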


Did Microsoft do anything right in 2025? Wins, fails, and WTF moments

PCWorld

When you purchase through links in our articles, we may earn a small commission. From AI overload to price hikes, little that Microsoft did during 2025 was worth applauding. Can you name one single, solitary success Microsoft had in 2025? In the years that PCWorld has catalogued Microsoft's wins, failures, and head-scratching "WTF" moments, there's always been a mix of high points and lows.


HH-PIM: Dynamic Optimization of Power and Performance with Heterogeneous-Hybrid PIM for Edge AI Devices

Jeon, Sangmin, Lee, Kangju, Lee, Kyeongwon, Lee, Woojoo

arXiv.org Artificial Intelligence

Processing-in-Memory (PIM) architectures offer promising solutions for efficiently handling AI applications in energy-constrained edge environments. While traditional PIM designs enhance performance and energy efficiency by reducing data movement between memory and processing units, they are limited in edge devices due to continuous power demands and the storage requirements of large neural network weights in SRAM and DRAM. Hybrid PIM architectures, incorporating nonvolatile memories like MRAM and ReRAM, mitigate these limitations but struggle with a mismatch between fixed computing resources and dynamically changing inference workloads. To address these challenges, this study introduces a Heterogeneous-Hybrid PIM (HH-PIM) architecture, comprising high-performance MRAM-SRAM PIM modules and low-power MRAM-SRAM PIM modules. We further propose a data placement optimization algorithm that dynamically allocates data based on computational demand, maximizing energy efficiency. FPGA prototyping and power simulations with processors featuring HH-PIM and other PIM types demonstrate that the proposed HH-PIM achieves up to 60.43% average energy savings over conventional PIMs while meeting application latency requirements. These results confirm HH-PIM's suitability for adaptive, energy-efficient AI processing in edge devices. With the advent of artificial intelligence (AI), real-world applications are rapidly expanding, fueling a trend to embed AI capabilities into IoT devices across diverse fields. However, traditional server-centric data processing, such as cloud computing, faces significant energy and latency challenges due to processing and communication overloads.
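The demand-aware placement idea can be sketched with a simple greedy heuristic: start with every layer on the low-power (LP) modules for minimum energy, then promote the layers with the best latency-saved-per-extra-energy ratio to the high-performance (HP) modules until the latency budget is met. This is my illustrative stand-in for the paper's optimization algorithm, which is more sophisticated; all names and numbers below are invented.

```python
# Greedy demand-aware data placement between HP and LP PIM modules
# (illustrative sketch, not the paper's algorithm).
def place(layers, hp_capacity, latency_budget):
    """layers: list of (name, size, hp_latency, lp_latency, hp_energy, lp_energy).
    Returns a placement dict and the resulting total latency."""
    placement = {name: "LP" for name, *_ in layers}
    total_latency = sum(l[3] for l in layers)  # all-LP starting point
    used_hp = 0
    # Best latency improvement per unit of extra energy first
    candidates = sorted(
        layers,
        key=lambda l: (l[3] - l[2]) / max(l[4] - l[5], 1e-9),
        reverse=True,
    )
    for name, size, hp_lat, lp_lat, hp_e, lp_e in candidates:
        if total_latency <= latency_budget:
            break  # budget already met; stay on low-power modules
        if used_hp + size <= hp_capacity:
            placement[name] = "HP"
            used_hp += size
            total_latency += hp_lat - lp_lat
    return placement, total_latency

# Toy two-layer workload (invented numbers)
layers = [
    ("conv1", 4, 1.0, 3.0, 5.0, 2.0),
    ("fc",    2, 0.5, 2.0, 3.0, 1.0),
]
placement, latency = place(layers, hp_capacity=4, latency_budget=3.5)
```

On the toy workload, promoting only the fully connected layer already meets the 3.5-unit budget, so the convolution stays on the low-power modules — the kind of adaptive split a fixed hybrid PIM cannot make as workloads change.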