AI is changing PC graphics. Microsoft wants DirectX ready

PCWorld

PCWorld reports that Microsoft is embedding AI into DirectX through two new tools, DirectX Linear Algebra and the DirectX Compute Graph Compiler, both announced Thursday with previews of each technology due later this year. Major chip makers AMD, Intel, and Nvidia support the initiative, which could let integrated GPUs compete with discrete graphics cards in gaming performance. The technologies enable dynamic shader creation, neural texture compression, and advanced upscaling, potentially bringing high-end graphics features like path tracing to a much wider range of hardware. As games are increasingly rendered with the help of AI, Microsoft is building AI into the way future graphics chips will render them.


Orbital AI data centers could work, but they might ruin Earth in the process

Engadget

A single collision could cause a cascading effect in orbit. Elon Musk's plan to launch millions of AI satellites could be disastrous for the planet. At the start of the month, Musk announced that two of his companies -- SpaceX and xAI -- were merging and would jointly launch a constellation of 1 million satellites to operate as orbital data centers. Musk's reputation might suggest otherwise, but according to experts, such a plan isn't a complete fantasy. However, if executed at the scale suggested, some of them believe it would have devastating effects on the environment and on the sustainability of low Earth orbit.


Nvidia's Deal With Meta Signals a New Era in Computing Power

WIRED

The days of tech giants buying up discrete chips are over. AI companies now need GPUs, CPUs, and everything in between. Ask anyone what Nvidia makes, and they're likely to first say "GPUs." For decades, the chipmaker has been defined by advanced parallel computing, and the emergence of generative AI and the resulting surge in demand for GPUs has been a boon for the company. But Nvidia's recent moves signal that it's looking to lock in more customers at the less compute-intensive end of the AI market -- customers who don't necessarily need the beefiest, most powerful GPUs to train AI models, but instead are looking for the most efficient ways to run agentic AI software.


Causes and Effects of Unanticipated Numerical Deviations in Neural Network Inference Frameworks

Neural Information Processing Systems

Hardware-specific optimizations in machine learning (ML) frameworks can cause numerical deviations of inference results. Quite surprisingly, despite using a fixed trained model and fixed input data, inference results are not consistent across platforms, and sometimes not even deterministic on the same platform. We study the causes of these numerical deviations for convolutional neural networks (CNN) on realistic end-to-end inference pipelines and in isolated experiments. Results from 75 distinct platforms suggest that the main causes of deviations on CPUs are differences in SIMD use, and the selection of convolution algorithms at runtime on GPUs. We link the causes and propagation effects to properties of the ML model and evaluate potential mitigations. We make our research code publicly available.
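The SIMD effect the abstract describes can be illustrated with a small sketch (this is illustrative code, not the paper's published artifact): floating-point addition is not associative, so accumulating the same float32 values in a different order -- as different SIMD widths or runtime-selected convolution algorithms effectively do -- can produce slightly different results from identical model weights and inputs.

```python
import numpy as np

# Illustrative sketch, not the paper's code: the same float32 values
# summed in two different orders need not give bitwise-identical results,
# because floating-point addition is not associative.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000).astype(np.float32)

# Strictly sequential accumulation (one running sum).
s_seq = np.float32(0.0)
for v in x:
    s_seq += v

# "Vectorized" accumulation: 8 independent partial sums, mimicking an
# 8-lane SIMD unit, combined only at the end.
s_simd = x.reshape(-1, 8).sum(axis=0).sum()

# The two totals agree only approximately; the tiny discrepancy is the
# kind of deviation the paper traces through inference pipelines.
print(float(s_seq), float(s_simd), abs(float(s_seq) - float(s_simd)))
```

In a deep network these per-layer discrepancies can propagate and, in rare borderline cases, flip a predicted class -- which is why the same trained model may behave differently across platforms.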


Birder: Communication-Efficient 1-bit Adaptive Optimizer for Practical Distributed DNN Training

Neural Information Processing Systems

From a system-level perspective, the design ethos of a system-efficient communication-compression algorithm is twofold: compression and decompression must be computationally light, adding little time to each training step, and the compressed representation must remain friendly to efficient collective communication primitives such as all-reduce.
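The generic 1-bit compression pattern behind optimizers of this kind can be sketched as follows. This is a hedged illustration of the general technique, not Birder's actual algorithm: each worker transmits only the sign of every gradient entry plus a single scale factor, and keeps the quantization residual locally (error feedback) so the lost information is folded into later steps.

```python
import numpy as np

def one_bit_compress(grad, error):
    """Illustrative 1-bit compression with error feedback (not Birder itself).

    Returns the sign vector (1 bit/element on the wire), one float scale,
    and the residual to carry into the next step.
    """
    corrected = grad + error            # fold in last step's residual
    scale = np.abs(corrected).mean()    # single scalar per tensor
    signs = np.sign(corrected)          # compressed payload: signs only
    decompressed = scale * signs        # what the receiver reconstructs
    new_error = corrected - decompressed
    return signs, scale, new_error

rng = np.random.default_rng(1)
g = rng.standard_normal(8).astype(np.float32)
err = np.zeros_like(g)
signs, scale, err = one_bit_compress(g, err)
```

Sign vectors are exactly the kind of payload that maps well onto efficient collectives: every worker contributes a fixed 1 bit per element, so an all-reduce over the packed signs moves roughly 32x less data than exchanging full float32 gradients.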






Appendix

Neural Information Processing Systems

Algorithm M-Adam
Require: α ∈ ℝ>0: stepsize
Require: β1, β2 ∈ [0, 1): exponential decay rates for the first moment estimate and its square
Require: ε: a fixed small constant
Require: L(θ): a stochastic loss function