Collaborating Authors: Huawei


OpenLane-V2: Supplementary Material A Overview

Neural Information Processing Systems

Our supplementary material includes the author statement, licensing information, and implementation details of the benchmark results for reproducibility. We bear all responsibility for licensing, distributing, and maintaining our dataset. The proposed dataset is released under the CC BY-NC-SA 4.0 license. The dataset comprises various types of annotations, including instances and topology relationships. The accompanying datasheet answers questions such as: For what purpose was the dataset created? Who created the dataset (e.g., which team or research group) and on behalf of which entity? Who funded the creation of the dataset?


Trump's reprieve for Nvidia's H200 spurred by Huawei's AI gains

The Japan Times

Nvidia CEO Jensen Huang speaks alongside U.S. President Donald Trump at the White House in Washington on April 30. U.S. President Donald Trump decided to let Nvidia sell its H200 artificial intelligence chips to China after concluding the move carried a lower security risk because the company's Chinese archrival, Huawei Technologies, already offers AI systems with comparable performance, according to a person familiar with the deliberations. Administration officials who weighed whether to clear Nvidia's H200 had considered multiple possible scenarios, factoring in the views of national security hawks in Washington, said the person. Options ranged from exporting zero AI chips to China to allowing exports of everything to flood the Chinese market and overwhelm Huawei. Ultimately the policy backed by Trump called for clearing H200s to China while holding back the latest Nvidia chips for American customers, the person said.




Amber Pruner: Leveraging N:M Activation Sparsity for Efficient Prefill in Large Language Models

An, Tai, Cai, Ruwu, Zhang, Yanzhe, Liu, Yang, Chen, Hao, Xie, Pengcheng, Chang, Sheng, Yao, Yiwu, Wang, Gongyi

arXiv.org Artificial Intelligence

In the era of large language models (LLMs), N:M sparsity has emerged as a structured compression technique critical for accelerating inference. While prior work has primarily focused on weight sparsity, it often suffers from significant accuracy degradation. Activation sparsity, though promising, is typically training-dependent and faces challenges in generalization. To address these limitations, we introduce Amber Pruner, a training-free N:M activation sparsity method designed specifically for the prefill stage, targeting the acceleration of linear projection layers in LLMs. Extensive experiments across multiple models and sparsity ratios (2:4, 4:8, and 8:16) demonstrate that Amber Pruner can effectively sparsify and accelerate more than 55% of linear computations without requiring model retraining. To further enhance generality and efficiency, we propose Outstanding-sparse, a unified framework that integrates Amber Pruner with post-training W8A8 quantization. Our approach preserves strong performance across a range of downstream tasks, with notable advantages in generative tasks. This work pioneers a new frontier in activation sparsity, providing foundational insights that are poised to guide the co-evolution of algorithms and architectures in the design of next-generation AI systems.
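The core operation the abstract describes — structured N:M activation sparsity, where only N values are kept in every group of M consecutive activations — can be illustrated with a minimal sketch. This is not the authors' implementation; the function name and the magnitude-based selection rule are assumptions, shown here as a training-free mask applied to an activation tensor:

```python
import numpy as np

def nm_activation_sparsity(x, n=2, m=4):
    """Keep the n largest-magnitude values in each group of m consecutive
    activations and zero out the rest (training-free structured sparsity).
    Assumes the last dimension of x is divisible by m."""
    orig_shape = x.shape
    groups = x.reshape(-1, m)
    # Indices of the (m - n) smallest-magnitude entries in every group.
    drop = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop, False, axis=1)
    return np.where(mask, groups, 0.0).reshape(orig_shape)

x = np.array([[0.1, -2.0, 0.3, 1.5, -0.2, 0.05, 4.0, -1.0]])
# Each group of 4 retains only its 2 largest-magnitude entries.
print(nm_activation_sparsity(x, n=2, m=4))
```

In a real deployment the mask would feed a sparse matrix-multiply kernel in the linear projection layers; the 2:4, 4:8, and 8:16 ratios from the abstract correspond to different (n, m) choices here.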


Intrinsic Fingerprint of LLMs: Continue Training is NOT All You Need to Steal A Model!

Yoon, Do-hyeon, Chun, Minsoo, Allen, Thomas, Müller, Hans, Wang, Min, Sharma, Rajesh

arXiv.org Artificial Intelligence

Large language models (LLMs) face significant copyright and intellectual property challenges as the cost of training increases and model reuse becomes prevalent. While watermarking techniques have been proposed to protect model ownership, they may not be robust to continued training and development, posing serious threats to model attribution and copyright protection. This work introduces a simple yet effective approach for robust LLM fingerprinting based on intrinsic model characteristics. We discover that the standard deviation distributions of attention parameter matrices across different layers exhibit distinctive patterns that remain stable even after extensive continued training. These parameter distribution signatures serve as robust fingerprints that can reliably identify model lineage and detect potential copyright infringement. Our experimental validation across multiple model families demonstrates the effectiveness of our method for model authentication. Notably, our investigation uncovers evidence that the recently released Pangu Pro MoE model from Huawei is derived from the Qwen-2.5 14B model through upcycling techniques rather than training from scratch, highlighting potential cases of model plagiarism, copyright violation, and information fabrication. These findings underscore the critical importance of developing robust fingerprinting methods for protecting intellectual property in large-scale model development and emphasize that deliberate continued training alone is insufficient to completely obscure model origins.
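The fingerprinting idea — per-layer standard deviations of attention parameter matrices as a lineage signature that survives continued training — can be sketched on synthetic weights. Everything below is a toy assumption (function names, layer sizes, noise scales), not the paper's procedure; continued training is modeled as a small additive perturbation:

```python
import numpy as np

def layer_std_fingerprint(attn_weights):
    """Fingerprint: the per-layer standard deviations of a model's
    attention parameter matrices (toy version of the paper's idea)."""
    return np.array([w.std() for w in attn_weights])

def fingerprint_distance(fp_a, fp_b):
    """Mean absolute difference between fingerprints; a small value
    suggests shared lineage."""
    return float(np.abs(fp_a - fp_b).mean())

rng = np.random.default_rng(0)
# A synthetic 4-layer "base model" with layer-dependent weight scales.
base = [rng.normal(0.0, 0.02 * (i + 1), (64, 64)) for i in range(4)]
# "Continued training" modeled as a tiny perturbation of the base weights.
tuned = [w + rng.normal(0.0, 1e-4, w.shape) for w in base]
# An unrelated model trained from scratch.
other = [rng.normal(0.0, 0.05, (64, 64)) for _ in range(4)]

d_same = fingerprint_distance(layer_std_fingerprint(base), layer_std_fingerprint(tuned))
d_diff = fingerprint_distance(layer_std_fingerprint(base), layer_std_fingerprint(other))
assert d_same < d_diff  # the signature survives the perturbation
```

The point of the sketch is the stability property: adding small noise barely shifts a layer's standard deviation, so the per-layer profile stays close to the original while an independently trained model lands elsewhere.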


TSMC could face $1 billion or more fine from U.S. probe, sources say

The Japan Times

Taiwan Semiconductor Manufacturing Co. (TSMC) could face a penalty of $1 billion or more to settle a U.S. export control investigation over a chip it made that ended up inside a Huawei artificial intelligence processor, according to two people familiar with the matter. The U.S. Department of Commerce has been investigating the world's biggest contract chipmaker's work for China-based Sophgo, the sources said. The design company's TSMC-made chip matched one found in Huawei's high-end Ascend 910B artificial intelligence processor, according to the people, who requested anonymity because they were not authorized to speak publicly about the matter. Huawei -- a company at the center of China's AI chip ambitions that has been accused of sanctions busting and trade secret theft -- is on a U.S. trade list that restricts it from receiving goods made with U.S. technology. TSMC made nearly 3 million chips in recent years that matched the design ordered by Sophgo and likely ended up with Huawei, according to Lennart Heim, a researcher at RAND's Technology and Security Policy Center in Arlington, Virginia, who is tracking Chinese developments in AI.


Late Breaking Results: The Art of Beating the Odds with Predictor-Guided Random Design Space Exploration

Arnold, Felix, Bouvier, Maxence, Amaudruz, Ryan, Andri, Renzo, Cavigelli, Lukas

arXiv.org Artificial Intelligence

This work introduces an innovative method for improving combinational digital circuits through random exploration in MIG-based synthesis. High-quality circuits are crucial for performance, power, and cost, making this a critical area of active research. Our approach incorporates next-state prediction and iterative selection, significantly accelerating the synthesis process. This novel method achieves up to 14x synthesis speedup and up to 20.94% better MIG minimization on the EPFL Combinational Benchmark Suite compared to state-of-the-art techniques. We further explore various predictor models and show that increased prediction accuracy does not guarantee an equivalent increase in synthesis quality of results or speedup, observing that randomness remains a desirable factor.
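The abstract's loop — draw random candidate next states, rank them with a cheap predictor, and fully evaluate only the predicted best — can be sketched generically. This is a minimal sketch under assumed names; the real system operates on MIG rewrites with a learned next-state predictor, while here a toy integer "circuit size" stands in for the expensive cost function:

```python
import random

def predictor_guided_search(state, mutate, predictor, evaluate,
                            rounds=20, samples=8):
    """Each round: sample random candidate next states, rank them with a
    cheap predictor, and run the expensive evaluation only on the
    predicted best, keeping it if it improves the incumbent."""
    best, best_cost = state, evaluate(state)
    for _ in range(rounds):
        candidates = [mutate(best) for _ in range(samples)]
        pick = min(candidates, key=predictor)   # cheap ranking
        cost = evaluate(pick)                   # expensive true cost
        if cost < best_cost:
            best, best_cost = pick, cost
    return best, best_cost

# Toy stand-ins: minimize an integer "circuit size" via random moves.
random.seed(1)
mutate = lambda s: max(1, s + random.randint(-5, 3))
noisy_predictor = lambda s: s + random.random()  # imperfect but cheap
best, cost = predictor_guided_search(100, mutate, noisy_predictor, lambda s: s)
```

The noisy predictor mirrors the abstract's observation: even an imperfect ranking prunes most expensive evaluations, and the residual randomness in candidate generation remains useful for escaping poor local choices.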


BYD's Free Self-Driving Tech Might Not Be Such a Boon After All

WIRED

Not only has China's largest EV maker BYD unveiled good, better, and best tiers for its advanced driver-assistance system (ADAS), it announced last week that the tech--marketed somewhat immodestly as "God's Eye"--will now be fitted as standard to 21 of BYD's 30 cars split across four brands. Even the $9,500 Seagull hatchback, the cheapest of BYD's EVs, will ship with the base level of God's Eye at no extra cost, while the $233,500 Yangwang U9 electric supercar will get the top-tier iteration. However, BYD's ADAS system could be as misleadingly named as Tesla's Full Self-Driving (FSD). Including ADAS for free will no doubt rile BYD's smaller rivals in China's innovative but cutthroat auto market. Comparatively low-tech Toyota, VW, and Nissan may weaken further, and Tesla--which has yet to gain permission for FSD in China--could also struggle.


Revealed: The best inventions of 2024 - from Tesla's futuristic Robotaxi to Huawei's tri-fold smartphone

Daily Mail - Science & tech

From the steam engine in 1712 to the first ever iPhone in 2007, each year sees the birth of ever more incredible inventions. And after a year of mind-boggling tech, it's clear that 2024 has been no exception to the rule. The last 12 months have seen brilliant minds from around the world creating some mind-blowing and potentially world-changing breakthroughs. With 2024 almost at its end, MailOnline has taken a look back at some of this year's coolest gadgets and most exciting innovations. From an AI for designing proteins to a real-life pair of Wallace and Gromit's 'techno trousers', these inventions are a glimpse of how we all might be living in the future. And when it comes to big breakthroughs, this year has been a resounding success for billionaire Elon Musk.