redistribution
Physically-Based Simulation of Automotive LiDAR
Dudzik, L., Roschani, M., Sielemann, A., Trampert, K., Ziehn, J., Beyerer, J., Neumann, C.
Abstract--We present an analytic model for simulating automotive time-of-flight (ToF) LiDAR that includes blooming, echo pulse width, and ambient light, along with steps to determine model parameters systematically through optical laboratory measurements. The model uses physically based rendering (PBR) in the near-infrared domain. It assumes single-bounce reflections and retroreflections over rasterized rendered images from shading or ray tracing, including light emitted from the sensor as well as stray light from other, non-correlated sources such as sunlight. Beams from the sensor and the sensitivity of the receiving diodes are modeled with flexible beam steering patterns and with non-vanishing diameter. Different (all non-real-time) computational approaches can be chosen based on system properties, computing capabilities, and desired output properties. Model parameters include system-specific properties, namely the physical spread of the LiDAR beam combined with the sensitivity of the receiving diode; the intensity of the emitted light; the conversion between the intensity of reflected light and the echo pulse width; and scenario parameters such as environment lighting, positioning, and surface properties of the target(s) in the relevant infrared domain. System-specific properties of the model are determined from laboratory measurements of the photometric luminance on different target surfaces aligned with a goniometer at 0.01° resolution, which marks the best available resolution for measuring the beam pattern. The approach is calibrated for and tested on two automotive LiDAR systems, the Valeo Scala Gen. 2 and the Blickfeld Cube 1. Both systems differ notably in their properties and available interfaces, but the relevant model parameters could be extracted successfully.
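The single-bounce assumption can be illustrated with a minimal sketch of the LiDAR range equation for a Lambertian target. Everything here is an illustrative assumption, not the paper's calibrated model: the function name, the Lambertian cosine reflectance, and the parameter values.

```python
import math

def received_power(p_tx, reflectivity, incidence_deg, range_m, aperture_m2):
    """Single-bounce received power for a Lambertian target (illustrative).

    P_rx = P_tx * rho * cos(theta) * A_rx / (pi * R^2)
    """
    cos_t = math.cos(math.radians(incidence_deg))
    return p_tx * reflectivity * cos_t * aperture_m2 / (math.pi * range_m ** 2)

# Example: 10 % reflectivity target at 50 m, normal incidence, 1 cm^2 aperture
p = received_power(1.0, 0.10, 0.0, 50.0, 1e-4)
```

The inverse-square falloff and the cosine term are what make blooming and echo pulse width range- and pose-dependent in any model of this family.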
- Europe > Germany > Baden-Württemberg > Karlsruhe Region > Karlsruhe (0.04)
- Europe > Netherlands > North Holland > Amsterdam (0.04)
- Transportation > Ground > Road (0.68)
- Automobiles & Trucks (0.68)
- Information Technology > Robotics & Automation (0.46)
- Aerospace & Defense > Aircraft (0.46)
Hey AI, Generate Me a Hardware Code! Agentic AI-based Hardware Design & Verification
Gadde, Deepak Narayan, Radhakrishna, Keerthan Kopparam, Viswambharan, Vaisakh Naduvodi, Kumar, Aman, Lettnin, Djones, Kunz, Wolfgang, Simon, Sebastian
Modern Integrated Circuits (ICs) are becoming increasingly complex, and so is their development process. Hardware design verification entails a methodical and disciplined approach to the planning, development, execution, and sign-off of functionally correct hardware designs. This tedious process requires significant effort and time to ensure a bug-free tape-out. The field of Natural Language Processing has undergone a significant transformation with the advent of Large Language Models (LLMs). These powerful models, often referred to as Generative AI (GenAI), have revolutionized how machines understand and generate human language, enabling unprecedented advancements in a wide array of applications, including hardware design verification. This paper presents an agentic AI-based approach to hardware design verification, which empowers AI agents, in collaboration with Human-in-the-Loop (HITL) intervention, to engage in a more dynamic, iterative, and self-reflective process, ultimately performing end-to-end hardware design and verification. This methodology is evaluated on five open-source designs, achieving over 95% coverage with reduced verification time while demonstrating superior performance, adaptability, and configurability.
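The generate–simulate–reflect loop with a HITL intervention point can be sketched abstractly. Every name below (llm, simulate, ask_human, the result keys) is a hypothetical stand-in for the paper's toolchain, not its actual interface.

```python
def verify_with_agent(spec, llm, simulate, coverage_goal=0.95, max_iters=5,
                      ask_human=None):
    """Illustrative agentic design-and-verify loop (all names hypothetical)."""
    design = llm(f"Write RTL for: {spec}")
    tb = llm(f"Write a testbench for: {spec}")
    result = None
    for _ in range(max_iters):
        result = simulate(design, tb)  # assumed to return {'coverage', 'errors'}
        if not result["errors"] and result["coverage"] >= coverage_goal:
            break                      # sign-off criterion met
        feedback = llm(f"Reflect on failures: {result}")  # self-reflection step
        if ask_human is not None:
            feedback = ask_human(feedback)  # HITL intervention point
        design = llm(f"Revise design using feedback: {feedback}")
    return design, tb, result
```

The key structural point is that simulation results feed a reflection prompt, which a human may veto or amend before the next revision.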
- Asia > India (0.05)
- South America > Brazil > Amazonas > Manaus (0.04)
- Europe > Germany > Rhineland-Palatinate > Landau (0.04)
- Europe > Germany > Rhineland-Palatinate > Kaiserslautern (0.04)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Agents (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.34)
Supplementary Material: Redistribution of Weights and Activations for AdderNet Quantization
A.3 Analysis on the Ratio of Discarded Outliers
As discussed in the subsection on outlier clamping for activations, Table 3 analyzes the ratio of discarded outliers in activations; comparisons with additional CNN quantization methods are also supplemented there. In Figure 1, we visualize the histogram of the weights and activations in AdderNet. Our AdderNet quantization method has one major limitation: as the number of bits decreases, the accuracy loss of the quantized model increases. As for societal impacts, the proposed quantization method can further reduce the energy consumption of AdderNet with a lower quantization accuracy loss.
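The outlier-clamping idea discussed above can be sketched generically: clip activations to a symmetric range, then quantize uniformly. The function and its defaults are illustrative assumptions, not the paper's exact clamping criterion.

```python
import numpy as np

def clamp_quantize(x, clip, n_bits=8):
    """Clamp outliers to [-clip, clip], then uniform symmetric quantization.

    A sketch of the general outlier-clamping idea; returns dequantized
    values so the rounding error can be inspected directly.
    """
    x_c = np.clip(x, -clip, clip)              # discard outliers beyond the range
    scale = clip / (2 ** (n_bits - 1) - 1)     # one step of the integer grid
    q = np.round(x_c / scale)                  # integer codes
    return q * scale                           # dequantized approximation
```

Choosing `clip` trades the fraction of discarded outliers against resolution inside the range, which is exactly the ratio the supplementary table studies.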
CAMP-HiVe: Cyclic Pair Merging based Efficient DNN Pruning with Hessian-Vector Approximation for Resource-Constrained Systems
Uddin, Mohammad Helal, Ghanta, Sai Krishna, Seymour, Liam, Baidya, Sabur
Deep learning algorithms are becoming an essential component of many artificial intelligence (AI) driven applications, many of which run on resource- and energy-constrained systems. Although different techniques for compressing neural network models have been proposed for efficient deployment, neural pruning is one of the fastest and most effective, providing a high compression gain at minimal cost. To harness performance gains with respect to model complexity, we propose a novel neural network pruning approach utilizing Hessian-vector products, which approximate crucial curvature information in the loss function while significantly reducing computation demands. By employing a power iteration method, our algorithm effectively identifies and preserves the essential information, ensuring a balanced trade-off between model accuracy and computational efficiency. Herein, we introduce CAMP-HiVe, a cyclic pair merging-based pruning with Hessian-vector approximation that iteratively consolidates weight pairs, combining significant and less significant weights, thus effectively streamlining the model while preserving its performance. This dynamic, adaptive framework allows real-time adjustment of weight significance, ensuring that only the most critical parameters are retained. Our experimental results demonstrate that the proposed method achieves significant reductions in computational requirements while maintaining high performance across different neural network architectures, e.g., ResNet18, ResNet56, and MobileNetv2, on standard benchmark datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet, and that it outperforms existing state-of-the-art neural pruning methods.
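The Hessian-vector-product-plus-power-iteration idea can be demonstrated on a toy quadratic loss. The names `hvp` and `top_curvature` are illustrative, and the finite-difference HVP stands in for the autograd-based products a real implementation would use; the point is that the top curvature direction is found without ever forming the Hessian.

```python
import numpy as np

def hvp(grad_fn, w, v, eps=1e-5):
    """Finite-difference Hessian-vector product: Hv ~ (g(w + eps*v) - g(w)) / eps."""
    return (grad_fn(w + eps * v) - grad_fn(w)) / eps

def top_curvature(grad_fn, w, n_iter=50, seed=0):
    """Power iteration on the Hessian via HVPs; returns (top eigenvalue, eigenvector)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(w.shape)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        hv = hvp(grad_fn, w, v)        # one HVP per iteration, no full Hessian
        v = hv / np.linalg.norm(hv)
    return float(v @ hvp(grad_fn, w, v)), v

# Toy quadratic loss L(w) = 0.5 w^T A w, gradient A w, eigenvalues 3 and 1
A = np.array([[3.0, 0.0], [0.0, 1.0]])
lam, v = top_curvature(lambda w: A @ w, np.zeros(2))
```

For the quadratic the finite difference is exact, so `lam` converges to the largest eigenvalue, 3; pruning methods of this family use such curvature estimates to rank weight importance.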
- North America > United States > Georgia > Clarke County > Athens (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Kentucky > Jefferson County > Louisville (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.92)
- Health & Medicine > Therapeutic Area > Immunology > HIV (0.43)
Data for Inclusion: The Redistributive Power of Data Economics
While credit is often portrayed as the fuel of development, access to credit is unevenly distributed -- not merely as a function of income or collateral, but increasingly as a function of data visibility. In this context, the core hypothesis of this paper is that data, when governed ethically and reused efficiently, operates as a redistributive economic asset. The idea that being poor is more expensive is not new; it has been conceptualized as the "poverty premium" -- where low-income individuals pay higher effective prices for credit, insurance, and other services (Carrière-Swallow & Haksar, 2019). Yet what has changed is the infrastructure of decision-making: creditworthiness is increasingly determined by algorithmic systems whose inputs are not equitably distributed. Individuals with limited credit histories or fragmented digital footprints remain invisible, not due to financial incapacity, but due to informational exclusion. This asymmetry is not merely a market failure -- it is a structural inequality encoded in data regimes. We argue that positive credit data -- payment histories, utilization patterns, and account stability -- constitutes a nonrival input that, once generated, can be reused across institutions at near-zero marginal cost without diminishing its value (Jones & Tonetti, 2020; Acemoglu et al., 2023). However, the ability to extract value from such data remains highly uneven. In traditional credit markets, the absence of negative signals penalizes borrowers more than the presence of positive behavior benefits them.
- South America > Uruguay (0.06)
- North America > United States > Tennessee > Davidson County > Nashville (0.04)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- (3 more...)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Data Science > Data Mining (0.46)
Decentralized Collective World Model for Emergent Communication and Coordination
Nomura, Kentaro, Aoki, Tatsuya, Taniguchi, Tadahiro, Horii, Takato
We propose a fully decentralized multi-agent world model that enables both symbol emergence for communication and coordinated behavior through temporal extension of collective predictive coding. Unlike previous research that focuses on either communication or coordination separately, our approach achieves both simultaneously. Our method integrates world models with communication channels, enabling agents to predict environmental dynamics, estimate states from partial observations, and share critical information through bidirectional message exchange with contrastive learning for message alignment. Using a two-agent trajectory drawing task, we demonstrate that our communication-based approach outperforms non-communicative models when agents have divergent perceptual capabilities, achieving the second-best coordination after centralized models. Importantly, our decentralized approach with constraints preventing direct access to other agents' internal states facilitates the emergence of more meaningful symbol systems that accurately reflect environmental states. These findings demonstrate the effectiveness of decentralized communication for supporting coordination while developing shared representations of the environment.
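The contrastive message-alignment component can be illustrated with a generic InfoNCE-style loss over paired message embeddings. The function and temperature are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def info_nce(msgs_a, msgs_b, temperature=0.1):
    """InfoNCE over paired message embeddings; row i of each matrix is a pair.

    A generic contrastive-alignment sketch: matched messages (the diagonal)
    should be more similar than mismatched ones (off-diagonal negatives).
    """
    a = msgs_a / np.linalg.norm(msgs_a, axis=1, keepdims=True)
    b = msgs_b / np.linalg.norm(msgs_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature             # cosine similarity of every pair
    labels = np.arange(len(a))                 # positives sit on the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-log_probs[labels, labels].mean())
```

Aligned message pairs drive this loss toward zero, while shuffled pairings inflate it, which is the pressure that makes emergent symbols consistent across agents.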
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > Japan > Honshū > Kansai > Osaka Prefecture > Osaka (0.04)
- (2 more...)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Asia > Middle East > Jordan (0.04)
- (5 more...)
- Education (0.46)
- Leisure & Entertainment > Games (0.30)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.34)
Meaningless Tokens, Meaningful Gains: How Activation Shifts Enhance LLM Reasoning
Shi, Zeru, Wan, Yingjia, Wang, Zhenting, Wang, Qifan, Yang, Fan, Kreiss, Elisa, Tang, Ruixiang
Motivated by the puzzling observation that inserting long sequences of meaningless tokens before the query prompt can consistently enhance LLM reasoning performance, this work analyzes the underlying mechanism driving this phenomenon and based on these insights proposes a more principled method that allows for similar performance gains. First, we find that the improvements arise from a redistribution of activations in the LLM's MLP layers, where near zero activations become less frequent while large magnitude activations increase. This redistribution enhances the model's representational capacity by suppressing weak signals and promoting stronger, more informative ones. Building on this insight, we propose the Activation Redistribution Module (ARM), a lightweight inference-time technique that modifies activations directly without altering the input sequence. ARM adaptively identifies near-zero activations after the non-linear function and shifts them outward, implicitly reproducing the beneficial effects of meaningless tokens in a controlled manner. Extensive experiments across diverse benchmarks and model architectures clearly show that ARM consistently improves LLM performance on reasoning tasks while requiring only a few lines of simple code to implement. Our findings deliver both a clear mechanistic explanation for the unexpected benefits of meaningless tokens and a simple yet effective technique that harnesses activation redistribution to further improve LLM performance.
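The ARM mechanism described above admits a very small sketch: find post-nonlinearity activations near zero and shift them outward. The threshold and shift values are illustrative hyperparameters, not the paper's.

```python
import numpy as np

def activation_redistribution(acts, threshold=0.05, shift=0.1):
    """Shift near-zero post-nonlinearity activations outward (ARM-style sketch).

    Suppresses weak, near-zero signals by pushing them away from zero while
    leaving large-magnitude activations untouched.
    """
    out = acts.copy()
    small = np.abs(out) < threshold
    out[small] += np.sign(out[small]) * shift   # push weak signals outward
    # exactly-zero activations have sign 0 and are therefore left unchanged
    return out
```

Applied after the MLP non-linearity at inference time, this reproduces the activation histogram shift the paper attributes to prepended meaningless tokens, without touching the input sequence.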
Tweedie Regression for Video Recommendation System
Zheng, Yan, Chen, Qiang, Niu, Chenglei
Modern recommendation systems aim to increase click-through rate (CTR) for a better user experience, commonly treating ranking as a classification task focused on predicting CTR. However, there is a gap between this method and the actual objectives of businesses across different sectors. In video recommendation services, the objective of video on demand (VOD) extends beyond merely encouraging clicks to guiding users to discover their true interests, leading to increased watch time. Longer user watch time in turn leads to more revenue through increased chances of presenting online display advertisements. This research addresses the issue by redefining the problem from classification to regression, with a focus on maximizing revenue through user viewing time. Due to the lack of positive labels in recommendation, the study introduces the Tweedie loss function, which is better suited to this scenario than the traditional mean squared error loss. The paper also provides insights into how the Tweedie process captures users' diverse interests. Our offline simulation and online A/B test revealed that we can substantially enhance our core business objectives: user engagement in terms of viewing time and, consequently, revenue. Additionally, we provide a theoretical comparison between the Tweedie loss and the commonly employed viewing-time-weighted Logloss, highlighting why Tweedie regression stands out as an efficient solution. We further outline a framework for designing a loss function that focuses on a singular objective.
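The Tweedie deviance for power 1 < p < 2 (the compound Poisson-gamma regime suited to zero-inflated, nonnegative targets like watch time) can be written down directly. The function below is a generic sketch of that loss family; p is a tunable power parameter, not necessarily the paper's chosen value.

```python
import numpy as np

def tweedie_loss(y, mu, p=1.5):
    """Mean Tweedie deviance for power 1 < p < 2.

    y:  nonnegative targets (zeros allowed, e.g. unwatched videos)
    mu: positive predictions
    The deviance is zero when mu == y and grows as they diverge.
    """
    a = np.power(y, 2 - p) / ((1 - p) * (2 - p))
    b = y * np.power(mu, 1 - p) / (1 - p)
    c = np.power(mu, 2 - p) / (2 - p)
    return float(np.mean(2 * (a - b + c)))
```

Unlike mean squared error, this loss stays finite and well behaved when most targets are exactly zero, which is the motivating property for watch-time regression.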
- North America > United States > California > San Francisco County > San Francisco (0.14)
- South America > Colombia > Meta Department > Villavicencio (0.04)
- North America > United States > Virginia > Arlington County > Arlington (0.04)
- (3 more...)
- Banking & Finance > Insurance (0.68)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.46)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (0.73)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.47)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (0.46)