
Unsupervised Graph Neural Network Framework for Balanced Multipatterning in Advanced Electronic Design Automation Layouts

Helaly, Abdelrahman, Sakr, Nourhan, Madkour, Kareem, Torunoglu, Ilhami

arXiv.org Artificial Intelligence

Abstract-- Multipatterning is an essential decomposition strategy in electronic design automation (EDA) that overcomes lithographic limitations when printing dense circuit layouts. Although heuristic-based backtracking and SAT solvers can address these challenges, they often struggle to handle complex constraints and secondary objectives simultaneously. In this study, we present a hybrid workflow that casts multipatterning as a variant of a constrained graph coloring problem, with the primary objective of minimizing feature violations and a secondary objective of balancing the number of features on each mask. Our pipeline integrates two main components: (1) a GNN-based agent, trained in an unsupervised manner, that generates initial color predictions, and (2) refinement strategies (a GNN-based heuristic and simulated annealing) that together enhance solution quality and balance. Experimental evaluation on both proprietary datasets and publicly available open-source layouts demonstrates complete conflict-free decomposition and consistent color balancing. The proposed framework provides a reproducible, data-efficient, and deployable baseline for scalable layout decomposition in EDA workflows.

As semiconductor technology progresses, the demand for higher circuit densities continues to surpass the limits of conventional lithographic techniques. The ongoing reduction in feature size introduces increasingly complex manufacturing constraints, making it difficult to accurately print intricate patterns on a single mask without defects. To address these challenges, modern electronic design automation (EDA) tools and fabrication processes rely on multipatterning, a layout decomposition technique that ensures manufacturability while preserving design integrity.
In modern integrated circuit (IC) design, Design Rule Checking (DRC) is a critical step that ensures that the physical layout complies with a set of rules derived from the manufacturing constraints. These rules include the requirements on spacing, width, enclosure, and other geometric and connectivity constraints.
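The balanced-coloring objective described above can be made concrete with a deliberately simple greedy baseline: among the masks not used by a feature's conflicting neighbors, assign the least-loaded one. This is only an illustrative sketch of the problem formulation, not the paper's GNN pipeline; the `conflicts` adjacency structure and mask count are hypothetical inputs.

```python
def balanced_coloring(conflicts, num_masks=3):
    """Greedy baseline for balanced mask assignment (illustrative only,
    not the paper's GNN-based method). `conflicts` maps each feature to
    the set of features that must not share its mask."""
    color = {}
    load = [0] * num_masks  # number of features currently on each mask
    # Color high-degree features first: they are the most constrained.
    for node in sorted(conflicts, key=lambda n: -len(conflicts[n])):
        used = {color[nbr] for nbr in conflicts[node] if nbr in color}
        free = [c for c in range(num_masks) if c not in used]
        # Primary objective: avoid conflicts; secondary: balance loads.
        candidates = free if free else list(range(num_masks))
        best = min(candidates, key=lambda c: load[c])
        color[node] = best
        load[best] += 1
    return color

# Toy layout: a 4-cycle of mutually conflicting features, two masks.
g = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
assignment = balanced_coloring(g, num_masks=2)
```

On this toy graph the greedy pass yields a conflict-free, perfectly balanced two-mask split; on realistic layouts such a heuristic would serve only as a starting point for the refinement stages the paper describes.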


Reducing Instability in Synthetic Data Evaluation with a Super-Metric in MalDataGen

da Silva, Anna Luiza Gomes, Kreutz, Diego, Diniz, Angelo, Mansilha, Rodrigo, da Fonseca, Celso Nobre

arXiv.org Artificial Intelligence

Evaluating the quality of synthetic data remains a persistent challenge in the Android malware domain due to instability and the lack of standardization among existing metrics. This work introduces a Super-Metric in MalDataGen that combines multiple quality indicators into a single score. Experiments involving ten generative models and five balanced datasets demonstrate that the Super-Metric is more stable and consistent than traditional metrics, exhibiting stronger correlations with the actual performance of classifiers. Synthetic data generation has become an increasingly relevant strategy in cybersecurity [1], [2], [3], particularly as a way to mitigate the scarcity of real, complete, and high-quality datasets that limits the performance and generalization of machine learning models. Despite these advances, assessing the quality of synthetic data remains a complex and largely non-standardized methodological challenge [4], with no clear consensus on which metrics should be used or how to combine them consistently. The literature reports significant fragmentation in the application of fidelity metrics, with studies identifying more than 65 distinct indicators used independently to assess fidelity [5]. This diversity hinders model-to-model comparison, reduces experimental reproducibility, and complicates the integrated interpretation of data quality.
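The idea of combining many fragmented indicators into one score can be sketched as a weighted aggregation of normalized per-metric values. This is a hypothetical illustration of a composite metric; the actual Super-Metric formulation in MalDataGen may weight, normalize, and combine its components differently.

```python
def super_metric(metrics, weights=None):
    """Hypothetical weighted aggregation of per-metric scores in [0, 1]
    into a single composite score. Sketches the idea of a 'Super-Metric';
    MalDataGen's actual formulation may differ."""
    if weights is None:
        weights = {name: 1.0 for name in metrics}  # equal weighting
    total = sum(weights[name] for name in metrics)
    return sum(weights[name] * score for name, score in metrics.items()) / total

# Hypothetical component scores for one generative model.
scores = {"fidelity": 0.82, "diversity": 0.74, "utility": 0.90}
combined = super_metric(scores)
```

A single aggregated score makes model-to-model comparison straightforward, which is exactly the fragmentation problem the abstract identifies.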


Revisiting Fairness-aware Interactive Recommendation: Item Lifecycle as a Control Knob

Lu, Yun, Shi, Xiaoyu, Xie, Hong, Xia, Chongjun, Gong, Zhenhui, Shang, Mingsheng

arXiv.org Artificial Intelligence

This paper revisits fairness-aware interactive recommendation (e.g., TikTok, KuaiShou) by introducing a novel control knob, i.e., the lifecycle of items. We make threefold contributions. First, we conduct a comprehensive empirical analysis and uncover that item lifecycles in short-video platforms follow a compressed three-phase pattern, i.e., rapid growth, transient stability, and sharp decay, which significantly deviates from the classical four-stage model (introduction, growth, maturity, decline). Second, we introduce LHRL, a lifecycle-aware hierarchical reinforcement learning framework that dynamically harmonizes fairness and accuracy by leveraging phase-specific exposure dynamics. LHRL consists of two key components: (1) PhaseFormer, a lightweight encoder combining STL decomposition and attention mechanisms for robust phase detection; (2) a two-level HRL agent, where the high-level policy imposes phase-aware fairness constraints, and the low-level policy optimizes immediate user engagement. This decoupled optimization allows for effective reconciliation between long-term equity and short-term utility. Third, experiments on multiple real-world interactive recommendation datasets demonstrate that LHRL significantly improves both fairness and user engagement. Furthermore, the integration of lifecycle-aware rewards into existing RL-based models consistently yields performance gains, highlighting the generalizability and practical value of our approach.
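The three-phase lifecycle pattern (rapid growth, transient stability, sharp decay) can be illustrated with a toy trend-based classifier over an item's recent exposure counts. This is only a heuristic sketch; the paper's PhaseFormer component uses STL decomposition plus attention for robust phase detection, and the threshold `eps` here is an assumed parameter.

```python
def classify_phase(exposure, eps=0.05):
    """Toy lifecycle-phase labeler from recent exposure counts.
    Returns 'growth', 'stability', or 'decay' based on the relative
    change of the latest observation (illustrative heuristic only)."""
    if len(exposure) < 2:
        return "stability"
    prev, curr = exposure[-2], exposure[-1]
    change = (curr - prev) / max(prev, 1)  # relative trend
    if change > eps:
        return "growth"
    if change < -eps:
        return "decay"
    return "stability"
```

In an LHRL-style setup, a label like this would condition the high-level policy's fairness constraint, e.g. boosting exposure during growth and relaxing it during decay.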


Can Artificial Intelligence Accelerate Technological Progress? Researchers' Perspectives on AI in Manufacturing and Materials Science

Nelson, John P., Olugbade, Olajide, Shapira, Philip, Biddle, Justin B.

arXiv.org Artificial Intelligence

Applications of artificial intelligence or machine learning in research
- Modes of use: surrogate modeling for physics-based models; modeling of poorly understood phenomena; data preprocessing; large language model use
- AI/ML as research tool: production process design, monitoring, & output prediction; part design & properties prediction; materials design & properties prediction
- AI/ML as research product: generative AI design tool for consumers
- Generic research tasks: large language models for coding; large language models for literature review

Benefits of artificial intelligence or machine learning in research
- Reduction in the accuracy/cost/speed trade-off in research, especially computer modeling: reduced computation time; replacing experimentation; reducing the need for computationally intensive, physics-based models; saving research labor; exploring larger design spaces
- Addressing previously unsolvable problems: modeling poorly understood relationships between variables; identifying human-unidentifiable patterns or phenomena

Downsides of artificial intelligence or machine learning in research
- Accuracy weaknesses: poor prediction outside regions of dense, high-quality training data
- Interpretability weaknesses: bounds of accuracy can be unclear; accuracy assessment can be difficult
- Long-run scientific progress concerns: AI/ML cannot develop novel scientific theory; AI/ML may bypass opportunities to identify empirical or theoretical novelties
- Resource issues: data acquisition and cleaning is time-intensive; AI/ML models are computation- and energy-intensive to develop
- Inappropriate use issues: easy to over-trust; may be inappropriately used for problems soluble with simpler methods

Second, AI/ML models can be trained on input and output data for phenomena (e.g., complex production processes) that lack robust theoretical models, developing novel predictive capabilities in the absence of explicit, human-designed theory.
This is sometimes referred to as "phenomenological modeling," as it attempts to model phenomena in the absence of mechanistic, explanatory understanding: "[T]he first reason we choose to use AI is because we don't have a good model of what our system is. . . I get a bunch of data coming in and I have a bunch of sensor readings, you know. . . And I use the AI to map the bunch of sensor readings to the process health or process status or machine status that I have."


Labels Matter More Than Models: Quantifying the Benefit of Supervised Time Series Anomaly Detection

Zhong, Zhijie, Yu, Zhiwen, Yang, Kaixiang, Chen, C. L. Philip

arXiv.org Artificial Intelligence

Abstract--Time series anomaly detection (TSAD) is a critical data mining task often constrained by label scarcity. Consequently, current research predominantly focuses on Unsupervised Time-series Anomaly Detection (UTAD), relying on complex architectures to model normal data distributions. However, this approach often overlooks the significant performance gains available from the limited anomaly labels achievable in practical scenarios. This paper challenges the premise that architectural complexity is the optimal path for TSAD. We conduct the first methodical comparison between supervised and unsupervised paradigms and introduce STAND, a streamlined supervised baseline. Extensive experiments on five public datasets demonstrate that: (1) Labels matter more than models: under a limited labeling budget, simple supervised models significantly outperform complex state-of-the-art unsupervised methods; (2) Supervision yields higher returns: the performance gain from minimal supervision far exceeds that from architectural innovations; and (3) Practicality: STAND exhibits superior prediction consistency and anomaly localization compared to unsupervised counterparts. These findings advocate for a data-centric shift in TSAD research, emphasizing label utilization over purely algorithmic complexity. The code is publicly available at https://github.com/EmorZz1G/ST. Time series anomaly detection (TSAD) is a crucial and challenging task in time series data mining, with broad applications in fields such as industrial system monitoring, cybersecurity, and health surveillance [1, 2, 3, 4]. Due to the scarcity of anomaly samples and the high cost of labeling in TSAD, unsupervised time series anomaly detection (UTAD) methods have garnered significant attention in recent years [5, 3, 6, 7]. Typically, unsupervised methods assume that the training time series data primarily consists of normal samples.
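The "labels matter more than models" argument can be illustrated with an extremely simple supervised baseline: given any anomaly score, use the few available labels to pick the decision threshold that maximizes F1. This is a hedged sketch in the spirit of, but far simpler than, the STAND baseline; the scores and labels below are hypothetical.

```python
def fit_threshold(scores, labels):
    """Pick the score threshold maximizing F1 on labeled data -- a
    deliberately minimal supervised baseline (not the paper's STAND
    model). `labels` are 1 for anomalous points, 0 otherwise."""
    best_t, best_f1 = 0.0, -1.0
    for t in sorted(set(scores)):
        pred = [1 if s >= t else 0 for s in scores]
        tp = sum(p and y for p, y in zip(pred, labels))
        fp = sum(p and not y for p, y in zip(pred, labels))
        fn = sum((not p) and y for p, y in zip(pred, labels))
        f1 = 2 * tp / max(2 * tp + fp + fn, 1)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t

# Hypothetical anomaly scores and a small labeling budget.
scores = [0.1, 0.2, 0.15, 0.9, 0.85, 0.12]
labels = [0, 0, 0, 1, 1, 0]
threshold = fit_threshold(scores, labels)
```

Even a handful of labels turns an arbitrary scoring function into a calibrated detector, which is the kind of cheap supervised gain the abstract contrasts with architectural innovation.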


SDA: Steering-Driven Distribution Alignment for Open LLMs without Fine-Tuning

Xia, Wei, Deng, Zhi-Hong

arXiv.org Artificial Intelligence

With the rapid advancement of large language models (LLMs), their deployment in real-world applications has become increasingly widespread. LLMs are expected to deliver robust performance across diverse tasks, user preferences, and practical scenarios. However, as demands grow, ensuring that LLMs produce responses aligned with human intent remains a foundational challenge. In particular, aligning model behavior effectively and efficiently during inference, without costly retraining or extensive supervision, is both a critical requirement and a non-trivial technical endeavor. To address this challenge, we propose SDA (Steering-Driven Distribution Alignment), a training-free and model-agnostic alignment framework designed for open-source LLMs. SDA dynamically redistributes model output probabilities based on user-defined alignment instructions, enhancing alignment between model behavior and human intents without fine-tuning. The method is lightweight, resource-efficient, and compatible with a wide range of open-source LLMs. It can function independently during inference or be integrated with training-based alignment strategies. Moreover, SDA supports personalized preference alignment, enabling flexible control over model response behavior. Empirical results demonstrate that SDA consistently improves alignment performance across 8 open-source LLMs with varying scales and diverse origins, evaluated on three key alignment dimensions: helpfulness, harmlessness, and honesty (3H). Specifically, SDA achieves average gains of 64.4% in helpfulness, 30% in honesty, and 11.5% in harmlessness across the tested models, indicating its effectiveness and generalization across diverse models and application scenarios.
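Redistributing output probabilities at inference time can be sketched generically as adding a steering bias to the next-token logits before the softmax. This is a simplified illustration of inference-time distribution steering, not SDA's actual redistribution rule; the token names and bias values are hypothetical.

```python
import math

def steer_distribution(logits, bias, strength=1.0):
    """Reweight next-token probabilities by adding a steering bias to
    the logits before the softmax. Generic sketch of inference-time
    distribution steering (SDA's actual rule may differ).
    `logits` and `bias` map token -> value."""
    shifted = {t: logits[t] + strength * bias.get(t, 0.0) for t in logits}
    m = max(shifted.values())  # subtract max for numerical stability
    exp = {t: math.exp(v - m) for t, v in shifted.items()}
    z = sum(exp.values())
    return {t: e / z for t, e in exp.items()}

# Hypothetical two-token distribution steered away from "rude".
probs = steer_distribution({"polite": 1.0, "rude": 1.0}, {"rude": -2.0})
```

Because the bias is applied only at decoding time, the base model's weights are untouched, matching the training-free property the abstract emphasizes.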


Real-Time Inference for Distributed Multimodal Systems under Communication Delay Uncertainty

Croisfelt, Victor, de Souza, João Henrique Inacio, Pandey, Shashi Raj, Soret, Beatriz, Popovski, Petar

arXiv.org Artificial Intelligence

Connected cyber-physical systems perform inference based on real-time inputs from multiple data streams. Uncertain communication delays across data streams challenge the temporal flow of the inference process. State-of-the-art (SotA) non-blocking inference methods rely on a reference-modality paradigm, requiring one modality input to be fully received before processing, while depending on costly offline profiling. We propose a novel, neuro-inspired non-blocking inference paradigm that primarily employs adaptive temporal windows of integration (TWIs) to dynamically adjust to stochastic delay patterns across heterogeneous streams while relaxing the reference-modality requirement. Our communication-delay-aware framework achieves robust real-time inference with finer-grained control over the accuracy-latency tradeoff. Experiments on the audio-visual event localization (AVEL) task demonstrate superior adaptability to network dynamics compared to SotA approaches.
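One simple way to make a temporal window of integration (TWI) adaptive is to set the window deadline to an empirical quantile of recently observed per-stream delays, so the accuracy-latency tradeoff tracks network dynamics. This is a hedged, simplified sketch of the adaptive idea only; the paper's neuro-inspired mechanism is more elaborate, and `coverage` is an assumed tuning parameter.

```python
def adaptive_window(delay_history, coverage=0.9):
    """Set the TWI deadline to the empirical `coverage`-quantile of
    recently observed communication delays (simplified illustration).
    Larger coverage waits for more stragglers; smaller coverage cuts
    latency at the cost of dropping late modality inputs."""
    ordered = sorted(delay_history)
    idx = min(int(coverage * len(ordered)), len(ordered) - 1)
    return ordered[idx]
```

Unlike a reference-modality scheme, nothing here requires one designated stream to arrive in full: the window is derived from the delay statistics of all streams.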


LEGO-SLAM: Language-Embedded Gaussian Optimization SLAM

Lee, Sibaek, Ha, Seongbo, Kang, Kyeongsu, Choi, Joonyeol, Tak, Seungjun, Yu, Hyeonwoo

arXiv.org Artificial Intelligence

Recent advances in 3D Gaussian Splatting (3DGS) have enabled Simultaneous Localization and Mapping (SLAM) systems to build photorealistic maps. However, these maps lack the open-vocabulary semantic understanding required for advanced robotic interaction. Integrating language features into SLAM remains a significant challenge, as storing high-dimensional features demands excessive memory and rendering overhead, while existing methods with static models lack adaptability for novel environments. To address these limitations, we propose LEGO-SLAM (Language-Embedded Gaussian Optimization SLAM), the first framework to achieve real-time, open-vocabulary mapping within a 3DGS-based SLAM system. At the core of our method is a scene-adaptive encoder-decoder that distills high-dimensional language embeddings into a compact 16-dimensional feature space. This design reduces the memory per Gaussian and accelerates rendering, enabling real-time performance. Unlike static approaches, our encoder adapts online to unseen scenes. These compact features also enable a language-guided pruning strategy that identifies semantic redundancy, reducing the map's Gaussian count by over 60\% while maintaining rendering quality. Furthermore, we introduce a language-based loop detection approach that reuses these mapping features, eliminating the need for a separate detection model. Extensive experiments demonstrate that LEGO-SLAM achieves competitive mapping quality and tracking accuracy, all while providing open-vocabulary capabilities at 15 FPS.
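The memory argument behind the 16-dimensional feature space can be illustrated with a fixed random linear projection of high-dimensional language embeddings down to 16 dimensions. This is only an illustrative stand-in: LEGO-SLAM uses a learned, scene-adaptive encoder-decoder, which preserves far more semantic structure than the random map sketched here, and the embedding size of 512 is an assumption.

```python
import random

def make_projector(in_dim, out_dim=16, seed=0):
    """Fixed random linear projection to a compact feature space
    (illustrative stand-in for a learned scene-adaptive encoder).
    Returns a function mapping an `in_dim` embedding to `out_dim`."""
    rng = random.Random(seed)
    scale = 1.0 / (in_dim ** 0.5)  # keep output magnitudes comparable
    w = [[rng.gauss(0.0, scale) for _ in range(in_dim)]
         for _ in range(out_dim)]
    def project(embedding):
        return [sum(wi * x for wi, x in zip(row, embedding)) for row in w]
    return project

project = make_projector(in_dim=512, out_dim=16)
compact = project([0.01] * 512)  # 512-dim embedding -> 16-dim feature
```

Storing 16 floats per Gaussian instead of a full language embedding is what makes per-Gaussian features affordable at real-time rendering rates.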


EEG Emotion Recognition Through Deep Learning

Dolgopolyi, Roman, Chatzipanagiotou, Antonis

arXiv.org Artificial Intelligence

An advanced emotion classification model was developed using a CNN-Transformer architecture for emotion recognition from EEG brain wave signals, effectively distinguishing among three emotional states: positive, neutral, and negative. The model achieved a testing accuracy of 91%, outperforming traditional models such as SVM, DNN, and Logistic Regression. Training was conducted on a custom dataset created by merging data from the SEED, SEED-FRA, and SEED-GER repositories, comprising 1,455 samples with EEG recordings labeled according to emotional states. The combined dataset represents one of the largest and most culturally diverse collections available. Additionally, the model reduces the requirements of the EEG apparatus by leveraging only 5 of the 62 electrodes. This reduction demonstrates the feasibility of deploying a more affordable consumer-grade EEG headset, thereby enabling accessible, at-home use while also requiring less computational power. This advancement sets the groundwork for future exploration into mood changes induced by media content consumption, an area that remains under-researched. Integration into medical, wellness, and home-health platforms could enable continuous, passive emotional monitoring, particularly beneficial in clinical or caregiving settings where traditional behavioral cues, such as facial expressions or vocal tone, are diminished, restricted, or difficult to interpret, thus potentially transforming mental health diagnostics and interventions.


Towards a Safer and Sustainable Manufacturing Process: Material classification in Laser Cutting Using Deep Learning

Salem, Mohamed Abdallah, Ashur, Hamdy Ahmed, Elshinnawy, Ahmed

arXiv.org Artificial Intelligence

Laser cutting is a widely adopted technology in material processing across various industries, but it generates a significant amount of dust, smoke, and aerosols during operation, posing a risk to both the environment and workers' health. Speckle sensing has emerged as a promising method to monitor the cutting process and identify material types in real time. This paper proposes a material classification technique using a speckle pattern of the material's surface based on deep learning to monitor and control the laser cutting process. The proposed method involves training a convolutional neural network (CNN) on a dataset of laser speckle patterns to recognize distinct material types for safe and efficient cutting. Previous methods for material classification using speckle sensing may face issues when the color of the laser used to produce the speckle pattern is changed. Experiments conducted in this study demonstrate that the proposed method achieves high accuracy in material classification, even when the laser color is changed. The model achieved an accuracy of 98.30% on the training set and 96.88% on the validation set. Furthermore, the model was evaluated on a set of 3000 new images for 30 different materials, achieving an F1-score of 0.9643. The proposed method provides a robust and accurate solution for material-aware laser cutting using speckle sensing.