
Improving Internet Traffic Matrix Prediction via Time Series Clustering

Cash, Martha, Wyglinski, Alexander

arXiv.org Artificial Intelligence

We present a novel framework that leverages time series clustering to improve internet traffic matrix (TM) prediction using deep learning (DL) models. Traffic flows within a TM often exhibit diverse temporal behaviors, which can hinder prediction accuracy when training a single model across all flows. To address this, we propose two clustering strategies, source clustering and histogram clustering, that group flows with similar temporal patterns prior to model training. Clustering creates more homogeneous data subsets, enabling models to capture underlying patterns more effectively and generalize better than global prediction approaches that fit a single model to the entire TM. Compared to existing TM prediction methods, our method reduces RMSE by up to 92% for Abilene and 75% for GÉANT. In routing scenarios, our clustered predictions also reduce maximum link utilization (MLU) bias by 18% and 21%, respectively, demonstrating the practical benefits of clustering when TMs are used for network optimization.
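The histogram-clustering idea from the abstract can be sketched in a few lines: describe each origin-destination flow by a normalized histogram of its values, then cluster the histograms so flows with similar value distributions share a model. The synthetic flows and the plain k-means below are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of histogram clustering for traffic-matrix (TM) flows: group flows
# whose value distributions look alike, so one DL model can be trained per
# cluster. Synthetic gamma-distributed flows stand in for real TM data.
import numpy as np

rng = np.random.default_rng(0)

# 12 origin-destination flows, 500 time steps: half light traffic, half heavy.
light = rng.gamma(shape=2.0, scale=1.0, size=(6, 500))
heavy = rng.gamma(shape=2.0, scale=10.0, size=(6, 500))
flows = np.vstack([light, heavy])

# Describe each flow by a normalized histogram over a shared set of bins.
bins = np.linspace(flows.min(), flows.max(), 21)
hists = np.array([np.histogram(f, bins=bins, density=True)[0] for f in flows])

def kmeans2(x, iters=20):
    """Plain 2-means on histogram vectors, deterministically initialized
    with the first and last points (one from each synthetic regime)."""
    centers = x[[0, -1]].copy()
    for _ in range(iters):
        d = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(2):
            centers[j] = x[labels == j].mean(0)
    return labels

labels = kmeans2(hists)
print(labels)  # flows from the same traffic regime share a cluster
```

Each cluster then gets its own predictor trained only on its (more homogeneous) member flows, which is the step the paper credits for the accuracy gains.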


GPLight+: A Genetic Programming Method for Learning Symmetric Traffic Signal Control Policy

Liao, Xiao-Cheng, Mei, Yi, Zhang, Mengjie

arXiv.org Artificial Intelligence

Recently, learning-based approaches have achieved significant success in automatically devising effective traffic signal control strategies. In particular, Genetic Programming (GP), a powerful evolutionary machine learning approach, has been utilized to evolve human-understandable phase urgency functions that measure the urgency of activating a green light for a specific phase. However, current GP-based methods are unable to treat the common traffic features of different traffic signal phases consistently. To address this issue, we propose to use a symmetric phase urgency function to calculate the phase urgency for a specific phase based on the current road conditions. This is represented as an aggregation of two shared subtrees, each representing the urgency of a turn movement in the phase. We then propose a GP method to evolve the symmetric phase urgency function. We evaluate our proposed method on the well-known CityFlow traffic simulator, based on multiple public real-world datasets. The experimental results show that the proposed symmetric urgency function representation can significantly improve the performance of the learned traffic signal control policies over the traditional GP representation on a wide range of scenarios. Further analysis shows that the proposed method can evolve effective, human-understandable, and easily deployable traffic signal control policies.

Traffic signals, located at signalized intersections, manage traffic flow in various directions, thereby significantly contributing to the improvement of both transportation efficiency and road safety [1]. Poorly designed traffic signal plans result in commuters wasting valuable time on the roads. The majority of existing traffic signal control systems do not operate based on decisions tailored to the dynamic traffic conditions.
For instance, the Sydney Coordinated Adaptive Traffic System [2], which relies on a predetermined cycle time plan, remains extensively utilized in real signalized intersections worldwide. The emergence of Deep Reinforcement Learning (DRL) as a solution to the Traffic Signal Control (TSC) problem is driven by advancements in deep learning [3] and the increasing accessibility of transportation infrastructure components such as surveillance cameras, road sensors, and the internet of vehicles [4]. This trend is exemplified by recent research efforts [5]-[7].
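The symmetric representation described above (one shared subtree applied to every turn movement, then aggregated per phase) can be sketched as follows. The feature names and the particular scoring function are invented for illustration; a real GPLight+ subtree would be evolved by GP, not hand-written.

```python
# Illustrative sketch of a "symmetric phase urgency function": the SAME
# learned subtree scores each turn movement, and per-movement scores are
# aggregated (here by summation) into a phase urgency. The function below
# is a hand-written stand-in for a GP-evolved subtree.

def movement_urgency(queue_len, waiting_time, approach_speed):
    # Stand-in for an evolved subtree over one turn movement's features.
    return queue_len * 1.5 + waiting_time * 0.1 - approach_speed * 0.05

def phase_urgency(movements):
    # Symmetric aggregation: every movement uses the same subtree, so
    # common traffic features are treated consistently across phases.
    return sum(movement_urgency(*m) for m in movements)

# Two phases, each with two turn movements:
# (queue length in vehicles, waiting time in s, approach speed in m/s).
north_south = [(8, 45.0, 2.0), (5, 30.0, 4.0)]
east_west = [(2, 10.0, 8.0), (1, 5.0, 9.0)]

scores = {"NS": phase_urgency(north_south), "EW": phase_urgency(east_west)}
green = max(scores, key=scores.get)  # activate the most urgent phase
print(scores, "->", green)
```

Because the subtree is shared, adding or reordering movements within a phase cannot change how any individual movement's features are weighted, which is the consistency property the paper targets.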


A Comparative Study of Feature Selection in Tsetlin Machines

Halenka, Vojtech, Granmo, Ole-Christoffer, Jiao, Lei, Andersen, Per-Arne

arXiv.org Artificial Intelligence

Feature Selection (FS) is crucial for improving model interpretability, reducing complexity, and sometimes for enhancing accuracy. The recently introduced Tsetlin machine (TM) offers interpretable clause-based learning, but lacks established tools for estimating feature importance. In this paper, we adapt and evaluate a range of FS techniques for TMs, including classical filter and embedded methods, post-hoc explanation methods originally developed for neural networks (e.g., SHAP and LIME), and a novel family of embedded scorers derived from TM clause weights and Tsetlin automaton (TA) states. We benchmark all methods across 12 datasets, using evaluation protocols such as Remove and Retrain (ROAR) and Remove and Debias (ROAD) to assess causal impact. Our results show that TM-internal scorers not only perform competitively but also exploit the interpretability of clauses to reveal interacting feature patterns. Simpler TM-specific scorers achieve similar accuracy retention at a fraction of the computational cost. This study establishes the first comprehensive baseline for FS in TMs and paves the way for developing specialized TM-specific interpretability techniques.
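An embedded scorer of the kind the abstract describes can be sketched from TM internals alone: score a feature by the weighted number of learned clauses whose literals mention it. The clause list below is hand-written to mimic a trained TM and the literal encoding is one common convention; neither comes from a TM library.

```python
# Minimal sketch of an "embedded" feature scorer derived from Tsetlin
# machine internals. Each clause is (weight, set of included literals);
# literal k is feature k non-negated, literal k + n_features is feature k
# negated. The clauses below are illustrative, not a trained model.
n_features = 4
clauses = [
    (+9, {0, 1}),               # "x0 AND x1", strong positive clause
    (+7, {0, 3 + n_features}),  # "x0 AND NOT x3"
    (-4, {2}),                  # negative-polarity clause on x2
    (+2, {1, 2}),
]

def feature_scores(clauses, n_features):
    """Sum |clause weight| over every clause that includes the feature,
    counting both its negated and non-negated literal."""
    scores = [0.0] * n_features
    for weight, literals in clauses:
        for lit in literals:
            scores[lit % n_features] += abs(weight)
    return scores

scores = feature_scores(clauses, n_features)
ranking = sorted(range(n_features), key=lambda k: -scores[k])
print(scores, ranking)
```

Because the scorer only reads weights and literals the TM has already learned, it costs almost nothing compared to retraining-based protocols like ROAR, which matches the paper's observation about cheap TM-specific scorers.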


Transductive Model Selection under Prior Probability Shift

Volpi, Lorenzo, Moreo, Alejandro, Sebastiani, Fabrizio

arXiv.org Artificial Intelligence

Transductive learning is a supervised machine learning task in which, unlike in traditional inductive learning, the unlabelled data that require labelling are a finite set and are available at training time. Similarly to inductive learning contexts, transductive learning contexts may be affected by dataset shift, i.e., may be such that the IID assumption does not hold. We here propose a method, tailored to transductive classification contexts, for performing model selection (i.e., hyperparameter optimisation) when the data exhibit prior probability shift, an important type of dataset shift typical of anti-causal learning problems. In our proposed method the hyperparameters can be optimised directly on the unlabelled data to which the trained classifier must be applied; this is unlike traditional model selection methods, which are based on performing cross-validation on the labelled training data. We provide experimental results that show the benefits brought about by our method.


MoNetV2: Enhanced Motion Network for Freehand 3D Ultrasound Reconstruction

Luo, Mingyuan, Yang, Xin, Yan, Zhongnuo, Cao, Yan, Zhang, Yuanji, Hu, Xindi, Wang, Jin, Ding, Haoxuan, Han, Wei, Sun, Litao, Ni, Dong

arXiv.org Artificial Intelligence

Abstract--Three-dimensional (3D) ultrasound (US) aims to provide sonographers with the spatial relationships of anatomical structures, playing a crucial role in clinical diagnosis. Recently, deep-learning-based freehand 3D US has made significant advancements. However, image-only reconstruction poses difficulties in reducing cumulative drift and further improving reconstruction accuracy, particularly in scenarios involving complex motion trajectories. In this context, we propose an enhanced motion network (MoNetV2) to enhance the accuracy and generalizability of reconstruction under diverse scanning velocities and tactics. First, we propose a sensor-based temporal and multi-branch structure that fuses image and motion information from a velocity perspective to improve image-only reconstruction accuracy. Second, we devise an online multi-level consistency constraint that exploits the inherent consistency of scans to handle various scanning velocities and tactics. This constraint exploits scan-level velocity consistency, path-level appearance consistency, and patch-level motion consistency to supervise inter-frame transformation estimation. Third, we distill an online multi-modal self-supervised strategy that leverages the correlation between network estimation and motion information to further reduce cumulative errors. Extensive experiments clearly demonstrate that MoNetV2 surpasses existing methods in both reconstruction quality and generalizability performance across three large datasets.

Ultrasound (US) imaging plays an important role in clinical monitoring and diagnosis because of its non-invasiveness, real-time capability, and mobility [1]. Its applications span various fields such as heart [2], fetus [3], breast [4], and liver [5]. Traditional 3D US imaging methods encompass mechanical, phased array, and freehand techniques. Mechanical and phased array imaging often suffer from specialized and expensive hardware with a limited field of view.

This work was supported by the National Natural Science Foundation of China. Jin Wang and Litao Sun are with the Cancer Center, Department of Ultrasound Medicine, Zhejiang Provincial People's Hospital, Affiliated People's Hospital of Hangzhou Medical College, Hangzhou, Zhejiang, China. Wei Han is with the Department of Health Management Center, Qilu Hospital, Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China.


Enhancing Trust Management System for Connected Autonomous Vehicles Using Machine Learning Methods: A Survey

Xu, Qian, Zhang, Lei, Liu, Yixiao

arXiv.org Artificial Intelligence

Connected Autonomous Vehicles (CAVs) operate in dynamic, open, and multi-domain networks, rendering them vulnerable to various threats. Trust Management Systems (TMS) systematically organize the essential steps of the trust mechanism, identifying malicious nodes behind both internal and external threats and ensuring reliable decision-making for cooperative tasks. Recent advances in machine learning (ML) offer significant potential to enhance TMS, especially given the strict requirements of CAVs, such as nodes moving at varying speeds and opportunistic, intermittent network behavior. These features distinguish ML-based TMS for CAVs from those for social networks, static IoT, and the Social IoT. This survey proposes a novel three-layer ML-based TMS framework for CAVs in the vehicle-road-cloud integration system, comprising a trust data layer, a trust calculation layer, and a trust incentive layer. A six-dimensional taxonomy of objectives is proposed. Furthermore, the principles of the ML methods for each module in each layer are analyzed. Recent studies are then categorized by traffic scenario and assessed against the proposed objectives. Finally, future directions are suggested that address the open issues and align with research trends. We maintain an active repository of up-to-date literature and open-source projects at https://github.com/octoberzzzzz/ML-based-TMS-CAV-Survey.


Towards a Probabilistic Framework for Analyzing and Improving LLM-Enabled Software

Baldonado, Juan Manuel, Bonomo-Braberman, Flavia, Braberman, Víctor Adrián

arXiv.org Artificial Intelligence

Ensuring the reliability and verifiability of large language model (LLM)-enabled systems remains a significant challenge in software engineering. We propose a probabilistic framework for systematically analyzing and improving these systems by modeling and refining distributions over clusters of semantically equivalent outputs. This framework facilitates the evaluation and iterative improvement of Transference Models -- key software components that utilize LLMs to transform inputs into outputs for downstream tasks. To illustrate its utility, we apply the framework to the autoformalization problem, where natural language documentation is transformed into formal program specifications. Our case illustrates how probabilistic analysis enables the identification of weaknesses and guides focused alignment improvements, resulting in more reliable and interpretable outputs. This principled approach offers a foundation for addressing critical challenges in the development of robust LLM-enabled systems.
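The framework's core object, a distribution over clusters of semantically equivalent outputs, can be sketched directly. Here the "LLM" is a canned list of sampled outputs and semantic equivalence is approximated by whitespace/case normalization; both are stand-ins for illustration, not the paper's actual equivalence check.

```python
# Sketch: estimate a distribution over clusters of semantically equivalent
# outputs of an LLM-backed component. Canned samples replace real LLM
# sampling, and a normalizer replaces a genuine semantic-equivalence test.
from collections import Counter

samples = [
    "forall x: x >= 0",
    "FORALL x:  x >= 0",
    "forall x: x > -1",
    "forall x: x >= 0",
    "exists x: x < 0",
]

def canonical(output):
    # Proxy for semantic-equivalence clustering (e.g., a normalizer or
    # prover would go here in an autoformalization setting).
    return " ".join(output.lower().split())

counts = Counter(canonical(s) for s in samples)
total = sum(counts.values())
dist = {rep: n / total for rep, n in counts.items()}

# The mode of the cluster distribution is a natural "answer"; its mass is
# a reliability signal that alignment improvements should try to raise.
mode, mass = max(dist.items(), key=lambda kv: kv[1])
print(mode, round(mass, 2))
```

Analyzing how the mass of the intended cluster changes across prompts or model versions is one way such a framework turns reliability into a measurable, improvable quantity.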


Traffic Matrix Estimation based on Denoising Diffusion Probabilistic Model

Yuan, Xinyu, Qiao, Yan, Zhao, Pei, Hu, Rongyao, Zhang, Benchu

arXiv.org Artificial Intelligence

The traffic matrix estimation (TME) problem has been widely researched for decades. Recent progress in deep generative models offers new opportunities to tackle TME problems in a more advanced way. In this paper, we leverage the powerful ability of denoising diffusion probabilistic models (DDPMs) in distribution learning, and for the first time adopt DDPMs to address the TME problem. To ensure a good performance of DDPMs in learning the distributions of TMs, we design a preprocessing module that reduces the dimensions of TMs while keeping the data variety of each OD flow. To improve the estimation accuracy, we parameterize the noise factors in the DDPM and transform the TME problem into a gradient-descent optimization problem. Finally, we compare our method with state-of-the-art TME methods on two real-world TM datasets; the experimental results strongly demonstrate the superiority of our method in both TM synthesis and TM estimation.
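The DDPM machinery the abstract builds on starts from the closed-form forward process q(x_t | x_0) = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps, which gradually noises a traffic matrix toward a Gaussian. The linear beta schedule and the toy 4x4 matrix below are illustrative choices, not the paper's configuration.

```python
# The standard DDPM forward process applied to a toy "traffic matrix":
# x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, where abar_t is the
# cumulative product of (1 - beta_t). Schedule and data are illustrative.
import numpy as np

rng = np.random.default_rng(1)
T = 100
betas = np.linspace(1e-4, 0.02, T)   # common linear noise schedule
abar = np.cumprod(1.0 - betas)       # abar_t = prod_{s<=t} (1 - beta_s)

x0 = rng.gamma(2.0, 5.0, size=(4, 4))   # toy 4x4 traffic matrix
x0 = (x0 - x0.mean()) / x0.std()        # standardize, as DDPMs expect

def q_sample(x0, t, eps):
    """Sample x_t from q(x_t | x_0) in closed form."""
    return np.sqrt(abar[t]) * x0 + np.sqrt(1.0 - abar[t]) * eps

eps = rng.standard_normal(x0.shape)
xT = q_sample(x0, T - 1, eps)
# abar decays toward 0, so late x_t are dominated by the noise term;
# a trained denoiser runs this process in reverse to synthesize TMs.
print(float(abar[-1]))
```

Estimation then goes beyond plain synthesis: with the denoiser fixed, the paper treats the noise factors as free parameters and fits them by gradient descent so the generated TM matches the observed link-load measurements.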


Desperate parents turn to magnetic therapy to help kids with autism. They have little evidence to go on

Los Angeles Times

Thomas VanCott compares his son Jake's experience with autism to life on a tightrope. Upset the delicate balance and Jake, 18, plunges into frustration, slapping himself and twisting his neck in seemingly painful ways. Like many families with children on the autism spectrum, Jake's parents sought treatments beyond traditional speech and behavioral therapies. One that seemed promising was magnetic e-resonance therapy, or MERT, a magnetic brain stimulation therapy trademarked in 2016 by a Newport Beach-based company called Wave Neuroscience. The company licensed MERT to private clinics across the country that offered it as a therapy for conditions including depression, PTSD and autism. Those clinics described MERT as a noninvasive innovation that could improve an autistic child's sleep, social skills and -- most attractive to the VanCott family -- speech. It was expensive -- $9,000 -- and not covered by insurance.


Transformers As Approximations of Solomonoff Induction

Young, Nathan, Witbrock, Michael

arXiv.org Artificial Intelligence

Solomonoff Induction is an optimal-in-the-limit unbounded algorithm for sequence prediction, representing a Bayesian mixture of every computable probability distribution and performing close to optimally in predicting any computable sequence. Being an optimal form of computational sequence prediction, it seems plausible that it may be used as a model against which other methods of sequence prediction might be compared. We put forth and explore the hypothesis that Transformer models - the basis of Large Language Models - approximate Solomonoff Induction better than any other extant sequence prediction method. We explore evidence for and against this hypothesis, give alternate hypotheses that take this evidence into account, and outline next steps for modelling Transformers and other kinds of AI in this way.