Learning-based Radio Link Failure Prediction Based on Measurement Dataset in Railway Environments
Chou, Po-Heng, Lin, Da-Chih, Wei, Hung-Yu, Saad, Walid, Tsao, Yu
In this paper, a measurement-driven framework is proposed for early radio link failure (RLF) prediction in 5G non-standalone (NSA) railway environments. Using 10 Hz metro-train traces with serving and neighbor-cell indicators, we benchmark six models, namely CNN, LSTM, XGBoost, Anomaly Transformer, PatchTST, and TimesNet, under varied observation windows and prediction horizons. When the observation window is three seconds, TimesNet attains the highest F1 score with a three-second prediction horizon, while CNN provides a favorable accuracy-latency tradeoff with a two-second horizon, enabling proactive actions such as redundancy and adaptive handovers. The results indicate that deep temporal models can anticipate reliability degradations several seconds in advance using lightweight features available on commercial devices, offering a practical path to early-warning control in 5G-based railway systems.
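The benchmarking setup described above amounts to a sliding-window labeling task: observe a few seconds of radio metrics, then predict whether a failure occurs within the next horizon. A minimal sketch, with illustrative feature and sampling choices (the paper's exact indicators and preprocessing are not reproduced here):

```python
import numpy as np

def make_windows(series, labels, obs_s=3, horizon_s=2, rate_hz=10):
    """Slice a metric time series into (observation window, label) pairs.

    series: (T, F) array of per-sample features (e.g. serving/neighbor-cell
            signal indicators; illustrative, not the paper's exact set).
    labels: (T,) binary array, 1 where an RLF event occurs.
    A window ending at time t is labeled positive if any RLF occurs in the
    next `horizon_s` seconds, so a classifier trained on (X, y) learns to
    raise an early warning `horizon_s` seconds ahead.
    """
    obs, hor = obs_s * rate_hz, horizon_s * rate_hz
    X, y = [], []
    for t in range(obs, len(series) - hor):
        X.append(series[t - obs:t])            # last obs_s seconds of metrics
        y.append(int(labels[t:t + hor].any())) # failure within the horizon?
    return np.asarray(X), np.asarray(y)
```

Each model in the benchmark (CNN, LSTM, TimesNet, etc.) would then be trained on these fixed-length windows, with `obs_s` and `horizon_s` swept to produce the accuracy-latency tradeoff.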
Degree-Based Logical Adjacency Checking (DBLAC): A Novel Heuristic for Vertex Coloring
We introduce Degree-Based Logical Adjacency Checking (DBLAC), an efficient heuristic for graph coloring based on logical AND operations. The logical AND operation enables more effective color assignment and fewer induced colors when vertices share common edges. We provide a detailed theoretical analysis of DBLAC's time and space complexity, and demonstrate its effectiveness through extensive experiments on standard benchmark graphs. Compared with existing algorithms, namely DSATUR and Recursive Largest First (RLF), DBLAC achieves competitive results in both the number of colors used and runtime performance.
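The appeal of AND-based adjacency checking can be illustrated with bitmasks: if each color class is stored as a bitmask of its member vertices, a single bitwise AND against a vertex's adjacency mask tests the vertex against the entire class at once. This is an illustrative greedy sketch with a degree-based ordering, not the DBLAC algorithm itself, whose ordering and AND rule are defined in the paper:

```python
def bitmask_greedy_coloring(n, edges):
    """Greedy vertex coloring where one bitwise AND tests a vertex
    against a whole color class. Illustrative sketch only."""
    adj = [0] * n
    for u, v in edges:
        adj[u] |= 1 << v
        adj[v] |= 1 << u
    # Degree-based ordering: color high-degree vertices first.
    order = sorted(range(n), key=lambda v: -bin(adj[v]).count("1"))
    classes = []          # one membership bitmask per color
    color = [None] * n
    for v in order:
        for c, members in enumerate(classes):
            if members & adj[v] == 0:   # no neighbor of v in class c
                classes[c] |= 1 << v
                color[v] = c
                break
        else:
            classes.append(1 << v)      # open a new color class
            color[v] = len(classes) - 1
    return color
```

On a triangle with a pendant vertex this uses three colors, matching the chromatic number; the per-vertex cost of the class test is one AND per color class rather than one check per neighbor.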
Variance-Reducing Couplings for Random Features: Perspectives from Optimal Transport
Reid, Isaac, Markou, Stratis, Choromanski, Krzysztof, Turner, Richard E., Weller, Adrian
Random features (RFs) are a popular technique to scale up kernel methods in machine learning, replacing exact kernel evaluations with stochastic Monte Carlo estimates. They underpin models as diverse as efficient transformers (by approximating attention) and sparse spectrum Gaussian processes (by approximating the covariance function). Efficiency can be further improved by speeding up the convergence of these estimates: a variance reduction problem. We tackle this through the unifying framework of optimal transport, using theoretical insights and numerical algorithms to develop novel, high-performing RF couplings for kernels defined on Euclidean and discrete input spaces. They enjoy concrete theoretical performance guarantees and sometimes provide strong empirical downstream gains, including for scalable approximate inference on graphs. We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
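The baseline being improved upon, i.i.d. random Fourier features as a Monte Carlo kernel estimate, can be sketched as follows for the Gaussian kernel; the paper's variance-reduced couplings replace the independent frequency draw with a jointly coupled one:

```python
import numpy as np

def rff(X, n_feats, rng):
    """Random Fourier features for the Gaussian kernel
    k(x, y) = exp(-||x - y||^2 / 2), so phi(x) . phi(y) ~ k(x, y).
    This is the plain i.i.d. coupling; orthogonal or OT-based couplings
    draw the rows of W jointly to reduce estimator variance."""
    W = rng.standard_normal((n_feats, X.shape[1]))  # frequencies ~ N(0, I)
    b = rng.uniform(0.0, 2.0 * np.pi, n_feats)      # random phases
    return np.sqrt(2.0 / n_feats) * np.cos(X @ W.T + b)

rng = np.random.default_rng(0)
pts = np.array([[0.0, 0.0], [1.0, 0.0]])  # ||x - y||^2 = 1
Z = rff(pts, 4096, rng)                   # shared features for both points
est = Z[0] @ Z[1]                         # Monte Carlo kernel estimate
exact = np.exp(-0.5)                      # exact Gaussian kernel value
```

The gap between `est` and `exact` shrinks as the number of features grows; a variance-reducing coupling aims to shrink it faster at the same feature budget.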
Riemann-Lebesgue Forest for Regression
We propose a novel ensemble method called Riemann-Lebesgue Forest (RLF) for regression. The core idea of RLF is to mimic the way a measurable function can be approximated by partitioning its range into a few intervals. With this idea in mind, we develop a new tree learner named the Riemann-Lebesgue Tree, which at each non-terminal node may split either on the response $Y$ or on a direction in feature space $\mathbf{X}$. We analyze the asymptotic performance of RLF under different parameter settings, mainly through the Hoeffding decomposition \cite{Vaart} and Stein's method \cite{Chen2010NormalAB}. When the underlying function $Y=f(\mathbf{X})$ follows an additive regression model, RLF is consistent, following the argument of \cite{Scornet2014ConsistencyOR}. The competitive performance of RLF against the original random forest \cite{Breiman2001RandomF} is demonstrated by experiments on simulated data and real-world datasets.
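The intuition for splitting on the response can be seen in a toy variance-reduction comparison: partitioning a node by the range of $Y$ (Lebesgue-style) separates a bimodal response at least as cleanly as the best axis-aligned feature split (Riemann-style). A minimal sketch under an assumed bimodal toy model, not the paper's actual split criterion:

```python
import numpy as np

def split_gain(y, assign):
    """Variance reduction achieved by a binary partition of node samples."""
    n = len(y)
    left, right = y[assign], y[~assign]
    if len(left) == 0 or len(right) == 0:
        return 0.0
    return y.var() - (len(left) * left.var() + len(right) * right.var()) / n

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 500)
y = np.sign(x) + 0.1 * rng.standard_normal(500)  # bimodal response

riemann = split_gain(y, x <= 0.0)                # split on a feature
lebesgue = split_gain(y, y <= np.median(y))      # split on the response range
```

Here both splits recover nearly all of the node variance; in general the response-range split needs no search over feature directions, which is what the Riemann-Lebesgue Tree exploits (with additional machinery, since $Y$ is unavailable at prediction time).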
Mind the Gap: Norm-Aware Adaptive Robust Loss for Multivariate Least-Squares Problems
Hitchcox, Thomas, Forbes, James Richard
Measurement outliers are unavoidable when solving real-world robot state estimation problems. A large family of robust loss functions (RLFs) exists to mitigate the effects of outliers, including newly developed adaptive methods that do not require parameter tuning. All of these methods assume that residuals follow a zero-mean Gaussian-like distribution. However, in multivariate problems the residual is often defined as a norm, and norms follow a Chi-like distribution with a non-zero mode value. This produces a "mode gap" that impacts the convergence rate and accuracy of existing RLFs. The proposed approach, "Adaptive MB," accounts for this gap by first estimating the mode of the residuals using an adaptive Chi-like distribution. Applying an existing adaptive weighting scheme only to residuals greater than the mode leads to more robust performance and faster convergence times in two fundamental state estimation problems, point cloud alignment and pose averaging.
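The mode gap is easy to reproduce numerically: the norm of a zero-mean Gaussian residual follows a Chi distribution whose mode is $\sqrt{k-1}$, not zero, so a robust loss that assumes zero-mode residuals down-weights perfectly ordinary inliers. A minimal sketch:

```python
import numpy as np

# For r ~ N(0, I_k), ||r|| follows a Chi(k) distribution with mode
# sqrt(k - 1). With k = 3 (e.g. 3-D point-cloud residuals) the mode is
# sqrt(2) ~ 1.414: the "mode gap" between where inlier residual norms
# actually cluster and the zero mode that standard RLFs assume.
rng = np.random.default_rng(0)
k = 3
norms = np.linalg.norm(rng.standard_normal((1_000_000, k)), axis=1)
hist, edges = np.histogram(norms, bins=100)
peak = hist.argmax()
empirical_mode = 0.5 * (edges[peak] + edges[peak + 1])
```

An approach in the spirit of Adaptive MB would estimate this mode from the data and apply the adaptive robust weighting only to residual norms above it, rather than to all residuals.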