
Neural Information Processing Systems

When solving real-world problems, where contexts and actions are complex and high-dimensional (e.g., users' social graph, items' visual description), it is crucial to provide the bandit algorithm with a suitable representation of the context-action space.


Score Combining for Contrastive OOD Detection

Reehorst, Edward T., Schniter, Philip

arXiv.org Artificial Intelligence

In out-of-distribution (OOD) detection, one is asked to classify whether a test sample comes from a known inlier distribution or not. We focus on the case where the inlier distribution is defined by a training dataset and there exists no additional knowledge about the novelties that one is likely to encounter. This problem is also referred to as novelty detection, one-class classification, and unsupervised anomaly detection. The current literature suggests that contrastive learning techniques are state-of-the-art for OOD detection. We aim to improve on those techniques by combining/ensembling their scores using the framework of null hypothesis testing and, in particular, a novel generalized likelihood ratio test (GLRT). We demonstrate that our proposed GLRT-based technique outperforms the state-of-the-art CSI and SupCSI techniques from Tack et al. 2020 in dataset-vs-dataset experiments with CIFAR-10, SVHN, LSUN, ImageNet, and CIFAR-100, as well as leave-one-class-out experiments with CIFAR-10. We also demonstrate that our GLRT outperforms the score-combining methods of Fisher, Bonferroni, Simes, Benjamini-Hochberg, and Stouffer in our application.
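As a rough illustration of the GLRT idea behind such score combining (a toy sketch of my own, not the paper's actual method): if a vector of K per-sample OOD scores is modeled as Gaussian under the null, the GLRT against an unrestricted mean shift reduces to the Mahalanobis distance of the test score vector from the inlier mean.

```python
import numpy as np

def fit_null(scores_train):
    """Fit a Gaussian null model N(mu0, Sigma) to inlier score vectors (n x K)."""
    mu0 = scores_train.mean(axis=0)
    Sigma = np.cov(scores_train, rowvar=False)
    return mu0, np.linalg.inv(Sigma)

def glrt_statistic(s, mu0, Sigma_inv):
    """GLRT for H0: s ~ N(mu0, Sigma) vs H1: s ~ N(mu, Sigma), mu unrestricted.
    Maximizing the H1 likelihood over mu yields the Mahalanobis distance."""
    d = s - mu0
    return float(d @ Sigma_inv @ d)

rng = np.random.default_rng(0)
inliers = rng.normal(0.0, 1.0, size=(5000, 3))   # training score vectors (K = 3 scores)
mu0, Sigma_inv = fit_null(inliers)

t_in = glrt_statistic(rng.normal(0.0, 1.0, size=3), mu0, Sigma_inv)   # inlier-like
t_out = glrt_statistic(np.array([4.0, 4.0, 4.0]), mu0, Sigma_inv)     # shifted scores
```

A larger statistic indicates a sample that is more likely OOD; the Fisher and Stouffer rules mentioned in the abstract are alternative, fixed combination rules for the same score vector.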


Testing Dependency of Weighted Random Graphs

Oren, Mor, Paslev, Vered, Huleihel, Wasim

arXiv.org Artificial Intelligence

Consider the following decision problem. We observe two weighted random graphs that are either generated independently at random or are edge-dependent due to some latent vertex correspondence or permutation. For this basic problem, two natural questions arise: the detection problem, which concerns whether the graphs exhibit dependence, and the recovery problem, which concerns identifying the latent correspondence between vertices. Here, we address the former question; specifically, we aim to understand under what conditions, in terms of the number of vertices and the generative distributions, one can distinguish between the two hypotheses and detect, say with high probability, whether these graphs are dependent or not. This fundamental question was first introduced and analyzed in [1], where for Gaussian-weighted and dense Erdős-Rényi random graphs on n vertices, sharp information-theoretic thresholds were developed, revealing the exact barrier at which the asymptotically optimal detection error probability undergoes a phase transition from zero to one as n approaches infinity. For sparse Erdős-Rényi random graphs, this threshold was initially determined within a constant factor in the same paper.


One-Class Classification as GLRT for Jamming Detection in Private 5G Networks

Varotto, Matteo, Valentin, Stefan, Ardizzon, Francesco, Marzotto, Samuele, Tomasin, Stefano

arXiv.org Artificial Intelligence

5G mobile networks are vulnerable to jamming attacks that may jeopardize valuable applications such as industry automation. In this paper, we propose to analyze radio signals with a dedicated device to detect jamming attacks. We pursue a learning approach, with the detector being a CNN implementing a GLRT. To this end, the CNN is trained as a two-class classifier using two datasets: one of real legitimate signals and another generated artificially so that the resulting classifier implements the GLRT. The artificial dataset is generated mimicking different types of jamming signals. We evaluate the performance of this detector using experimental data obtained from a private 5G network and several jamming signals, showing the technique's effectiveness in detecting the attacks.


CFARnet: deep learning for target detection with constant false alarm rate

Diskin, Tzvi, Beer, Yiftach, Okun, Uri, Wiesel, Ami

arXiv.org Artificial Intelligence

We consider the problem of target detection with a constant false alarm rate (CFAR). This constraint is crucial in many practical applications and is a standard requirement in classical composite hypothesis testing. In settings where classical approaches are computationally expensive or where only data samples are given, machine learning methodologies are advantageous. CFAR is less understood in these settings. To close this gap, we introduce a framework of CFAR-constrained detectors. Theoretically, we prove that a CFAR-constrained Bayes optimal detector is asymptotically equivalent to the classical generalized likelihood ratio test (GLRT). Practically, we develop a deep learning framework for fitting neural networks that approximate it. Experiments on target detection in various settings demonstrate that the proposed CFARnet allows a flexible tradeoff between CFAR and accuracy.
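The CFAR constraint can be made concrete with a small Monte Carlo sketch (my own illustration, not CFARnet itself): calibrate the detection threshold as the (1 − α) quantile of the detector statistic under noise-only data, so that the false alarm rate is held at α regardless of which detector is used.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.05  # target false alarm rate

def energy_detector(x):
    """A generic test statistic; any learned detector could be substituted here."""
    return np.sum(x**2, axis=-1)

# Calibrate: threshold = (1 - alpha) quantile of the statistic under H0 (noise only)
noise_cal = rng.normal(size=(100_000, 16))
tau = np.quantile(energy_detector(noise_cal), 1 - alpha)

# Check the empirical false alarm rate on fresh noise
noise_test = rng.normal(size=(100_000, 16))
pfa = np.mean(energy_detector(noise_test) > tau)

# Detection probability for a weak target embedded in noise
target = rng.normal(size=(100_000, 16)) + 0.8
pd = np.mean(energy_detector(target) > tau)
```

By construction, the false alarm rate stays near α no matter how the detector is trained; the accuracy/CFAR tradeoff in the abstract arises when this constraint is enforced during learning rather than by post-hoc calibration.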


Testing Dependency of Unlabeled Databases

Paslev, Vered, Huleihel, Wasim

arXiv.org Artificial Intelligence

In this paper, we investigate the problem of deciding whether two random databases $\mathsf{X}\in\mathcal{X}^{n\times d}$ and $\mathsf{Y}\in\mathcal{Y}^{n\times d}$ are statistically dependent or not. This is formulated as a hypothesis testing problem, where under the null hypothesis, these two databases are statistically independent, while under the alternative, there exists an unknown row permutation $\sigma$, such that $\mathsf{X}$ and $\mathsf{Y}^\sigma$, a permuted version of $\mathsf{Y}$, are statistically dependent with some known joint distribution, but have the same marginal distributions as the null. We characterize the thresholds at which optimal testing is information-theoretically impossible and possible, as a function of $n$, $d$, and some spectral properties of the generative distributions of the datasets. For example, we prove that if a certain function of the eigenvalues of the likelihood function and $d$ falls below a certain threshold, as $d\to\infty$, then weak detection (performing slightly better than random guessing) is statistically impossible, no matter what the value of $n$ is. This matches the performance of an efficient test that thresholds a centered version of the log-likelihood function of the observed matrices. We also analyze the case where $d$ is fixed, for which we derive strong (vanishing error) and weak detection lower and upper bounds.


Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees

Tirinzoni, Andrea, Papini, Matteo, Touati, Ahmed, Lazaric, Alessandro, Pirotta, Matteo

arXiv.org Artificial Intelligence

We study the problem of representation learning in stochastic contextual linear bandits. While the primary concern in this domain is usually to find realizable representations (i.e., those that allow predicting the reward function at any context-action pair exactly), it has been recently shown that representations with certain spectral properties (called HLS) may be more effective for the exploration-exploitation task, enabling LinUCB to achieve constant (i.e., horizon-independent) regret. In this paper, we propose BanditSRL, a representation learning algorithm that combines a novel constrained optimization problem for learning a realizable representation with good spectral properties, and a generalized likelihood ratio test for exploiting the recovered representation and avoiding excessive exploration. We prove that BanditSRL can be paired with any no-regret algorithm and achieve constant regret whenever an HLS representation is available. Furthermore, BanditSRL can be easily combined with deep neural networks, and we show that regularizing towards HLS representations is beneficial in standard benchmarks.


Model change detection with application to machine learning

Bu, Yuheng, Lu, Jiaxun, Veeravalli, Venugopal V.

arXiv.org Machine Learning

Model change detection is studied, in which there are two sets of samples that are independently and identically distributed (i.i.d.) according to a pre-change probabilistic model with parameter θ and a post-change model with parameter θ′, respectively. The goal is to detect whether the change in the model is significant, i.e., whether the difference between the pre-change parameter and the post-change parameter, ‖θ − θ′‖₂, is larger than a pre-determined threshold. The problem is considered in a Neyman-Pearson setting, where the goal is to maximize the probability of detection under a false alarm constraint. Since the generalized likelihood ratio test (GLRT) is difficult to compute in this problem, we construct an empirical difference test (EDT), which approximates the GLRT and has low computational complexity. Moreover, we provide an approximation method to set the threshold of the EDT to meet the false alarm constraint.
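A minimal sketch in the spirit of an empirical difference test (my own toy version for Gaussian mean parameters, not the authors' code): estimate the parameter before and after the change by maximum likelihood and declare a significant change when the norm of the difference exceeds a threshold.

```python
import numpy as np

def empirical_difference_test(pre, post, rho):
    """Declare a significant change when ||theta_hat_pre - theta_hat_post||_2 > rho.
    For Gaussian data with known covariance, the MLE of the mean is the sample mean."""
    theta_pre = pre.mean(axis=0)
    theta_post = post.mean(axis=0)
    return np.linalg.norm(theta_pre - theta_post) > rho

rng = np.random.default_rng(2)
pre = rng.normal(0.0, 1.0, size=(2000, 4))           # pre-change samples
post_same = rng.normal(0.0, 1.0, size=(2000, 4))     # no actual change
post_changed = rng.normal(0.5, 1.0, size=(2000, 4))  # mean shifted by 0.5 per coordinate

no_change = empirical_difference_test(pre, post_same, rho=0.3)
change = empirical_difference_test(pre, post_changed, rho=0.3)
```

The threshold `rho` here is set by hand; the abstract's point is that it can instead be calibrated to meet a prescribed false alarm constraint.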


Optimal change point detection in Gaussian processes

Keshavarz, Hossein, Scott, Clayton, Nguyen, XuanLong

arXiv.org Machine Learning

We study the problem of detecting a change in the mean of one-dimensional Gaussian process data. This problem is investigated in the setting of increasing domain (customarily employed in time series analysis) and in the setting of fixed domain (typically arising in spatial data analysis). We propose a detection method based on the generalized likelihood ratio test (GLRT), and show that our method achieves nearly asymptotically optimal rate in the minimax sense, in both settings. The salient feature of the proposed method is that it exploits in an efficient way the data dependence captured by the Gaussian process covariance structure. When the covariance is not known, we propose the plug-in GLRT method and derive conditions under which the method remains asymptotically near optimal. By contrast, the standard CUSUM method, which does not account for the covariance structure, is shown to be asymptotically optimal only in the increasing domain. Our algorithms and accompanying theory are applicable to a wide variety of covariance structures, including the Matérn class, the powered exponential class, and others. The plug-in GLRT method is shown to perform well for maximum likelihood estimators with a dense covariance matrix.
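For intuition, here is a toy mean-change GLRT for the simplest special case of i.i.d. Gaussian noise with known unit variance (the paper's contribution is handling general GP covariance structures, which this sketch ignores): scan candidate change points and take the maximum standardized difference of the two segment means.

```python
import numpy as np

def glrt_change_statistic(x):
    """Max over change points t of sqrt(t*(n-t)/n) * |mean(x[:t]) - mean(x[t:])|,
    i.e., the GLRT statistic for a single mean change in i.i.d. N(mu, 1) data."""
    n = len(x)
    best = 0.0
    for t in range(1, n):
        stat = np.sqrt(t * (n - t) / n) * abs(x[:t].mean() - x[t:].mean())
        best = max(best, stat)
    return best

rng = np.random.default_rng(3)
null = rng.normal(0.0, 1.0, size=200)                       # no change
shifted = np.concatenate([rng.normal(0.0, 1.0, size=100),   # mean jumps by 1.5
                          rng.normal(1.5, 1.0, size=100)])

t_null = glrt_change_statistic(null)
t_shift = glrt_change_statistic(shifted)
```

For correlated data, the segment means would instead be whitened by the (estimated) covariance, which is precisely what the plug-in GLRT of the abstract does and what CUSUM omits.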


Spectrum Sensing for Cognitive Radio Using Kernel-Based Learning

Hou, Shujie, Qiu, Robert C.

arXiv.org Machine Learning

The kernel method is a very powerful tool in machine learning. The kernel trick has been effectively and extensively applied in many areas of machine learning, such as the support vector machine (SVM) and kernel principal component analysis (kernel PCA). The kernel trick is to define a kernel function that relies on the inner product of data in the feature space without requiring the feature-space data themselves. In this paper, the kernel trick is employed to extend the spectrum-sensing algorithm based on the leading eigenvector, developed under the framework of PCA, to a higher-dimensional feature space. Namely, the leading eigenvector of the sample covariance matrix in the feature space is used for spectrum sensing without knowing this eigenvector explicitly. Spectrum sensing with the leading eigenvector under the framework of kernel PCA is proposed, with the inner product as a measure of similarity. A modified kernel GLRT algorithm based on a matched-subspace model is applied to spectrum sensing for the first time. Experimental results on a simulated sinusoidal signal show that spectrum sensing with kernel PCA is about 4 dB better than with PCA; in addition, the kernel GLRT also outperforms the GLRT. The proposed algorithms are also tested on measured DTV signals, where the simulation results show that the kernel methods are 4 dB better than the corresponding linear methods. The leading eigenvector of the sample covariance matrix learned by kernel PCA is more stable than that learned by PCA across different segments of the DTV signal.
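To make the kernel-PCA step concrete, here is a generic sketch (illustrative only; the kernel choice, the signal model, and the actual sensing statistic in the paper differ): compute the leading eigenvector of the centered RBF kernel matrix built from received-signal frames, which plays the role of the feature-space leading eigenvector without ever forming feature-space data.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Pairwise RBF kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def leading_kpca_component(X, gamma=0.5):
    """Leading eigenpair of the centered kernel matrix (first kernel-PCA component)."""
    n = len(X)
    K = rbf_kernel(X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)             # eigenvalues in ascending order
    return w[-1], V[:, -1]

rng = np.random.default_rng(4)
t = np.arange(64)
# Frames of a noisy sinusoid with random phase, standing in for received-signal segments
frames = np.stack([np.sin(0.3 * t + rng.uniform(0, 2 * np.pi))
                   + 0.1 * rng.normal(size=64) for _ in range(40)])

lam, v = leading_kpca_component(frames)
```

In a sensing scheme along these lines, the similarity (inner product) between a new frame's kernel representation and this leading component would serve as the detection statistic, with noise-only frames yielding lower similarity.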