Probability Simplex
Simplex-Optimized Hybrid Ensemble for Large Language Model Text Detection Under Generative Distribution Drift

Kristanto, Sepyan Purnama, Hakim, Lutfi, Yusuf, Dianni

arXiv.org Artificial Intelligence

Abstract--The widespread adoption of large language models (LLMs) has made it difficult to distinguish human writing from machine-produced text in many real applications. Detectors that were effective for one generation of models tend to degrade when newer models or modified decoding strategies are introduced. In this work, we study this lack of stability and propose a hybrid ensemble that is explicitly designed to cope with changing generator distributions. The ensemble combines three complementary components: a RoBERTa-based classifier fine-tuned for supervised detection, a curvature-inspired score based on perturbing the input and measuring changes in model likelihood, and a compact stylometric model built on handcrafted linguistic features. The outputs of these components are fused on the probability simplex, and the weights are chosen via validation-based search. We frame this approach in terms of variance reduction and risk under mixtures of generators, and show that the simplex constraint provides a simple way to trade off the strengths and weaknesses of each branch. Experiments on a 30,000-document corpus drawn from several LLM families, including models unseen during training and paraphrased attack variants, show that the proposed method achieves 94.2% accuracy and an AUC of 0.978. The ensemble also lowers false positives on scientific articles compared to strong baselines, which is critical in educational and research settings where wrongly flagging human work is costly. Text generated by large language models (LLMs) is now routinely used in homework, reports, programming, and informal communication.
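The fusion step described above can be sketched in a few lines: three per-branch probabilities are combined by a convex combination whose weights live on the probability simplex, and the weights are picked by a validation grid search. This is a minimal illustration, not the paper's implementation; the function names, the grid step, and the toy validation data are all hypothetical.

```python
import itertools
import numpy as np

def simplex_grid(step=0.1):
    """Enumerate weight vectors (w1, w2, w3) on the probability simplex."""
    ticks = np.arange(0.0, 1.0 + 1e-9, step)
    for w1, w2 in itertools.product(ticks, ticks):
        w3 = 1.0 - w1 - w2
        if w3 >= -1e-9:
            yield np.array([w1, w2, max(w3, 0.0)])

def fuse(scores, weights):
    """Convex combination of per-branch probabilities: shape (n, 3) -> (n,)."""
    return scores @ weights

def search_weights(val_scores, val_labels, step=0.1):
    """Pick the simplex weights that maximize validation accuracy."""
    best_w, best_acc = None, -1.0
    for w in simplex_grid(step):
        preds = (fuse(val_scores, w) >= 0.5).astype(int)
        acc = (preds == val_labels).mean()
        if acc > best_acc:
            best_acc, best_w = acc, w
    return best_w, best_acc

# Toy validation set: columns stand in for the three branches
# (classifier, curvature score, stylometric model) -- hypothetical numbers.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
scores = np.clip(labels[:, None] * 0.6 + rng.normal(0.2, 0.25, (200, 3)), 0, 1)
w, acc = search_weights(scores, labels)
print(w, acc)
```

The simplex constraint keeps the fused output a valid probability, so the search only trades off how much each branch is trusted.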


Large-Scale Stochastic Sampling from the Probability Simplex

Neural Information Processing Systems

Stochastic gradient Markov chain Monte Carlo (SGMCMC) has become a popular method for scalable Bayesian inference. These methods are based on sampling a discrete-time approximation to a continuous-time process, such as the Langevin diffusion. When applied to distributions defined on a constrained space, the time-discretization error can dominate when we are near the boundary of the space. We demonstrate that, because of this, current SGMCMC methods for the simplex struggle with sparse simplex spaces, i.e., when many of the components are close to zero. Unfortunately, many popular large-scale Bayesian models, such as network or topic models, require inference on sparse simplex spaces. To avoid the biases caused by this discretization error, we propose the stochastic Cox-Ingersoll-Ross process (SCIR), which removes all discretization error, and we prove that samples from the SCIR process are asymptotically unbiased. We discuss how this idea can be extended to target other constrained spaces. Use of the SCIR process within a SGMCMC algorithm is shown to give substantially better performance for a topic model and a Dirichlet process mixture model than existing SGMCMC approaches.
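The key property the abstract relies on is that the Cox-Ingersoll-Ross (CIR) process has an exact transition law (a scaled noncentral chi-squared), so it can be simulated with no discretization error, and normalizing independent Gamma-stationary components yields simplex samples. The sketch below is a simplified full-data illustration of that idea, not the paper's stochastic-gradient SCIR algorithm; the parameter choices (a = 1, sigma^2 = 2) and function names are assumptions made for the example.

```python
import numpy as np

def cir_exact_step(x, alpha, h, rng):
    """One exact draw from the CIR transition whose stationary law is Gamma(alpha, 1).

    CIR: dX = (alpha - X) dt + sqrt(2 X) dW  (a = 1, sigma^2 = 2, b = alpha).
    The transition is a scaled noncentral chi-squared, so there is no
    time-discretization error, even near the boundary.
    """
    c = (1.0 - np.exp(-h)) / 2.0           # scale of the transition
    df = 2.0 * alpha                       # degrees of freedom
    nonc = x * np.exp(-h) / c              # noncentrality
    return c * rng.noncentral_chisquare(df, nonc)

def cir_simplex_samples(alpha, h, n_steps, rng):
    """Run K independent CIR chains and normalize: each row lies on the simplex."""
    x = rng.gamma(alpha)                   # start at stationarity
    out = []
    for _ in range(n_steps):
        x = cir_exact_step(x, alpha, h, rng)
        out.append(x / x.sum())            # Gamma normalization -> Dirichlet-type sample
    return np.array(out)

rng = np.random.default_rng(1)
alpha = np.array([0.1, 0.2, 3.0])          # sparse components stress discretized samplers
samples = cir_simplex_samples(alpha, h=0.5, n_steps=1000, rng=rng)
print(samples.shape)
```

Because each component is exactly Gamma-distributed at stationarity, the normalized vector is a valid draw from the simplex even when some concentration parameters are far below one, which is exactly the sparse regime where discretized Langevin schemes are said to break down.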


Temporal Robustness in Discrete Time Linear Dynamical Systems

Metya, Nilava, Shah, Ankit, Sinha, Arunesh

arXiv.org Artificial Intelligence

Discrete time linear dynamical systems, including Markov chains, have found many applications, including in security settings such as cybersecurity operations center (CSOC) management and in managing health risks. However, in these two scenarios, there is uncertainty about the time horizon for which the system runs. This creates uncertainty about the cost (or reward) incurred based on the state distribution when the system stops. Given past data samples of how long a system ran, we theoretically analyze the cost incurred at the stop of the system as a distributionally robust cost estimation task over a Wasserstein ambiguity set. Towards this, we show an equivalence between a discrete time Markov chain on a probability simplex and a globally asymptotically stable (GAS) discrete time linear dynamical system, allowing us to base our study on a GAS system only. Then, we provide various polynomial time algorithms and hardness results for different cases in our theoretical study, including a novel proof of a fundamental result about a Wasserstein-distance-based polytope. We experiment with real-world data in the CSOC domain and prior data in the health domain to reveal the benefits of our model and approach.
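The setting above can be made concrete with a small sketch: given a Markov chain on the simplex, a per-state cost, and past samples of run lengths, the plug-in estimate of the expected cost at the stopping time averages the cost of the state distribution at each observed horizon. The paper's contribution is to replace this empirical average with a worst case over a Wasserstein ambiguity set; the code below shows only the non-robust plug-in baseline, with hypothetical numbers.

```python
import numpy as np

def stop_cost_estimate(P, x0, cost, horizons):
    """Plug-in estimate of E[cost at the random stopping time].

    P        : row-stochastic transition matrix (each row sums to 1)
    x0       : initial state distribution on the probability simplex
    cost     : per-state cost vector
    horizons : past samples of how long the system ran
    """
    total = 0.0
    for t in horizons:
        xt = x0 @ np.linalg.matrix_power(P, t)   # distribution when the system stops
        total += xt @ cost
    return total / len(horizons)

# Toy 3-state chain, loosely evoking CSOC alert levels (hypothetical numbers).
P = np.array([[0.8, 0.15, 0.05],
              [0.3, 0.6,  0.1 ],
              [0.1, 0.3,  0.6 ]])
x0 = np.array([1.0, 0.0, 0.0])
cost = np.array([0.0, 1.0, 5.0])
horizons = [3, 5, 8, 12, 4]                      # observed run lengths
est = stop_cost_estimate(P, x0, cost, horizons)
print(round(est, 4))
```

The distributionally robust version would maximize this quantity over all horizon distributions within a Wasserstein ball around the empirical one, which is where the paper's polytope results and hardness analysis come in.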



Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper extends stochastic gradient Langevin dynamics by using the Riemannian structure and applies it to the probability simplex. The idea appears to be quite interesting, but there are several confusing parts that I don't quite get. Perhaps the authors can elaborate on those a bit.



Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. This paper shows efficient minimax strategies when the loss function is the squared Mahalanobis distance, for two different settings: 1) the action and outcome spaces are the probability simplex, and 2) they are the Euclidean ball. The paper is clearly written with a consistent set of notations. The games considered appear similar to previous work, such as TW00 and ABRT08. The only other direction of proposed novelty is in the choice of the Mahalanobis distance, a generalization of the squared Euclidean distance.
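For readers unfamiliar with the loss discussed in this review: the squared Mahalanobis distance generalizes the squared Euclidean distance by weighting directions with a positive-definite matrix. A minimal sketch (the matrix and points below are invented for illustration):

```python
import numpy as np

def sq_mahalanobis(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) for positive-definite M."""
    d = x - y
    return float(d @ M @ d)

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # positive definite weight matrix
x = np.array([0.7, 0.3])     # e.g. a point on the probability simplex
y = np.array([0.5, 0.5])
print(sq_mahalanobis(x, y, M))
print(sq_mahalanobis(x, y, np.eye(2)))  # M = I recovers the squared Euclidean distance
```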


Stochastic Gradient Riemannian Langevin Dynamics on the Probability Simplex

Neural Information Processing Systems

In this paper we investigate the use of Langevin Monte Carlo methods on the probability simplex and propose a new method, Stochastic Gradient Riemannian Langevin Dynamics, which is simple to implement and can be applied online. We apply this method to latent Dirichlet allocation in an online setting, and demonstrate that it achieves substantial performance improvements over state-of-the-art online variational Bayesian methods.
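The core move in this line of work is to run Langevin dynamics on positive weights whose normalization lives on the simplex, with a state-dependent (Riemannian) metric and a mirroring step to stay positive. The sketch below is a simplified full-data caricature of that update, targeting a plain Dirichlet via its Gamma representation; it is not the paper's stochastic-gradient LDA sampler, and the step size and target are assumptions made for the example.

```python
import numpy as np

def riemannian_langevin_step(theta, alpha, eps, rng):
    """One Langevin step on positive weights theta with metric G^{-1} = diag(theta).

    Target: independent Gamma(alpha_k, 1) on theta, so theta / theta.sum()
    is Dirichlet(alpha) on the simplex. The drift (alpha - theta)/2 combines the
    preconditioned gradient with the metric's divergence correction, and the
    absolute value mirrors the proposal back into the positive orthant.
    """
    noise = np.sqrt(eps * theta) * rng.standard_normal(theta.shape)
    return np.abs(theta + 0.5 * eps * (alpha - theta) + noise)

rng = np.random.default_rng(2)
alpha = np.array([0.5, 1.0, 2.0])
theta = rng.gamma(alpha)
draws = []
for _ in range(5000):
    theta = riemannian_langevin_step(theta, alpha, eps=0.01, rng=rng)
    draws.append(theta / theta.sum())      # project to the simplex
draws = np.array(draws)
print(draws.mean(axis=0))                  # should approach alpha / alpha.sum()
```

Note the mirroring step: because the proposal is discretized, small components near the simplex boundary still incur bias, which is precisely the failure mode the SCIR abstract above is addressing.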