The Exponentially Weighted Signature

Bloch, Alexandre, Cohen, Samuel N., Lyons, Terry, Mouterde, Joël, Walker, Benjamin

arXiv.org Machine Learning

The signature is a canonical representation of a multidimensional path over an interval. However, it treats all historical information uniformly, offering no intrinsic mechanism for contextualising the relevance of the past. To address this, we introduce the Exponentially Weighted Signature (EWS), generalising the Exponentially Fading Memory (EFM) signature from diagonal to general bounded linear operators. These operators enable cross-channel coupling at the level of temporal weighting together with richer memory dynamics including oscillatory, growth, and regime-dependent behaviour, while preserving the algebraic strengths of the classical signature. We show that the EWS is the unique solution to a linear controlled differential equation on the tensor algebra, and that it generalises both state-space models and the Laplace and Fourier transforms of the path. The group-like structure of the EWS enables efficient computation and makes the framework amenable to gradient-based learning, with the full semigroup action parametrised by and learned through its generator. We use this framework to empirically demonstrate the expressivity gap between the EWS and both the signature and EFM on two SDE-based regression tasks.
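The EFM special case mentioned above admits a small numerical illustration. The sketch below is my own, not the authors' code; the generator matrix `A`, the Euler discretisation, and the `ewsig_level1` helper are all assumptions. It accumulates only the level-1 term of such a weighted signature, which for a diagonal generator reduces to exponentially fading memory and for a general matrix couples channels at the level of temporal weighting, as the abstract describes:

```python
import numpy as np

def ewsig_level1(path, A, dt):
    """Accumulate S satisfying dS = A S dt + dX (level-1 truncation only)."""
    d = path.shape[1]
    S = np.zeros(d)
    expA = np.eye(d) + A * dt  # first-order Euler step for the semigroup exp(A dt)
    for inc in np.diff(path, axis=0):
        S = expA @ S + inc
    return S

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 2)).cumsum(axis=0) * 0.03   # toy 2-d path

# Diagonal generator: exponentially fading memory (EFM-style decay).
S_diag = ewsig_level1(X, A=-0.5 * np.eye(2), dt=0.01)

# Non-diagonal generator: cross-channel coupling with oscillatory memory.
S_coupled = ewsig_level1(X, A=np.array([[-0.5, 0.3], [-0.3, -0.5]]), dt=0.01)
```

With the diagonal choice the recurrence is exactly a bank of independent low-pass filters, which is one way to see the connection to state-space models claimed in the abstract.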




Fast Estimation of Causal Interactions using Wold Processes

Flavio Figueiredo, Guilherme Resende Borges, Pedro O.S. Vaz de Melo, Renato Assunção

Neural Information Processing Systems

Recently, several fields used networked point processes to understand complex systems such as spiking biological neurons [36], social networks [8, 42], geo-sensor networks [22], financial agents in markets [37], television records [48] and patient visits [11]. One of the main objectives in these analyses is to uncover the causal relationships among the entities of the system, or the interaction structure among the nodes, which is also called the latent network structure.



Few-Shot Continual Active Learning by a Robot

Neural Information Processing Systems

The framework also uses uncertainty measures on the Gaussian representations of the previously learned classes to find the most informative samples to be labeled in an increment. We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task.
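The uncertainty-based selection described above can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code: the Mahalanobis-distance score, the covariance regularisation, and the `select_most_informative` helper are my assumptions about one plausible way to rank samples against per-class Gaussian representations.

```python
import numpy as np

def select_most_informative(class_feats, unlabeled, k=2):
    """Rank unlabeled samples by distance to the nearest learned-class Gaussian."""
    stats = []
    for feats in class_feats:  # one (n_i, d) feature array per learned class
        mu = feats.mean(axis=0)
        cov = np.cov(feats, rowvar=False) + 1e-3 * np.eye(feats.shape[1])
        stats.append((mu, np.linalg.inv(cov)))

    def score(x):
        # Squared Mahalanobis distance to the *nearest* class; a large value
        # means the sample is far from everything the robot has learned.
        return min(float((x - mu) @ ci @ (x - mu)) for mu, ci in stats)

    scores = np.array([score(x) for x in unlabeled])
    return np.argsort(scores)[-k:]  # indices of the k most informative samples
```

For example, with two learned classes clustered near 0 and 5, a far-away unlabeled point should be the one selected for labeling in the next increment.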





Prediction Markets as Bayesian Inverse Problems: Uncertainty Quantification, Identifiability, and Information Gain from Price-Volume Histories under Latent Types

Madrigal-Cianci, Juan Pablo, Maya, Camilo Monsalve, Breakey, Lachlan

arXiv.org Machine Learning

Prediction markets are often described as mechanisms that ``aggregate information'' into prices, yet the mapping from dispersed private information to observed market histories is typically noisy, endogenous, and shaped by heterogeneous and strategic participation. This paper formulates prediction markets as Bayesian inverse problems in which the unknown event outcome \(Y\in\{0,1\}\) is inferred from an observed history of market-implied probabilities and traded volumes. We introduce a mechanism-agnostic observation model in log-odds space in which price increments conditional on volume arise from a latent mixture of trader types. The resulting likelihood class encompasses informed and uninformed trading, heavy-tailed microstructure noise, and adversarial or manipulative flow, while requiring only price and volume as observables. Within this framework we define posterior uncertainty quantification for \(Y\), provide identifiability and well-posedness criteria in terms of Kullback--Leibler separation between outcome-conditional increment laws, and derive posterior concentration statements and finite-sample error bounds under general regularity assumptions. We further study stability of posterior odds to perturbations of the observed price--volume path and define realized and expected information gain via the posterior-vs-prior KL divergence and mutual information. The inverse-problem formulation yields explicit diagnostics for regimes in which market histories are informative and stable versus regimes in which inference is ill-posed due to type-composition confounding or outcome--nuisance symmetries. Extensive experiments on synthetic data validate our theoretical predictions regarding posterior concentration rates and identifiability thresholds.
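The inverse-problem view can be illustrated with a deliberately simplified special case (mine, not the paper's model): if the two outcome-conditional increment laws are Gaussians in log-odds space, the posterior odds for \(Y\) accumulate as a running log-likelihood ratio over the observed increments, and the KL separation between the two laws governs how fast the posterior concentrates.

```python
import numpy as np

def posterior_prob_Y1(increments, mu1=0.02, mu0=-0.02, sigma=0.1, prior=0.5):
    """Posterior P(Y=1 | increments) under assumed Gaussian increment laws.

    The Gaussian log-density differences telescope into a single
    log-likelihood-ratio sum added to the prior log-odds.
    """
    llr = np.sum((increments - mu0) ** 2 - (increments - mu1) ** 2) / (2 * sigma**2)
    log_odds = np.log(prior / (1 - prior)) + llr
    return 1.0 / (1.0 + np.exp(-log_odds))

rng = np.random.default_rng(1)
incs = rng.normal(0.02, 0.1, size=500)  # price-increment history simulated under Y = 1
p = posterior_prob_Y1(incs)
```

Shrinking the gap between `mu1` and `mu0` (i.e. the KL separation) makes the same history length far less informative, which is the identifiability-threshold behaviour the abstract studies; when the two laws coincide, the posterior never moves off the prior.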