Lassoed Forests: Random Forests with Adaptive Lasso Post-selection

Shang, Jing, Bannon, James, Haibe-Kains, Benjamin, Tibshirani, Robert

arXiv.org Machine Learning

Tree-based methods are a family of non-parametric approaches in supervised learning. Random forests use a form of bootstrap aggregation, or bagging, to combine a large collection of trees and produce a final prediction. In regression problems, a random forest gives the same weight to each tree and computes the average out-of-bag prediction. In classification problems, it assigns class labels by majority vote. However, since a single-tree model is known to have high variance, a large number of trees needs to be trained and aggregated in order to reduce variance (Hastie et al. 2009). This can lead to redundant trees, as the bootstrap procedure may select similar sets of samples to train different trees. Moreover, increasing the number of trees does not reduce the bias. Post-selection boosting random forests, proposed by Wang & Wang (2021), is an attempt to reduce bias by applying Lasso regression (Tibshirani 1996) on the predictions from each individual tree. The method returns a sparser forest with fewer trees, as well as different weights assigned to each individual tree.
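The general idea of Lasso post-selection on a forest can be sketched in a few lines of scikit-learn. This is a minimal illustration, not the paper's exact procedure: the paper uses an adaptive Lasso and out-of-bag predictions, while the sketch below fits a plain cross-validated Lasso to in-sample per-tree predictions. The dataset and all hyperparameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

# Illustrative synthetic regression problem.
X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

# Step 1: fit an ordinary random forest.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Step 2: treat each tree's prediction as a feature column
# (n_samples x n_trees design matrix).
P = np.column_stack([tree.predict(X) for tree in rf.estimators_])

# Step 3: a Lasso on the per-tree predictions reweights the trees;
# zeroed coefficients prune redundant trees, giving a sparser forest.
lasso = LassoCV(cv=5, random_state=0).fit(P, y)
n_kept = int(np.count_nonzero(lasso.coef_))
```

Trees with nonzero coefficients form the retained forest, and the fitted coefficients play the role of the per-tree weights described in the abstract.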



Quantum Fisher information matrices from Rényi relative entropies

Wilde, Mark M.

arXiv.org Artificial Intelligence

Quantum generalizations of the Fisher information are important in quantum information science, with applications in high energy and condensed matter physics and in quantum estimation theory, machine learning, and optimization. One can derive a quantum generalization of the Fisher information matrix in a natural way as the Hessian matrix arising in a Taylor expansion of a smooth divergence. Such an approach is appealing for quantum information theorists, given the ubiquity of divergences in quantum information theory. In contrast to the classical case, there is not a unique quantum generalization of the Fisher information matrix, similar to how there is not a unique quantum generalization of the relative entropy or the Rényi relative entropy. In this paper, I derive information matrices arising from the log-Euclidean, $α$-$z$, and geometric Rényi relative entropies, with the main technical tool for doing so being the method of divided differences for calculating matrix derivatives. Interestingly, for all non-negative values of the Rényi parameter $α$, the log-Euclidean Rényi relative entropy leads to the Kubo-Mori information matrix, and the geometric Rényi relative entropy leads to the right-logarithmic derivative Fisher information matrix. Thus, the resulting information matrices obey the data-processing inequality for all non-negative values of the Rényi parameter $α$ even though the original quantities do not. Additionally, I derive and establish basic properties of $α$-$z$ information matrices resulting from the $α$-$z$ Rényi relative entropies. For parameterized thermal states and time-evolved states, I establish formulas for their $α$-$z$ information matrices and hybrid quantum-classical algorithms for estimating them, with applications in quantum Boltzmann machine learning.
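The Hessian construction described above can be made concrete in the classical case (notation mine, for illustration): Taylor-expanding the Kullback-Leibler divergence in its second argument yields the classical Fisher information matrix as the quadratic form, and the abstract's point is that different quantum divergences generalize this single classical object in inequivalent ways.

```latex
% Classical sketch: the Fisher information matrix I(\theta) is the
% Hessian arising in a second-order expansion of the KL divergence.
D\!\left(p_{\theta} \,\middle\|\, p_{\theta+\mathrm{d}\theta}\right)
  = \frac{1}{2}\sum_{i,j} I_{ij}(\theta)\,\mathrm{d}\theta_i\,\mathrm{d}\theta_j
    + O\!\left(\lVert \mathrm{d}\theta \rVert^{3}\right),
\qquad
I_{ij}(\theta)
  = \mathbb{E}_{p_\theta}\!\left[\partial_i \ln p_\theta \,\partial_j \ln p_\theta\right].
```

Replacing $D$ with the log-Euclidean, $α$-$z$, or geometric Rényi relative entropy produces the distinct quantum information matrices studied in the paper.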




Consistent causal discovery with equal error variances: a least-squares perspective

Chaudhuri, Anamitra, Ni, Yang, Bhattacharya, Anirban

arXiv.org Machine Learning

We consider the problem of recovering the true causal structure among a set of variables, generated by a linear acyclic structural equation model (SEM) with the error terms being independent and having equal variances. It is well-known that the true underlying directed acyclic graph (DAG) encoding the causal structure is uniquely identifiable under this assumption. In this work, we establish that the sum of minimum expected squared errors for every variable, while predicted by the best linear combination of its parent variables, is minimised if and only if the causal structure is represented by any supergraph of the true DAG. This property is further utilised to design a Bayesian DAG selection method that recovers the true graph consistently.
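The least-squares property can be checked numerically on a toy two-variable SEM with equal error variances (my own example, not from the paper): the sum of residual variances is smaller for the true causal ordering than for the reversed one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy linear SEM with equal error variances:
# X1 = e1,  X2 = 0.8 * X1 + e2,  Var(e1) = Var(e2) = 1.
x1 = rng.standard_normal(n)
x2 = 0.8 * x1 + rng.standard_normal(n)

def residual_var(y, X=None):
    """Variance of y after least-squares regression on X (raw variance if X is None)."""
    if X is None:
        return y.var()
    A = np.column_stack([np.ones_like(y), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return (y - A @ beta).var()

# Score of a candidate ordering = sum of minimum expected squared errors,
# each variable regressed on its candidate parents.
score_true = residual_var(x1) + residual_var(x2, x1)  # true DAG X1 -> X2, approx 2.0
score_rev  = residual_var(x2) + residual_var(x1, x2)  # reversed DAG, approx 2.25
```

Here the true ordering attains the smaller score, consistent with the paper's result that the sum of minimum expected squared errors is minimised exactly on supergraphs of the true DAG.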