Neural Trees for Learning on Graphs

Neural Information Processing Systems

Graph Neural Networks (GNNs) have emerged as a flexible and powerful approach for learning over graphs. Despite this success, existing GNNs are constrained by their local message-passing architecture and are provably limited in their expressive power. In this work, we propose a new GNN architecture - the Neural Tree. The neural tree architecture does not perform message passing on the input graph, but on a tree-structured graph, called the H-tree, that is constructed from the input graph. Nodes in the H-tree correspond to subgraphs in the input graph, and they are reorganized in a hierarchical manner such that the parent of a node in the H-tree always corresponds to a larger subgraph in the input graph.
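The H-tree described above is closely related to a tree decomposition of the input graph, whose nodes (bags) correspond to subgraphs of the original graph and which is itself tree-structured. A minimal sketch of that building block, using networkx's approximate minimum-degree heuristic; this is an illustrative stand-in, not the paper's actual H-tree construction, which adds further hierarchical structure.

```python
# Sketch: a tree decomposition of a small input graph. Each decomposition
# node (bag) is a subgraph of the original graph, and message passing can
# run on the resulting tree. Illustrative only; the Neural Tree paper's
# H-tree construction is more elaborate.
import networkx as nx
from networkx.algorithms.approximation import treewidth_min_degree

G = nx.cycle_graph(6)                    # a small input graph
width, decomp = treewidth_min_degree(G)  # bags are frozensets of G's nodes

assert nx.is_tree(decomp)                # message passing runs on a tree
for bag in decomp.nodes:
    assert set(bag) <= set(G.nodes)      # each bag is a subgraph of G
```

The key property exploited by the architecture is visible here: every node of `decomp` corresponds to a (small) subgraph of `G`, so computation on the tree aggregates information over subgraphs rather than single vertices.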


Conditional Diffusion-Flow models for generating 3D cosmic density fields: applications to f(R) cosmologies

Riveros, Julieth Katherine, Saavedra, Paola, Hortua, Hector J., Garcia-Farieta, Jorge Enrique, Olier, Ivan

arXiv.org Artificial Intelligence

Next-generation galaxy surveys promise unprecedented precision in testing gravity at cosmological scales. However, realising this potential requires accurately modelling the non-linear cosmic web. We address this challenge by exploring conditional generative modelling to create 3D dark matter density fields via score-based (diffusion) and flow-based methods. Our results demonstrate the power of diffusion models to accurately reproduce the matter power spectra and bispectra, even for unseen configurations. Flow-based models offer a significant speed-up, with slightly reduced accuracy, when reconstructing the probability distribution function, but they struggle with higher-order statistics. To improve conditional generation, we introduce a novel multi-output model to develop feature representations of the cosmological parameters. Our findings offer a powerful tool for exploring deviations from standard gravity, combining high precision with reduced computational cost, thus paving the way for more comprehensive and efficient cosmological analyses.
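The matter power spectrum used above to validate generated fields is the isotropically averaged squared amplitude of the field's Fourier modes. A minimal sketch of that measurement with numpy; the box size, binning, and normalisation are illustrative assumptions, not the paper's pipeline.

```python
# Sketch: isotropically binned power spectrum of a 3D density field,
# the summary statistic used to compare generated and true fields.
import numpy as np

def power_spectrum(delta, box_size=1.0, n_bins=16):
    """Bin |FFT(delta)|^2 in shells of |k| (arbitrary normalisation)."""
    n = delta.shape[0]
    power = np.abs(np.fft.fftn(delta)) ** 2
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k_mag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    p = power.ravel()
    bins = np.linspace(k_mag[k_mag > 0].min(), k_mag.max(), n_bins + 1)
    idx = np.minimum(np.digitize(k_mag, bins), n_bins)  # clip top edge
    pk = np.array([p[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, n_bins + 1)])      # skip the k=0 mode
    return 0.5 * (bins[1:] + bins[:-1]), pk

rng = np.random.default_rng(0)
field = rng.standard_normal((16, 16, 16))  # stand-in for a generated field
k_centres, pk = power_spectrum(field)      # white noise gives a flat spectrum
```

For Gaussian white noise the binned spectrum is roughly flat; deviations from flatness in a generated field are exactly what such a comparison is designed to expose.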


Probability for machine learning

#artificialintelligence

In this post, we will walk through the building blocks of probability theory and use them to motivate fundamental ideas in machine learning. In the first section, we will talk about random variables and how they help quantify real-world experiments. The final section will talk about how these mathematical concepts are used together to solve machine learning problems. Let's begin our journey with a fun experiment. Take a pen and paper and go outside to the main street in front of your house. Look at every person who walks past you and take note of their hair color, some approximation of their height in centimeters, and any other detail you find interesting. Do this for about 10 minutes. You have conducted your first experiment! With this experiment, you can now answer some questions: How many people walked past you?
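The tally from the street-corner experiment already defines an empirical probability distribution over a random variable such as hair colour. A minimal sketch, with made-up observations:

```python
# Sketch: turning raw observations into an empirical probability mass
# function. The data here are invented for illustration.
from collections import Counter

observations = ["black", "brown", "black", "blond", "brown", "black"]
counts = Counter(observations)
total = sum(counts.values())
empirical_pmf = {colour: c / total for colour, c in counts.items()}

# probabilities are non-negative and sum to one
assert abs(sum(empirical_pmf.values()) - 1.0) < 1e-9
```

Here "hair colour of the next passer-by" plays the role of the random variable, and the relative frequencies estimate its distribution.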


A Quantum Algorithm for Computing All Diagnoses of a Switching Circuit

Feldman, Alexander, de Kleer, Johan, Matei, Ion

arXiv.org Artificial Intelligence

Faults are stochastic by nature, while most man-made systems, and especially computers, work deterministically. This necessitates linking probability theory with mathematical logic, automata, and switching circuit theory. This paper provides such a connection via quantum information theory, which is an intuitive approach as quantum physics obeys probability laws. We provide a novel approach for computing diagnoses of switching circuits with gate-based quantum computers. The approach is based on the idea of putting the qubits representing faults in superposition and computing all, often exponentially many, diagnoses simultaneously. We empirically compare the quantum algorithm for diagnostics to an approach based on SAT and model counting. For a benchmark of combinational circuits we establish an error of less than one percent in estimating the true probability of faults.
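The core idea of placing fault qubits in superposition can be sketched with a plain statevector simulation: applying a Hadamard gate to each of k fault qubits produces a uniform superposition over all 2^k fault assignments (diagnoses) at once. This toy sketch omits everything else the paper's algorithm does, such as encoding the circuit's logic and observations.

```python
# Sketch: k fault qubits in uniform superposition represent all 2**k
# diagnoses simultaneously. Plain numpy statevector simulation.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

k = 3                                 # three possible faults
state = np.zeros(2 ** k)
state[0] = 1.0                        # |000>: all components healthy

U = H
for _ in range(k - 1):                # tensor H onto every fault qubit
    U = np.kron(U, H)
state = U @ state

probs = state ** 2                    # amplitudes are real here
# every fault assignment now carries equal probability 1 / 2**k
assert np.allclose(probs, 1 / 2 ** k)
```

Measuring such a state samples diagnoses according to their amplitudes; the full algorithm shapes those amplitudes so that consistent diagnoses dominate.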


Probability Distribution Functions in Neural Networks

#artificialintelligence

"Neural networks are computing systems with interconnected nodes that work much like neurons in the human brain." Neural networks are nodes in a densely packed system that takes input numbers and outputs more numbers. If we look closely at a dense neural network, we can find neurons connected, like the image below. If we further zoom in, we can see precisely what each neuron does. For example, a neuron can be seen as a box that eats a number and throws another computer number as output.

Online Aggregation of Probability Forecasts with Confidence

V'yugin, Vladimir, Trunov, Vladimir

arXiv.org Artificial Intelligence

The paper presents numerical experiments and some theoretical developments in prediction with expert advice (PEA). One experiment deals with predicting electricity consumption depending on temperature and uses real data. As the pattern of dependence can change with season and time of day, the domain naturally admits a PEA formulation with experts having different "areas of expertise". We consider the case where several competing methods produce online predictions in the form of probability distribution functions. The dissimilarity between a probability forecast and an outcome is measured by a loss function (scoring rule). A popular example of a scoring rule for continuous outcomes is the Continuous Ranked Probability Score (CRPS). In this paper the problem of combining probabilistic forecasts is considered in the PEA framework. We show that CRPS is a mixable loss function, so a time-independent upper bound for the regret of the Vovk aggregating algorithm using CRPS as a loss function can be obtained. We also incorporate a "smooth" version of the method of specialized experts in this scheme, which allows us to combine the probabilistic predictions of specialized experts with overlapping domains of competence.
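CRPS can be estimated directly from forecast samples via its well-known energy form, CRPS(F, y) = E|X − y| − ½ E|X − X′|, which is zero exactly when the forecast puts all mass on the outcome. A minimal sketch of that estimator; it illustrates the scoring rule only, not the paper's aggregation scheme.

```python
# Sketch: sample-based (energy form) estimate of CRPS for an ensemble
# forecast: E|X - y| - 0.5 * E|X - X'|.
import numpy as np

def crps_ensemble(samples, outcome):
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - outcome))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# a point forecast equal to the outcome scores zero
assert crps_ensemble([3.0, 3.0, 3.0], 3.0) == 0.0
# a spread-out forecast pays for both distance and dispersion
assert abs(crps_ensemble([0.0, 2.0], 1.0) - 0.5) < 1e-12
```

The second assertion works out by hand: the distance term is 1, and the self-distance term is ½ · 1 = 0.5, giving CRPS 0.5.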


A Guide to Different Types of Noises and Image Denoising Methods

#artificialintelligence

With the increasing use of digital cameras, people come across a variety of images in their daily life. Some of these images are of good quality, while others are poor, often degraded by noise. This noise may be caused by low-light conditions or other intensity problems. To denoise an image, i.e., to reduce the noise in it, various approaches are used. Image denoising has been a hot topic of research for a long time and is still under experimentation by researchers.
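The simplest of those approaches is local averaging: a mean filter replaces each pixel by the average of its neighbourhood, trading a little blur for a large noise reduction. A minimal sketch with numpy on a synthetic image (wraparound edge handling, for brevity):

```python
# Sketch: additive Gaussian noise on a flat synthetic image, reduced with
# a 3x3 mean filter built from shifted copies of the image.
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((32, 32), 0.5)
noisy = clean + rng.normal(0.0, 0.1, clean.shape)  # additive Gaussian noise

# 3x3 mean filter via shifted averages (edges wrap around, for brevity)
denoised = np.mean(
    [np.roll(np.roll(noisy, dx, 0), dy, 1)
     for dx in (-1, 0, 1) for dy in (-1, 0, 1)],
    axis=0,
)

# averaging 9 pixels cuts the noise standard deviation roughly threefold
assert np.std(denoised - clean) < np.std(noisy - clean)
```

More sophisticated methods (median filtering, non-local means, learned denoisers) refine this idea by choosing which pixels to average and how to weight them.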


Mixed Moments for the Product of Ginibre Matrices

Halmagyi, Nick, Lal, Shailesh

arXiv.org Machine Learning

We study the ensemble of a product of n complex Gaussian i.i.d. matrices. We find that this ensemble is Gaussian, with a variance matrix that is averaged over a multi-Wishart ensemble. We compute the mixed moments and find that at large $N$ they are given by an enumeration of non-crossing pairings weighted by Fuss-Catalan numbers.
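The Fuss-Catalan numbers appearing as weights here are, in the standard convention, $FC_p(m) = \frac{1}{(p-1)m+1}\binom{pm}{m}$, which reduce to the ordinary Catalan numbers at $p = 2$. A small sketch assuming that standard definition:

```python
# Sketch: Fuss-Catalan numbers FC_p(m) = C(p*m, m) / ((p-1)*m + 1);
# p = 2 recovers the ordinary Catalan numbers 1, 1, 2, 5, 14, ...
from math import comb

def fuss_catalan(p, m):
    return comb(p * m, m) // ((p - 1) * m + 1)  # always an integer

assert [fuss_catalan(2, m) for m in range(5)] == [1, 1, 2, 5, 14]
assert [fuss_catalan(3, m) for m in range(4)] == [1, 1, 3, 12]
```

In free probability, $FC_{n+1}(m)$ counts the moments of the squared singular values of a product of $n$ free factors, which is where such weights typically enter.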


Online Learning with Continuous Ranked Probability Score

V'yugin, Vladimir, Trunov, Vladimir

arXiv.org Machine Learning

Probabilistic forecasts in the form of probability distributions over future events have become popular in several fields of statistical science. The dissimilarity between a probability forecast and an outcome is measured by a loss function (scoring rule). A popular example of a scoring rule for continuous outcomes is the continuous ranked probability score (CRPS). We consider the case where several competing methods produce online predictions in the form of probability distribution functions. In this paper, the problem of combining probabilistic forecasts is considered in the prediction with expert advice framework. We show that CRPS is a mixable loss function, so a time-independent upper bound for the regret of Vovk's aggregating algorithm using CRPS as a loss function can be obtained. We present the results of numerical experiments illustrating the proposed methods.
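At the heart of aggregating algorithms in the expert-advice framework is an exponential-weights update: each expert's weight is multiplied by exp(−η · loss) after every round. The sketch below shows only that weight update; Vovk's aggregating algorithm additionally applies a substitution function to produce the combined forecast for mixable losses such as CRPS.

```python
# Sketch: exponential-weights update over experts, the core of
# aggregating algorithms in prediction with expert advice.
import math

def update_weights(weights, losses, eta=1.0):
    """Multiply each expert's weight by exp(-eta * loss), renormalise."""
    w = [wi * math.exp(-eta * li) for wi, li in zip(weights, losses)]
    total = sum(w)
    return [wi / total for wi in w]

weights = [0.5, 0.5]
for losses in [(0.1, 0.9), (0.2, 0.8)]:  # expert 0 consistently better
    weights = update_weights(weights, losses)

assert weights[0] > weights[1]           # the better expert gains weight
assert abs(sum(weights) - 1.0) < 1e-9
```

The regret bounds discussed in the abstract quantify how little the aggregated forecaster loses, in CRPS, relative to the best expert in hindsight.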