Collaborating Authors

 Zinkov, Rob


BlackJAX: Composable Bayesian inference in JAX

arXiv.org Machine Learning

BlackJAX is a library implementing sampling and variational inference algorithms commonly used in Bayesian computation. It is designed for ease of use, speed, and modularity by taking a functional approach to the algorithms' implementation. BlackJAX is written in Python, using JAX to compile and run NumPy-like samplers and variational methods on CPUs, GPUs, and TPUs. The library integrates well with probabilistic programming languages by working directly with the (un-normalized) target log density function. BlackJAX is intended as a collection of low-level, composable implementations of basic statistical 'atoms' that can be combined to perform well-defined Bayesian inference, but also provides high-level routines for ease of use. It is designed for users who need cutting-edge methods, researchers who want to create complex sampling methods, and people who want to learn how these methods work.
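To make the functional design concrete, here is a minimal sketch of drawing samples from a toy target with a NUTS kernel. The abstract does not include this code; the call names (blackjax.nuts, the init/step pair, jax.lax.scan for the sampling loop) reflect the library's documented interface at the time of writing and may differ between versions.

```python
import jax
import jax.numpy as jnp
import blackjax  # assumed import name for the library

# Un-normalized log density of the target: a standard 2-D normal.
def logdensity_fn(x):
    return -0.5 * jnp.sum(x ** 2)

# Build a NUTS kernel from the log density and its tuning parameters.
nuts = blackjax.nuts(logdensity_fn, step_size=0.1,
                     inverse_mass_matrix=jnp.ones(2))
state = nuts.init(jnp.zeros(2))

# One transition of the Markov chain; JIT-compiled by JAX.
@jax.jit
def one_step(state, rng_key):
    state, info = nuts.step(rng_key, state)
    return state, state.position

keys = jax.random.split(jax.random.PRNGKey(0), 1000)
_, samples = jax.lax.scan(one_step, state, keys)  # samples has shape (1000, 2)
```

Because the kernel is a pure step function over an explicit state, it composes directly with jax.jit, jax.vmap (for running chains in parallel), and adaptation routines, which is the modularity the abstract refers to.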


Simulation-Based Inference for Global Health Decisions

arXiv.org Machine Learning

The COVID-19 pandemic has highlighted the importance of in-silico epidemiological modelling in predicting the dynamics of infectious diseases to inform health policy and decision makers about suitable prevention and containment strategies. Work in this setting involves solving challenging inference and control problems in individual-based models of ever increasing complexity. Here we discuss recent breakthroughs in machine learning, specifically in simulation-based inference, and explore its potential as a novel venue for model calibration to support the design and evaluation of public health interventions. To further stimulate research, we are developing software interfaces that …

This is fomenting the development of comprehensive modelling and simulation to support the design of health interventions and policies, and to guide decision-making in a variety of health system domains [22, 49]. For example, simulations have provided valuable insight to deal with public health problems such as tobacco consumption in New Zealand [50], and diabetes and obesity in the US [58]. They have been used to explore policy options such as those in maternal and antenatal care in Uganda [44], and applied to evaluate health reform scenarios such as predicting changes in access to primary care services in Portugal [21]. Their applicability in informing the design of cancer screening programmes has been also discussed [42, 23]. Recently, simulations have informed the response to the COVID-19 outbreak [19].
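As a purely illustrative aside, the kind of calibration problem simulation-based inference targets can be sketched as a rejection-ABC loop around a toy stochastic SIR simulator. This is not the paper's method or software; all names below (simulate_outbreak, abc_rejection, the prior range over beta) are invented for the example.

```python
import numpy as np

def simulate_outbreak(beta, gamma=0.1, n_days=60, n_pop=1000, rng=None):
    """Toy discrete-time stochastic SIR simulator; returns daily new infections."""
    rng = rng if rng is not None else np.random.default_rng()
    s, i = n_pop - 1, 1
    new_cases = []
    for _ in range(n_days):
        infections = rng.binomial(s, 1.0 - np.exp(-beta * i / n_pop))
        recoveries = rng.binomial(i, gamma)
        s, i = s - infections, i + infections - recoveries
        new_cases.append(infections)
    return np.array(new_cases)

def abc_rejection(observed, n_draws=10000, tolerance=200.0, seed=0):
    """Keep prior draws whose simulated epidemic curve lies close to the data."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        beta = rng.uniform(0.05, 1.0)            # prior over the transmission rate
        simulated = simulate_outbreak(beta, rng=rng)
        if np.linalg.norm(simulated - observed) < tolerance:
            accepted.append(beta)
    return np.array(accepted)                    # approximate posterior samples for beta
```

The point of the sketch is only that the model enters exclusively through its forward simulator, which is the setting simulation-based inference addresses; the paper discusses far richer individual-based models than this toy.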


Faithful Inversion of Generative Models for Effective Amortized Inference

Neural Information Processing Systems

Inference amortization methods share information across multiple posterior-inference problems, allowing each to be carried out more efficiently. Generally, they require the inversion of the dependency structure in the generative model, as the modeller must learn a mapping from observations to distributions approximating the posterior. Previous approaches have involved inverting the dependency structure in a heuristic way that fails to capture these dependencies correctly, thereby limiting the achievable accuracy of the resulting approximations. We introduce an algorithm for faithfully, and minimally, inverting the graphical model structure of any generative model. Such inverses have two crucial properties: (a) they do not encode any independence assertions that are absent from the model; and (b) they are local maxima for the number of true independencies encoded. We prove the correctness of our approach and empirically show that the resulting minimally faithful inverses lead to better inference amortization than existing heuristic approaches.
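The failure mode of heuristic inversion can be seen on the smallest possible example, a collider x → z ← y. The sketch below is illustrative only and does not implement the paper's inversion algorithm; it uses networkx's d-separation query (nx.d_separated, renamed is_d_separator in newer releases) to show that naive edge reversal asserts an independence the model does not have, while adding one edge yields an inverse with no false assertions.

```python
import networkx as nx

# Generative model with a collider: x -> z <- y.
forward = nx.DiGraph([("x", "z"), ("y", "z")])

# Heuristic inverse: simply reverse every edge, giving z -> x and z -> y.
naive_inverse = forward.reverse()

# The naive inverse asserts x is independent of y given z ...
print(nx.d_separated(naive_inverse, {"x"}, {"y"}, {"z"}))   # True
# ... but in the forward model, conditioning on the collider z couples x and y.
print(nx.d_separated(forward, {"x"}, {"y"}, {"z"}))         # False

# A faithful (and here minimal) inverse keeps z -> x, z -> y and adds x -> y,
# so it encodes no independence that is absent from the model.
faithful_inverse = nx.DiGraph([("z", "x"), ("z", "y"), ("x", "y")])
print(nx.d_separated(faithful_inverse, {"x"}, {"y"}, {"z"}))  # False
```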