A Bayesian model for a simulated meta-analysis

#artificialintelligence

There are multiple ways to estimate a Stan model in R, but I choose to build the Stan code directly rather than using the brms or rstanarm packages. In the Stan code, we need to define the data structure, specify the parameters, specify any transformed parameters (which are just a function of the parameters), and then build the model – which includes laying out the prior distributions as well as the likelihood. In this case, the model is slightly different from what was presented in the context of a mixed effects model. The key difference is that there are prior distributions on \(\Delta\) and \(\tau\), introducing an additional level of uncertainty into the estimate. This added measure of uncertainty is a strength of the Bayesian approach.
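
For orientation, one common way to write the random-effects structure described here, with priors on \(\Delta\) (the pooled effect) and \(\tau\) (the between-study standard deviation), is shown below; the data notation (\(y_k\), \(s_k\)) and the specific prior families are illustrative assumptions rather than the post's actual choices.

\[
\begin{aligned}
y_k &\sim \mathcal{N}(\theta_k,\ s_k^2) && \text{observed effect of study } k \text{ with known standard error } s_k \\
\theta_k &\sim \mathcal{N}(\Delta,\ \tau^2) && \text{study-level true effects} \\
\Delta &\sim \mathcal{N}(0,\ 10^2), \quad \tau \sim \text{Half-}t(3, 0, 2.5) && \text{priors that add the extra level of uncertainty}
\end{aligned}
\]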


Bayes Theorem

#artificialintelligence

Both frequentist and Bayesian probability have a role to play in machine learning. For example, if dealing with truly random and discrete variables, such as rolling a six with a die, the traditional approach of simply calculating the odds (frequency) is the fastest way to model a likely outcome. However, if the six keeps coming up far more often than the predicted 1/6 odds, only Bayesian probability would take that new observation into account and increase the confidence level that someone is playing with loaded dice.
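
To make that updating step concrete, here is a minimal sketch; the roll counts, the Beta-Binomial model, and the prior below are assumptions for the sake of the example, not taken from the article.

```python
import numpy as np

# Hypothetical observations: how often a six came up in repeated rolls.
rolls, sixes = 60, 25                  # far more sixes than the expected 60/6 = 10

# Frequentist view: just report the long-run frequency.
print("observed frequency of six:", sixes / rolls)

# Bayesian view: start from a prior belief that the die is probably fair and
# update it with the evidence.  Here p = P(six) gets a Beta(2, 10) prior, a
# hypothetical choice centered near the fair value of 1/6.
a_prior, b_prior = 2, 10
a_post, b_post = a_prior + sixes, b_prior + (rolls - sixes)   # conjugate update

print("posterior mean of P(six):", round(a_post / (a_post + b_post), 3))

# Posterior probability that the die is loaded toward six (p > 1/6),
# estimated by sampling from the Beta posterior.
samples = np.random.default_rng(0).beta(a_post, b_post, 100_000)
print("P(p > 1/6 | data):", (samples > 1/6).mean())
```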


[D] Paper Explained - Deep Ensembles: A Loss Landscape Perspective (Full Video Analysis)

#artificialintelligence

This paper investigates how Deep Ensembles are especially suited to capturing the non-convex loss landscape of neural networks. Surprisingly, they outperform Bayesian neural networks, which are, in theory, doing the same thing.


Inferring change points in the spread of COVID-19 reveals the effectiveness of interventions

Science

From February to April 2020, many countries introduced variations on social distancing measures to slow the ravages of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Publicly available data show that Germany has been particularly successful in minimizing death rates. Dehning et al. quantified three governmental interventions introduced to control the outbreak. The authors predicted that the third governmental intervention—a strict contact ban since 22 March—switched incidence from growth to decay. They emphasize that relaxation of controls must be done carefully, not only because there is a 2-week lag between a measure being enacted and the effect on case reports but also because the three measures used in Germany only just kept virus spread below the growth threshold. Science, this issue p. [eabb9789][1]

INTRODUCTION
When faced with the outbreak of a novel epidemic such as coronavirus disease 2019 (COVID-19), rapid response measures are required by individuals, as well as by society as a whole, to mitigate the spread of the virus. During this initial, time-critical period, neither the central epidemiological parameters nor the effectiveness of interventions such as cancellation of public events, school closings, or social distancing is known.

RATIONALE
As one of the key epidemiological parameters, we inferred the spreading rate λ from confirmed SARS-CoV-2 infections using the example of Germany. We apply Bayesian inference based on Markov chain Monte Carlo sampling to a class of compartmental models [susceptible-infected-recovered (SIR)]. Our analysis characterizes the temporal change of the spreading rate and allows us to identify potential change points. Furthermore, it enables short-term forecast scenarios that assume various degrees of social distancing. A detailed description is provided in the accompanying paper, and the models, inference, and forecasts are available on GitHub ([https://github.com/Priesemann-Group/covid19_inference_forecast][2]). Although we apply the model to Germany, our approach can be readily adapted to other countries or regions.

RESULTS
In Germany, interventions to contain the COVID-19 outbreak were implemented in three steps over 3 weeks: (i) around 9 March 2020, large public events such as soccer matches were canceled; (ii) around 16 March 2020, schools, childcare facilities, and many stores were closed; and (iii) on 23 March 2020, a far-reaching contact ban (Kontaktsperre) was imposed by government authorities; this included the prohibition of even small public gatherings as well as the closing of restaurants and all nonessential stores. From the observed case numbers of COVID-19, we can quantify the impact of these measures on the disease spread using change point analysis. Essentially, we find that at each change point the spreading rate λ decreased by ~40%. At the first change point, assumed around 9 March 2020, λ decreased from 0.43 to 0.25, with 95% credible intervals (CIs) of [0.35, 0.51] and [0.20, 0.30], respectively. At the second change point, assumed around 16 March 2020, λ decreased to 0.15 (CI [0.12, 0.20]). Both changes in λ slowed the spread of the virus but still implied exponential growth (see red and orange traces in the figure). To contain the disease spread, i.e., to turn exponential growth into a decline of new cases, the spreading rate has to be smaller than the recovery rate μ = 0.13 (CI [0.09, 0.18]). This critical transition was reached with the third change point, assumed around 23 March 2020, which resulted in λ = 0.09 (CI [0.06, 0.13]; see blue trace in the figure). From the peak position of daily new cases, one could conclude that the transition from growth to decline was already reached at the end of March. However, the observed transient decline can be explained by a short-term effect that originates from a sudden change in the spreading rate (see Fig. 2C in the main text). As long as interventions and the concurrent individual behavior frequently change the spreading rate, reliable short- and long-term forecasts are very difficult. As the figure shows, the three example scenarios (representing the effects up to the first, second, and third change point) quickly diverge from each other and, consequently, span a considerable range of future case numbers. Inference and subsequent forecasts are further complicated by the delay of ~2 weeks between an intervention and the first useful estimates of the new λ (which are derived from the reported case numbers). Because of this delay, any uncertainty in the magnitude of social distancing in the previous 2 weeks can have a major impact on the case numbers in the subsequent 2 weeks. Beyond 2 weeks, the case numbers depend on our future behavior, for which we must make explicit assumptions. In sum, future interventions (such as lifting restrictions) should be implemented cautiously to respect the delayed visibility of their effects.

CONCLUSION
We developed a Bayesian framework for the spread of COVID-19 to infer central epidemiological parameters and the timing and magnitude of intervention effects. With such an approach, the effects of interventions can be assessed in a timely manner. Future interventions and lifting of restrictions can be modeled as additional change points, enabling short-term forecasts for case numbers. In general, our approach may help to infer the efficiency of measures taken in other countries and inform policy-makers about tightening, loosening, and selecting appropriate measures for containment of COVID-19.

Figure: Bayesian inference of SIR model parameters from daily new cases of COVID-19 enables us to assess the impact of interventions. In Germany, three interventions (mild social distancing, strong social distancing, and contact ban) were enacted consecutively (circles). Colored lines depict the inferred models that include the impact of one, two, or three interventions (red, orange, or green, respectively, with individual data cutoff) or all available data until 21 April 2020 (blue). Forecasts (dashed lines) show how case numbers would have developed without the effects of the subsequent change points. Note the delay between intervention and first possible inference of parameters caused by the reporting delay and the necessary accumulation of evidence (gray arrows). Shaded areas indicate 50% and 95% Bayesian credible intervals.

As coronavirus disease 2019 (COVID-19) is rapidly spreading across the globe, short-term modeling forecasts provide time-critical information for decisions on containment and mitigation strategies. A major challenge for short-term forecasts is the assessment of key epidemiological parameters and how they change when first interventions show an effect. By combining an established epidemiological model with Bayesian inference, we analyzed the time dependence of the effective growth rate of new infections. Focusing on COVID-19 spread in Germany, we detected change points in the effective growth rate that correlate well with the times of publicly announced interventions. Thereby, we could quantify the effect of interventions and incorporate the corresponding change points into forecasts of future scenarios and case numbers. Our code is freely available and can be readily adapted to any country or region.

[1]: /lookup/doi/10.1126/science.abb9789
[2]: https://github.com/Priesemann-Group/covid19_inference_forecast
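
As a rough illustration of the mechanism described above, the sketch below runs a minimal discrete-time SIR simulation with a piecewise-constant spreading rate. The λ and μ values are the median estimates quoted in the summary, but the population size, initial conditions, and exact change-point days are placeholder assumptions, and the sketch omits the reporting delay and the Bayesian MCMC fitting that the paper actually performs.

```python
# Minimal discrete-time SIR simulation with change points in the spreading rate lambda.
N = 83_000_000                       # approximate population of Germany (assumption)
mu = 0.13                            # recovery rate (median estimate from the summary)
lambdas = [0.43, 0.25, 0.15, 0.09]   # spreading rate before/after each change point
change_days = [9, 16, 23]            # days in March 2020 at which lambda changes (approx.)

S, I = N - 1_000, 1_000              # assumed initial susceptible/infected counts
new_cases = []
for day in range(1, 61):             # day 1 = 1 March 2020
    lam = lambdas[sum(day >= d for d in change_days)]
    infections = lam * S * I / N     # new infections on this day
    recoveries = mu * I
    S -= infections
    I += infections - recoveries
    new_cases.append(infections)

# Exponential growth turns into decline only after the third change point,
# once lambda (0.09) drops below the recovery rate mu (0.13).
print([round(c) for c in new_cases[::10]])
```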


Probabilistic Programming and Bayesian Inference for Time Series Analysis and Forecasting

#artificialintelligence

As described in [1][2], time series data includes many kinds of real experimental data taken from various domains such as finance, medicine, and scientific research (e.g., global warming, speech analysis, earthquakes). Time series forecasting has many real applications in various areas such as forecasting of business (e.g., sales, stock), weather, disease, and others [2]. Statistical modeling and inference (e.g., the ARIMA model) [1][2] is one of the popular methods for time series analysis and forecasting. The philosophy of Bayesian inference is to consider probability as a measure of believability in an event [3][4][5] and to use Bayes' theorem to update that probability as more evidence or information becomes available, while the philosophy of frequentist inference considers probability as the long-run frequency of events [3]. Generally speaking, we can use frequentist inference only when a large number of data samples are available.
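
To make the Bayesian view concrete for a time series, here is a minimal sketch (not from the article) that infers the coefficient of an AR(1) model by grid-approximating its posterior under a flat prior; the data are simulated and the noise scale is assumed known for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series: y_t = phi * y_{t-1} + eps_t  (hypothetical data)
phi_true, sigma, n = 0.7, 1.0, 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal(0.0, sigma)

# Grid approximation of the posterior over phi with a flat prior on (-1, 1)
phi_grid = np.linspace(-0.99, 0.99, 397)
loglik = np.array([-0.5 * np.sum((y[1:] - p * y[:-1]) ** 2) / sigma**2
                   for p in phi_grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()

phi_mean = float(np.sum(phi_grid * post))
print(f"posterior mean of phi: {phi_mean:.3f}")

# One-step-ahead forecast: the predictive mean, plugging in the posterior mean of phi
print(f"one-step-ahead forecast mean: {phi_mean * y[-1]:.3f}")
```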


Regularization -- Part 2

#artificialintelligence

These are the lecture notes for FAU's YouTube Lecture "Deep Learning". This is a full transcript of the lecture video & matching slides. We hope you enjoy this as much as the videos. Of course, this transcript was created largely automatically with deep learning techniques, and only minor manual modifications were performed. If you spot mistakes, please let us know!


Causal AI & Bayesian Networks

#artificialintelligence

We are all familiar with the dictum that "correlation does not imply causation". Furthermore, given a data file with samples of two variables x and z, we all know how to calculate the correlation between x and z. But it's only an elite minority, the few, the proud, the Bayesian Network aficionados, that know how to calculate the causal connection between x and z. Neural Net aficionados are incapable of doing this. Their Neural nets are just too wimpy to cut it.
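
The dictum can be made concrete with a small simulation; the structural model below is a hypothetical illustration, not anything from the article. A hidden confounder makes x and z strongly correlated even though x has no causal effect on z, which is exactly the gap a causal model has to close.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical structural model: a hidden confounder w drives both x and z,
# while x itself has no causal effect on z.
w = rng.normal(size=n)
x = w + rng.normal(scale=0.5, size=n)
z = w + rng.normal(scale=0.5, size=n)

# Correlation is easy to compute from the data file alone ...
print("corr(x, z) =", round(np.corrcoef(x, z)[0, 1], 3))   # clearly nonzero

# ... but if x is set by intervention, independently of w, z is unaffected:
# the causal effect of x on z is zero despite the strong correlation.
x_do = rng.normal(size=n)                                   # x set by hand
z_do = w + rng.normal(scale=0.5, size=n)                    # z does not depend on x
print("corr under intervention =", round(np.corrcoef(x_do, z_do)[0, 1], 3))  # ~ 0
```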



Generative Adversarial Networks (GANs) & Bayesian Networks

#artificialintelligence

Generative Adversarial Networks (GANs) software is software for producing forgeries and imitations of data (aka synthetic data, fake data). Human beings have been making fakes, with good or evil intent, of almost everything they possibly can, since the beginning of the human race. Thus, perhaps not too surprisingly, GAN software has been widely used since it was first proposed in this amazingly recent 2014 paper. To gauge how widely GAN software has been used so far, see, for example, this 2019 article entitled "18 Impressive Applications of Generative Adversarial Networks (GANs)", which covers sounds (voices, music, ...), images (realistic pictures, paintings, drawings, handwriting, ...), text, etc. The forgeries can be tweaked so that they range from being very similar to the originals to being whimsical exaggerations thereof.


Bayes' Theorem in Layman's Terms

#artificialintelligence

If you have difficulty understanding Bayes' theorem, trust me, you are not alone. In this tutorial, I'll help you cross that bridge step by step. Suppose Alex and Brenda are two people in your office. While you are working, you see someone walk in front of you, but you don't notice who it is. Now I'll give you some extra information, and we'll calculate the probabilities with it: the probability that Alex is the person who passed by is 2/5, and the probability that Brenda is the person who passed by is 3/5. The probabilities calculated before the new information are called the prior, and the probabilities calculated after the new information are called the posterior. Now consider a scenario where Alex comes to the office 3 days a week and Brenda comes to the office 1 day a week.
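
A minimal sketch of the prior-to-posterior update the tutorial describes is shown below. The likelihood values are hypothetical placeholders (the excerpt does not say what the extra information is); they are chosen so that, starting from a uniform prior, the posteriors come out to the 2/5 and 3/5 quoted above.

```python
# Bayes' rule: posterior ∝ likelihood × prior, normalized over the candidates.
priors = {"Alex": 0.5, "Brenda": 0.5}        # belief before the extra information
likelihoods = {"Alex": 0.4, "Brenda": 0.6}   # P(extra information | person), hypothetical

unnormalized = {p: priors[p] * likelihoods[p] for p in priors}
total = sum(unnormalized.values())
posteriors = {p: v / total for p, v in unnormalized.items()}

print(posteriors)   # -> Alex ≈ 0.4 (2/5), Brenda ≈ 0.6 (3/5)
```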