Graphical model inference: Sequential Monte Carlo meets deterministic approximations
Approximate inference in probabilistic graphical models (PGMs) can be grouped into deterministic methods and Monte-Carlo-based methods. The former can often provide accurate and rapid inferences, but are typically associated with biases that are hard to quantify. The latter enjoy asymptotic consistency, but can suffer from high computational costs. In this paper we present a way of bridging the gap between deterministic and stochastic inference. Specifically, we suggest an efficient sequential Monte Carlo (SMC) algorithm for PGMs which can leverage the output from deterministic inference methods. While generally applicable, we show explicitly how this can be done with loopy belief propagation, expectation propagation, and Laplace approximations. The resulting algorithm can be viewed as a post-correction of the biases associated with these methods and, indeed, numerical results show clear improvements over the baseline deterministic methods as well as over plain SMC.
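The SMC machinery the abstract builds on can be sketched in a few lines. Below is a minimal bootstrap particle filter for a toy linear-Gaussian chain; note that the paper's proposal would instead be constructed from a deterministic approximation (e.g. EP or a Laplace approximation), so the model, function names, and parameters here are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_smc(y, n_particles=2000):
    """Bootstrap SMC sketch for x_t = 0.5 x_{t-1} + N(0,1), y_t = x_t + N(0,1).

    Returns an unbiased-in-Z estimate of the log normalizing constant
    log p(y_1:T), accumulated from the particle weights.
    """
    x = rng.normal(size=n_particles)                  # particles from the N(0,1) prior
    log_z = 0.0
    for obs in y:
        x = 0.5 * x + rng.normal(size=n_particles)    # propagate through the transition
        log_w = -0.5 * (obs - x) ** 2 - 0.5 * np.log(2 * np.pi)  # Gaussian log-likelihood
        m = log_w.max()
        log_z += m + np.log(np.exp(log_w - m).mean()) # log-sum-exp weight update
        w = np.exp(log_w - m)
        x = rng.choice(x, size=n_particles, p=w / w.sum())  # multinomial resampling
    return log_z

log_z = bootstrap_smc(np.array([0.2, -0.1, 0.4]))
```

Swapping the prior proposal for one derived from a deterministic approximation is exactly where the variance reduction the abstract describes would enter.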
Iran's deadly drone arsenal is a 'wake-up call for America': Expert warns US defenses may be unprepared for swarm attacks
A US military drone expert has warned that Iranian attack drones could potentially slip through America's defenses and strike targets on US soil. Brett Velicovich, a former US Army intelligence and special operations soldier who spent years using drones to hunt ISIS leaders before founding drone company PowerUs, said the threat comes from a new type of warfare that the US is still struggling to defend against. 'These new asymmetric threats, where you've got low-cost, cheap, small drones, in some cases, that are able to be sent in massive waves, don't have the same signature of an intercontinental ballistic missile,' Velicovich explained.
- Asia > Middle East > Israel (0.24)
- Asia > China (0.24)
- North America > Canada > Alberta (0.14)
- (16 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military (1.00)
Hamiltonian Variational Auto-Encoder
Variational Auto-Encoders (VAE) have become very popular techniques to perform inference and learning in latent variable models as they allow us to leverage the rich representational power of neural networks to obtain flexible approximations of the posterior of latent variables as well as tight evidence lower bounds (ELBO). Combined with stochastic variational inference, this provides a methodology scaling to large datasets. However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest. While the use of Markov chain Monte Carlo (MCMC) techniques such as Hamiltonian Monte Carlo (HMC) has been previously suggested to achieve this [23, 26], the proposed methods require specifying reverse kernels which have a large impact on performance. Additionally, the resulting unbiased estimator of the ELBO for most MCMC kernels is typically not amenable to the reparameterization trick. We show here how to optimally select reverse kernels in this setting and, by building upon Hamiltonian Importance Sampling (HIS) [17], we obtain a scheme that provides low-variance unbiased estimators of the ELBO and its gradients using the reparameterization trick. This allows us to develop a Hamiltonian Variational Auto-Encoder (HVAE). This method can be re-interpreted as a target-informed normalizing flow [20] which, within our context, only requires a few evaluations of the gradient of the sampled likelihood and trivial Jacobian calculations at each iteration.
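The deterministic Hamiltonian dynamics underlying HIS and HVAE reduce in practice to leapfrog integration, which is volume-preserving and time-reversible; these are the properties that keep the Jacobian calculations trivial and make the reparameterization trick applicable. A minimal sketch on a standard-Gaussian target (the function names, step size, and target here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def leapfrog(x, p, eps, grad_U, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for potential U."""
    p = p - 0.5 * eps * grad_U(x)       # initial half-step kick
    for _ in range(n_steps - 1):
        x = x + eps * p                 # full-step drift
        p = p - eps * grad_U(x)         # full-step kick
    x = x + eps * p
    p = p - 0.5 * eps * grad_U(x)       # final half-step kick
    return x, p

grad_U = lambda x: x                    # gradient of U(x) = x**2 / 2 (standard Gaussian)
x0, p0 = 1.0, 0.5
x1, p1 = leapfrog(x0, p0, eps=0.1, grad_U=grad_U, n_steps=10)
H0 = 0.5 * x0 ** 2 + 0.5 * p0 ** 2     # Hamiltonian before integration
H1 = 0.5 * x1 ** 2 + 0.5 * p1 ** 2     # ... and after: approximately conserved
xb, pb = leapfrog(x1, -p1, eps=0.1, grad_U=grad_U, n_steps=10)  # reverse run
```

Running the integrator backwards with negated momentum recovers the starting point, which is the time-reversibility that unbiased ELBO estimators of this kind rely on.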
The best Kindles
Amazon's eReaders are best-in-class and offer a legitimate opportunity for distraction-free reading. The right Kindle will reignite your love of reading. Using a Kindle may seem unnecessary in a world where reading books, articles, and any other text on a phone or tablet is easy. Carrying around a dedicated mono-tasking device will add weight to your load, and it's another gadget to keep track of and charge. Yet Kindles remain popular because they only have one job and do it very well: let you carry and consume the stories that captivate you. A Kindle's e-ink screen stays readable in direct sunlight, unlike the glossy LCD displays used on phones and tablets.
- Information Technology > Artificial Intelligence (0.69)
- Information Technology > Hardware (0.67)
- Information Technology > Communications > Mobile (0.46)
Inferring Networks From Random Walk-Based Node Similarities
Digital presence in the world of online social media entails significant privacy risks. In this work we consider a privacy threat to a social network in which an attacker has access to a subset of random walk-based node similarities, such as effective resistances (i.e., commute times) or personalized PageRank scores. Using these similarities, the attacker seeks to infer as much information as possible about the network, including unknown pairwise node similarities and edges. For the effective resistance metric, we show that with just a small subset of measurements, one can learn a large fraction of edges in a social network. We also show that it is possible to learn a graph which accurately matches the underlying network on all other effective resistances.
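Effective resistance has a simple closed form via the pseudoinverse of the graph Laplacian: R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v). The sketch below computes it for a small graph (this is a standard formula, not the paper's inference algorithm; the function name is ours):

```python
import numpy as np

def effective_resistance(adj, u, v):
    """Effective resistance between nodes u and v via the Laplacian pseudoinverse."""
    L = np.diag(adj.sum(axis=1)) - adj  # graph Laplacian L = D - A
    L_pinv = np.linalg.pinv(L)
    e = np.zeros(len(adj))
    e[u], e[v] = 1.0, -1.0
    return float(e @ L_pinv @ e)

# Path graph 0-1-2 with unit edges: two unit resistors in series, so R(0, 2) = 2.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
r = effective_resistance(A, 0, 2)
```

The attacker model in the abstract assumes access to a subset of such values; the inverse problem is to recover the adjacency structure that produced them.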
- North America > United States > Montana (0.05)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- North America > United States > California > Los Angeles County > Beverly Hills (0.04)
- Media > News (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.31)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger
We formulate the problem of defogging as state estimation and future state prediction from previous, partial observations in the context of real-time strategy games. We propose to employ encoder-decoder neural networks for this task, and introduce proxy tasks and baselines for evaluation to assess their ability to capture basic game rules and high-level dynamics. By combining convolutional neural networks and recurrent networks, we exploit spatial and sequential correlations and train well-performing models on a large dataset of human games of StarCraft: Brood War. Finally, we demonstrate the relevance of our models to downstream tasks by applying them for enemy unit prediction in a state-of-the-art, rule-based StarCraft bot. We observe improvements in win rates against several strong community bots.
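One concrete flavor of baseline such an evaluation might include is a "last seen" predictor that carries forward the most recent observation of each grid cell through the fog of war. The sketch below is a hypothetical illustration of that idea, not one of the paper's actual baselines:

```python
import numpy as np

def last_seen_baseline(frames, masks):
    """Predict the current full grid by remembering the last value seen per cell.

    frames: (T, H, W) true unit-count grids; masks: (T, H, W) visibility (1 = visible).
    """
    pred = np.zeros_like(frames[0], dtype=float)
    for frame, mask in zip(frames, masks):
        pred = np.where(mask > 0, frame, pred)  # overwrite visible cells, keep stale memory
    return pred

frames = np.array([[[1, 0], [0, 2]],
                   [[1, 3], [0, 2]]])
masks = np.array([[[1, 1], [0, 0]],   # at t=0 only the top row is visible
                  [[0, 1], [1, 0]]])  # at t=1 the anti-diagonal is visible
pred = last_seen_baseline(frames, masks)
```

Cells never observed stay at zero, which is exactly the failure mode a learned defogger is meant to improve on.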
When do random forests fail?
Random forests are learning algorithms that build large collections of random trees and make predictions by averaging the individual tree predictions. In this paper, we consider various tree constructions and examine how the choice of parameters affects the generalization error of the resulting random forests as the sample size goes to infinity. We show that subsampling of data points during the tree construction phase is important: forests can become inconsistent with either no subsampling or too severe subsampling. As a consequence, even highly randomized trees can lead to inconsistent forests if no subsampling is used, which implies that some commonly used setups for random forests can be inconsistent. As a second consequence, we show that trees that perform well in nearest-neighbor search can be a poor choice for random forests.
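The link between trees and nearest-neighbor predictors can be illustrated with a toy "forest" of 1-NN regressors, each fit on a random subsample of the data and then averaged. The construction below is our own illustration of the subsampling mechanism, not the paper's tree construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def subsampled_nn_forest(X, y, x_query, n_trees=200, subsample=0.5):
    """Average 1-nearest-neighbor predictions, each fit on a random subsample."""
    n = len(X)
    k = max(1, int(subsample * n))
    preds = []
    for _ in range(n_trees):
        idx = rng.choice(n, size=k, replace=False)   # subsample without replacement
        j = idx[np.argmin(np.abs(X[idx] - x_query))] # 1-NN within the subsample
        preds.append(y[j])
    return float(np.mean(preds))

X = np.linspace(0.0, 1.0, 50)
y = X ** 2
est = subsampled_nn_forest(X, y, 0.5)  # estimate of f(0.5) = 0.25
```

With no subsampling (subsample=1.0) every "tree" returns the same nearest neighbor and the averaging does nothing, which mirrors the inconsistency mechanism the abstract describes.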
7 Kindle settings you should change
Make sure your e-reader is set up exactly the way you want it. There are plenty of ways to tweak how your Kindle works. All of the Amazon Kindle models are intentionally designed to be straightforward to use. Grab your Kindle, tap the power button, and you're back reading from the place you left off (it's almost as simple as opening a real book).
- Information Technology > Artificial Intelligence (0.70)
- Information Technology > Communications > Mobile (0.53)
The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network
An important factor contributing to the success of deep learning has been the remarkable ability to optimize large neural networks using simple first-order optimization algorithms like stochastic gradient descent. While the efficiency of such methods depends crucially on the local curvature of the loss surface, very little is actually known about how this geometry depends on network architecture and hyperparameters. In this work, we extend a recently-developed framework for studying spectra of nonlinear random matrices to characterize an important measure of curvature, namely the eigenvalues of the Fisher information matrix. We focus on a single-hidden-layer neural network with Gaussian data and weights and provide an exact expression for the spectrum in the limit of infinite width. We find that linear networks suffer worse conditioning than nonlinear networks and that nonlinear networks are generically non-degenerate. We also predict and demonstrate empirically that by adjusting the nonlinearity, the spectrum can be tuned so as to improve the efficiency of first-order optimization methods.
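For squared-error loss with a Gaussian output model, the Fisher information matrix coincides with the Gram matrix of per-example gradients, so its spectrum is nonnegative. The sketch below computes this empirical spectrum for a small single-hidden-layer ReLU network with Gaussian data and weights; the dimensions and names are illustrative assumptions, and this is a finite-width empirical check rather than the paper's infinite-width analysis:

```python
import numpy as np

rng = np.random.default_rng(2)

n, d, m = 200, 10, 50                        # samples, input dim, hidden width
X = rng.normal(size=(n, d))                  # Gaussian data
W = rng.normal(size=(d, m)) / np.sqrt(d)     # Gaussian first-layer weights
v = rng.normal(size=m) / np.sqrt(m)          # Gaussian second-layer weights

# For f(x) = v^T relu(W^T x), the gradient of f wrt W at example x_i is
# outer(x_i, v * 1[W^T x_i > 0]); stack these as rows of G.
D = (X @ W > 0).astype(float)                # ReLU activation patterns
G = np.einsum('ni,nj->nij', X, v * D).reshape(n, -1)
F = G.T @ G / n                              # empirical Fisher (squared loss)
eig = np.linalg.eigvalsh(F)                  # spectrum: real and nonnegative
```

Since only n = 200 per-example gradients are averaged against d*m = 500 parameters, the empirical Fisher is also rank-deficient here, a degeneracy the infinite-width theory makes precise.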