Forward Modeling for Partial Observation Strategy Games - A StarCraft Defogger
We formulate the problem of defogging as state estimation and future state prediction from previous, partial observations in the context of real-time strategy games. We propose to employ encoder-decoder neural networks for this task, and introduce proxy tasks and baselines for evaluation to assess their ability to capture basic game rules and high-level dynamics. By combining convolutional neural networks and recurrent networks, we exploit spatial and sequential correlations and train well-performing models on a large dataset of human games of StarCraft: Brood War. Finally, we demonstrate the relevance of our models to downstream tasks by applying them to enemy unit prediction in a state-of-the-art, rule-based StarCraft bot. We observe improvements in win rates against several strong community bots.
When do random forests fail?
Random forests are learning algorithms that build large collections of random trees and make predictions by averaging the individual tree predictions. In this paper, we consider various tree constructions and examine how the choice of parameters affects the generalization error of the resulting random forests as the sample size goes to infinity. We show that subsampling of data points during the tree construction phase is important: forests can become inconsistent with either no subsampling or overly severe subsampling. As a consequence, even highly randomized trees can lead to inconsistent forests if no subsampling is used, which implies that some of the commonly used setups for random forests can be inconsistent. As a second consequence, we show that trees that have good performance in nearest-neighbor search can be a poor choice for random forests.
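The subsampling effect discussed in the abstract can be illustrated with a toy forest of depth-1 regression trees ("stumps") in plain NumPy. This is an illustrative sketch, not the paper's exact construction: the `subsample` fraction below is the knob of interest, with `subsample=1.0` corresponding to no subsampling.

```python
import numpy as np

def fit_stump(X, y, rng):
    # Depth-1 regression tree: pick a random feature, try a few candidate
    # split points, keep the one with the smallest squared error.
    j = rng.randint(X.shape[1])
    best = None
    for t in rng.choice(X[:, j], size=min(8, len(y)), replace=False):
        left = X[:, j] <= t
        if left.all() or not left.any():
            continue
        pred = np.where(left, y[left].mean(), y[~left].mean())
        err = ((pred - y) ** 2).mean()
        if best is None or err < best[0]:
            best = (err, j, t, y[left].mean(), y[~left].mean())
    if best is None:                          # degenerate split: constant tree
        return lambda Z: np.full(len(Z), y.mean())
    _, j, t, lo, hi = best
    return lambda Z: np.where(Z[:, j] <= t, lo, hi)

def fit_forest(X, y, n_trees=100, subsample=0.5, seed=0):
    # `subsample` is the fraction of data points each tree sees, drawn
    # without replacement before the tree is grown.
    rng = np.random.RandomState(seed)
    s = max(2, int(subsample * len(y)))
    trees = []
    for _ in range(n_trees):
        idx = rng.choice(len(y), size=s, replace=False)
        trees.append(fit_stump(X[idx], y[idx], rng))
    # Forest prediction: average of the individual tree predictions.
    return lambda Z: np.mean([tr(Z) for tr in trees], axis=0)

rng = np.random.RandomState(1)
X = rng.uniform(-1, 1, size=(400, 3))
y = np.sign(X[:, 0])                          # simple step-function target
predict = fit_forest(X, y, subsample=0.5)
p = predict(X)
```

Varying `subsample` between values near 0 and 1.0 is the regime the paper analyzes: both extremes can make the averaged predictor inconsistent.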
7 Kindle settings you should change
Make sure your e-reader is set up exactly the way you want it. There are plenty of ways to tweak how your Kindle works. All of the Amazon Kindle models are intentionally designed to be straightforward to use. Grab your Kindle, tap the power button, and you're back reading right where you left off (it's almost as simple as opening a real book).
- Information Technology > Artificial Intelligence (0.70)
- Information Technology > Communications > Mobile (0.53)
The Spectrum of the Fisher Information Matrix of a Single-Hidden-Layer Neural Network
An important factor contributing to the success of deep learning has been the remarkable ability to optimize large neural networks using simple first-order optimization algorithms like stochastic gradient descent. While the efficiency of such methods depends crucially on the local curvature of the loss surface, very little is actually known about how this geometry depends on network architecture and hyperparameters. In this work, we extend a recently-developed framework for studying spectra of nonlinear random matrices to characterize an important measure of curvature, namely the eigenvalues of the Fisher information matrix. We focus on a single-hidden-layer neural network with Gaussian data and weights and provide an exact expression for the spectrum in the limit of infinite width. We find that linear networks suffer worse conditioning than nonlinear networks and that nonlinear networks are generically non-degenerate. We also predict and demonstrate empirically that by adjusting the nonlinearity, the spectrum can be tuned so as to improve the efficiency of first-order optimization methods.
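As a rough numerical companion to this abstract, the empirical Fisher spectrum of a single-hidden-layer network f(x) = v·φ(Wx) under squared loss can be estimated by Monte Carlo from per-sample gradients. This is a toy-sized sketch under assumed conventions (Gaussian data and weights, F = E[g gᵀ] with g = df/dθ), not the paper's exact infinite-width expression.

```python
import numpy as np

def fisher_spectrum(phi, dphi, n0=20, n1=20, n_samples=1000, seed=0):
    # Empirical Fisher for f(x) = v . phi(W x) with squared loss:
    # F = E_x[g g^T], where g stacks the gradients w.r.t. W and v.
    rng = np.random.RandomState(seed)
    W = rng.randn(n1, n0) / np.sqrt(n0)
    v = rng.randn(n1) / np.sqrt(n1)
    d = n1 * n0 + n1                       # total number of parameters
    G = np.empty((n_samples, d))           # one per-sample gradient per row
    for s in range(n_samples):
        x = rng.randn(n0)
        z = W @ x
        g_W = np.outer(v * dphi(z), x)     # gradient w.r.t. W
        G[s] = np.concatenate([g_W.ravel(), phi(z)])  # phi(z): grad w.r.t. v
    return np.linalg.eigvalsh(G.T @ G / n_samples)    # ascending eigenvalues

# Compare a linear and a ReLU nonlinearity at the same (toy) sizes.
lin = fisher_spectrum(lambda z: z, lambda z: np.ones_like(z))
rel = fisher_spectrum(lambda z: np.maximum(z, 0.0),
                      lambda z: (z > 0).astype(float))
print("top eigenvalue (linear, relu):", lin[-1], rel[-1])
```

Inspecting the two spectra (e.g. the fraction of near-zero eigenvalues) gives a hands-on view of the conditioning differences between linear and nonlinear networks that the abstract describes.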
SPIDER: Near-Optimal Non-Convex Optimization via Stochastic Path-Integrated Differential Estimator
In this paper, we propose a new technique named \textit{Stochastic Path-Integrated Differential EstimatoR} (SPIDER), which can be used to track many deterministic quantities of interest with significantly reduced computational cost. Combining SPIDER with the method of normalized gradient descent, we propose SPIDER-SFO, which solves non-convex stochastic optimization problems using stochastic gradients only. We provide a few error-bound results on its convergence rates. Specifically, we prove that the SPIDER-SFO algorithm achieves a gradient computation cost of $\mathcal{O}\left( \min( n^{1/2} \epsilon^{-2}, \epsilon^{-3}) \right)$ to find an $\epsilon$-approximate first-order stationary point. In addition, we prove that SPIDER-SFO nearly matches the algorithmic lower bound for finding stationary points under the gradient Lipschitz assumption in the finite-sum setting.
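A minimal sketch of the SPIDER-SFO recursion on a toy finite-sum least-squares problem (illustrative NumPy code; the step size, epoch length, and batch size below are assumptions, not the paper's prescribed choices): at checkpoint iterations the full gradient is computed, in between the estimator is updated by stochastic gradient differences, and each step is a normalized gradient descent step.

```python
import numpy as np

def spider_sfo(avg_grad, x0, n, lr=0.05, q=20, batch=10, iters=400, seed=0):
    # avg_grad(x, idx): average gradient of f_i over the indices in idx.
    rng = np.random.RandomState(seed)
    x, x_prev, v = x0.copy(), x0.copy(), None
    for k in range(iters):
        if k % q == 0:
            v = avg_grad(x, np.arange(n))          # full-gradient checkpoint
        else:
            idx = rng.choice(n, size=batch)
            # Path-integrated difference: v keeps tracking grad f(x)
            # at the cost of two minibatch gradients per step.
            v = avg_grad(x, idx) - avg_grad(x_prev, idx) + v
        x_prev = x.copy()
        x = x - lr * v / (np.linalg.norm(v) + 1e-12)  # normalized GD step
    return x

# Toy finite-sum objective: f_i(x) = 0.5 * (a_i . x - b_i)^2.
rng = np.random.RandomState(1)
n, d = 200, 10
A = rng.randn(n, d)
x_star = rng.randn(d)
b = A @ x_star
g = lambda x, idx: A[idx].T @ (A[idx] @ x - b[idx]) / len(idx)
x_hat = spider_sfo(g, np.zeros(d), n)
```

The variance reduction comes from the difference term: between checkpoints, each update corrects the running estimator `v` using the same minibatch evaluated at consecutive iterates, so its error stays small over an epoch of length `q`.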
The world's oldest wild bird has a new grandchick
Biologists have been tracking Wisdom, the roughly 75-year-old Laysan albatross, since the 1950s. Albatross chicks are getting stronger. The U.S. Fish and Wildlife Service is shining a light on a new member of a famous feathered family--that of the world's oldest known breeding bird, a Laysan albatross called Wisdom. The agency posted a video on social media featuring a scruffy-looking hatchling seemingly yawning as it hangs out in the sand in close contact with a giant bird--presumably one of its parents.
- Europe (0.15)
- Oceania > United States > United States Minor Outlying Islands > Midway Islands (0.08)
- Pacific Ocean > North Pacific Ocean (0.05)
- (4 more...)
A Model for Learned Bloom Filters and Optimizing by Sandwiching
Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to determine a function that models the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, with the following outcomes: (1) we clarify what guarantees can and cannot be associated with such a structure; (2) we show how to estimate what size the learned function must achieve in order to obtain improved performance; (3) we provide a simple method, sandwiching, for optimizing learned Bloom filters; and (4) we propose a design and analysis approach for a learned Bloomier filter, based on our modeling approach.
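The sandwiching idea from item (3) can be sketched as follows (a hypothetical Python toy, with a stand-in `oracle` playing the role of the learned function; filter sizes and hash choices are illustrative): an initial Bloom filter screens queries before the learned function, and a backup filter catches the keys the learned function misses, so the combined structure has no false negatives.

```python
import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _hashes(self, item):
        # k independent hash positions derived from SHA-256.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for h in self._hashes(item):
            self.bits[h] = 1

    def __contains__(self, item):
        return all(self.bits[h] for h in self._hashes(item))

class SandwichedLearnedBloom:
    # Initial filter -> learned oracle -> backup filter.  Every key goes
    # into the initial filter; keys the oracle rejects also go into the
    # backup filter, so true keys can never be reported absent.
    def __init__(self, keys, oracle, m_initial, m_backup, k=3):
        self.oracle = oracle
        self.initial = BloomFilter(m_initial, k)
        self.backup = BloomFilter(m_backup, k)
        for key in keys:
            self.initial.add(key)
            if not oracle(key):               # oracle false negative
                self.backup.add(key)

    def __contains__(self, key):
        if key not in self.initial:           # cheap early rejection
            return False
        if self.oracle(key):
            return True
        return key in self.backup

keys = [f"key-{i}" for i in range(200)]
oracle = lambda s: s[-1] in "02468"           # stand-in "learned" predictor
slb = SandwichedLearnedBloom(keys, oracle, m_initial=4096, m_backup=2048)
```

Because every key is inserted into the initial filter and every oracle false negative into the backup filter, membership queries on true keys always return true; the overall false-positive rate is governed by the two filter sizes and the oracle's accuracy.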
Deep State Space Models for Unconditional Word Generation
Autoregressive feedback is considered a necessity for successful unconditional text generation using stochastic sequence models. However, such feedback is known to introduce systematic biases into the training process, and it obscures a principle of generation: committing to global information and forgetting local nuances. We show that a non-autoregressive deep state space model with a clear separation of global and local uncertainty can be built from only two ingredients: an independent noise source and a deterministic transition function. Recent advances in flow-based variational inference can be used to optimize an evidence lower bound without resorting to annealing, auxiliary losses or similar measures. The result is a highly interpretable generative model on par with comparable autoregressive models on the task of word generation.
Autonomous firefighting robot can drive straight into a 1,000-degree blaze
The tank-like vehicle is already being tested in South Korea. The robot sprays itself with water to stay cool and uses thermal cameras to see through smoke. Firefighters in South Korea will soon start deploying alongside a massive, six-wheeled, self-cooling autonomous robot that could help keep them safe. Hyundai recently revealed the new, driverless ground drone, built atop a chassis initially intended for military use and looking like something out of a sci-fi film.
- Asia > South Korea (0.84)
- North America > United States > New York (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- Asia > Middle East > Iran (0.74)
- Asia > Middle East > Saudi Arabia (0.28)
- North America > United States > Texas (0.04)
- (5 more...)
- Media > News (1.00)
- Information Technology (1.00)
- Government > Military > Cyberwarfare (0.93)
- Government > Regional Government > North America Government > United States Government (0.69)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles > Drones (0.47)