
Learning Abstract Options

Neural Information Processing Systems

Building systems that autonomously create temporal abstractions from data is a key challenge in scaling learning and planning in reinforcement learning. One popular approach to this challenge is the options framework (Sutton et al., 1999). However, only recently did Bacon et al. (2017) derive a policy gradient theorem for online, end-to-end learning of general-purpose options. In this work, we extend that line of work, which focuses on learning a two-level hierarchy of options and primitive actions, to enable learning simultaneously at multiple resolutions in time. We achieve this by considering an arbitrarily deep hierarchy of options in which high-level, temporally extended options are composed of lower-level options with finer temporal resolution. We extend the results of Bacon et al. (2017) and derive policy gradient theorems for a deep hierarchy of options. Our proposed hierarchical option-critic architecture is capable of learning internal policies, termination conditions, and hierarchical compositions over options without the need for any intrinsic rewards or subgoals. Our empirical results in both discrete and continuous environments demonstrate the efficiency of our framework.
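The deep hierarchy described above can be sketched as a recursive call structure. The toy example below is our own illustration, not the paper's architecture: policies and termination rules are hand-written rather than learned, and the environment is a trivial integer chain. It shows how a high-level option repeatedly executes lower-level options until its own termination condition fires.

```python
import random

class Option:
    """A temporally extended action: an internal policy over sub-options
    (or primitive actions at the lowest level) plus a termination rule."""
    def __init__(self, policy, termination_prob):
        self.policy = policy                      # state -> sub-option / action
        self.termination_prob = termination_prob  # state -> P(terminate)

def run(option, state, step, depth, rng):
    """Execute `option` from `state` in an environment with transition `step`,
    recursing through the hierarchy until primitive actions at depth 0."""
    if depth == 0:
        return step(state, option)     # at the bottom, `option` is an action
    while True:
        sub = option.policy(state)
        state = run(sub, state, step, depth - 1, rng)
        if rng.random() < option.termination_prob(state):
            return state

# Toy chain environment: states are integers, the only primitive action is +1.
step = lambda s, a: s + a
low = Option(policy=lambda s: 1, termination_prob=lambda s: 0.5)
high = Option(policy=lambda s: low,
              termination_prob=lambda s: 1.0 if s >= 5 else 0.0)
final = run(high, 0, step, depth=2, rng=random.Random(0))
```

In the actual hierarchical option-critic, each `policy` and `termination_prob` would be a parameterized function trained with the derived policy gradient theorems rather than fixed by hand.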


FishNet: A Versatile Backbone for Image, Region, and Pixel Level Prediction

Neural Information Processing Systems

The basic principles for designing convolutional neural network (CNN) structures to predict objects at different levels, e.g., image level, region level, and pixel level, are diverging. Network structures designed specifically for image classification are generally used as the default backbone for other tasks, including detection and segmentation, but few backbones are designed to unify the advantages of networks built for pixel-level or region-level prediction tasks, which may require very deep features at high resolution. Towards this goal, we design a fish-like network, called FishNet. In FishNet, the information of all resolutions is preserved and refined for the final task. In addition, we observe that existing works still cannot directly propagate gradient information from deep layers to shallow layers; our design better handles this problem. Extensive experiments demonstrate the remarkable performance of FishNet. In particular, on ImageNet-1k, FishNet surpasses the accuracy of DenseNet and ResNet with fewer parameters. FishNet was applied as one of the modules in the winning entry of the COCO Detection 2018 challenge.


Joint Sub-bands Learning with Clique Structures for Wavelet Domain Super-Resolution

Neural Information Processing Systems

Convolutional neural networks (CNNs) have recently achieved great success in single-image super-resolution (SISR). However, these methods tend to produce over-smoothed outputs and miss some textural details. To address these problems, we propose the Super-Resolution CliqueNet (SRCliqueNet), which reconstructs the high-resolution (HR) image with better textural details in the wavelet domain. SRCliqueNet first extracts a set of feature maps from the low-resolution (LR) image using a group of clique blocks, then sends the feature maps to a clique up-sampling module to reconstruct the HR image. The clique up-sampling module consists of four sub-nets that predict the high-resolution wavelet coefficients of the four sub-bands. Because we consider the edge feature properties of the four sub-bands, the sub-nets are connected to one another so that they can learn the coefficients of all four sub-bands jointly. Finally, we apply the inverse discrete wavelet transform (IDWT) to the output of the four sub-nets at the end of the clique up-sampling module to increase the resolution and reconstruct the HR image. Extensive quantitative and qualitative experiments on benchmark datasets show that our method outperforms state-of-the-art methods.
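The final step, reconstructing a double-resolution image from four sub-bands via the IDWT, can be made concrete with a one-level Haar transform. This is our own minimal NumPy sketch, not the paper's implementation: sub-band naming conventions vary between libraries, and in SRCliqueNet the four coefficient maps would come from the sub-nets' predictions rather than a forward transform.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: split an image into LL, LH, HL, HH."""
    a = x[0::2, 0::2]; b = x[0::2, 1::2]
    c = x[1::2, 0::2]; d = x[1::2, 1::2]
    ll = (a + b + c + d) / 2
    lh = (a - b + c - d) / 2
    hl = (a + b - c - d) / 2
    hh = (a - b - c + d) / 2
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: four n x n sub-bands -> one 2n x 2n image."""
    n = ll.shape[0]
    x = np.empty((2 * n, 2 * n))
    x[0::2, 0::2] = (ll + lh + hl + hh) / 2
    x[0::2, 1::2] = (ll - lh + hl - hh) / 2
    x[1::2, 0::2] = (ll + lh - hl - hh) / 2
    x[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return x

img = np.random.default_rng(0).random((8, 8))
recon = haar_idwt2(*haar_dwt2(img))   # four 4x4 sub-bands -> one 8x8 image
```

The round trip is exact: the IDWT doubles the resolution in each dimension and, given correct coefficients, perfectly reconstructs the image, which is why the network only needs to predict the four sub-bands.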


Xbox Ally X gets smoother gameplay with AutoSR update

PCWorld

PCWorld reports that Microsoft's Automatic Super Resolution (AutoSR) technology is coming to the Xbox Ally X handheld console to enhance gaming performance. AutoSR uses AI-powered upscaling: games are rendered at a lower resolution and then upscaled, allowing less powerful GPUs to achieve higher frame rates. A public preview of AutoSR for the AMD-powered Ally X is expected in April, promising smoother gameplay for owners. Microsoft snuck in a reference to the technology during a presentation at the Game Developers Conference, where the company pitched features from its upcoming Project Helix console as well as AI enhancements coming to Microsoft's DirectX API. According to Microsoft, AutoSR was originally designed for use with the Qualcomm Snapdragon X1 or X2 Elite processors.
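The render-low-then-upscale idea behind AutoSR can be illustrated with the simplest possible stand-in. AutoSR itself uses a learned AI model to predict detail; the nearest-neighbour resize below is only a hypothetical sketch of why the approach saves GPU work (at half resolution in each dimension, the GPU shades four times fewer pixels per frame).

```python
import numpy as np

def upscale_nearest(frame, factor=2):
    """Naive nearest-neighbour upscale: repeat each pixel factor x factor.
    AutoSR replaces this step with an AI model that predicts missing detail."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

low_res = np.arange(4).reshape(2, 2)   # a frame "rendered" at 2x2
high_res = upscale_nearest(low_res)    # displayed at 4x4
```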


Apple MacBook Pro Review (M5 Max, 16-inch): The Fastest MacBook Yet

WIRED

A more exciting MacBook Pro is waiting in the wings, but the M5 Max shows the continued success of Apple Silicon. The M5 Max is a monster performer. Gaming is surprisingly smooth, and on-device AI speeds up. The display, keyboard, ports, and speakers remain top-of-class. The MacBook Pro is in its awkward era.


Carvalho probe looms over LAUSD meeting as labor talks, charter renewal demand attention

Los Angeles Times

Supporters of the Green Dot charter at Locke High intently watch the debate over the school's future. On Tuesday, the board narrowly voted to close the school at the end of the year.


Random Forests as Statistical Procedures: Design, Variance, and Dependence

O'Connell, Nathaniel S.

arXiv.org Machine Learning

We develop a finite-sample, design-based theory for random forests in which each tree is a randomized conditional predictor acting on fixed covariates and the forest is their Monte Carlo average. An exact variance identity separates Monte Carlo error from a covariance floor that persists under infinite aggregation. The floor arises through two mechanisms: observation reuse, where the same training outcomes receive weight across multiple trees, and partition alignment, where independently generated trees discover similar conditional prediction rules. We prove the floor is strictly positive under minimal conditions and show that alignment persists even when sample splitting eliminates observation overlap entirely. We introduce procedure-aligned synthetic resampling (PASR) to estimate the covariance floor, decomposing the total prediction uncertainty of a deployed forest into interpretable components. For continuous outcomes, resulting prediction intervals achieve nominal coverage with a theoretically guaranteed conservative bias direction. For classification forests, the PASR estimator is asymptotically unbiased, providing the first pointwise confidence intervals for predicted conditional probabilities from a deployed forest. Nominal coverage is maintained across a range of design configurations for both outcome types, including high-dimensional settings. The underlying theory extends to any tree-based ensemble with an exchangeable tree-generating mechanism.
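The exact variance identity can be illustrated numerically. The snippet below is a hedged sketch using a synthetic equicorrelated model, not the paper's PASR estimator: B exchangeable "tree predictions" share a common component whose variance plays the role of the covariance floor, so the variance of their average converges to that floor, rather than to zero, as B grows.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, cov_floor = 1.0, 0.3   # per-tree variance and between-tree covariance
n_reps = 20000
empirical = {}

for B in (1, 10, 100, 1000):
    # B exchangeable, equicorrelated "tree predictions": a shared component
    # (the covariance floor) plus independent Monte Carlo noise per tree.
    shared = rng.normal(0.0, np.sqrt(cov_floor), size=(n_reps, 1))
    noise = rng.normal(0.0, np.sqrt(sigma2 - cov_floor), size=(n_reps, B))
    empirical[B] = (shared + noise).mean(axis=1).var()

# Exact identity: Var(forest) = cov_floor + (sigma2 - cov_floor) / B.
# The Monte Carlo term vanishes as B grows; the covariance floor persists.
```

At B = 1 the empirical variance is close to sigma2 = 1.0; at B = 1000 it has collapsed onto the floor of 0.3, matching the identity's prediction that infinite aggregation removes Monte Carlo error but not the covariance floor.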