California's supervolcano, which has the power to bury Los Angeles in more than 3,000 feet of ash, is showing signs of activity. Scientists at the California Institute of Technology (Caltech) identified over 2,000 earthquakes rumbling throughout the Long Valley Caldera in recent years. The team conducted a new investigation to determine whether the seismic activity was a sign of an impending eruption or, instead, of a decreasing risk of one. Caltech researchers created detailed underground images of the caldera, finding that the recent seismic activity results from fluids and gases released as the area cools off and settles down. Study author Zhongwen Zhan said: 'We don't think the region is gearing up for another supervolcanic eruption, but the cooling process may release enough gas and liquid to cause earthquakes and small eruptions. 'For example, in May 1980, there were four magnitude 6 earthquakes in the region alone.'
Most canals that cut through ninth-century Baghdad are a muddy brown, thick with the silt churned up by the poles of passing punts. But there's one inlet in the city where the water is stained red, a persistent crimson cloud that doesn't shift with the stream's eddies. Follow the red-running gutters through the side streets shouldered by clay-brick houses, and you'll find not an abattoir but a dye factory. Between lines of fabric hung up to dry, workers sweat as they stir cloth in great pots of coloured water, occasionally stopping to mop their brows. After a palace burglary goes wrong, you are forced to flee your village and join the Hidden Ones, taking up their fight against the Order, a secretive club worming its way into Baghdad's upper echelons of power.
Ahead of dropping the paid Phantom Liberty expansion next week, CD Projekt Red will release a major update for Cyberpunk 2077 on September 21. The patch will overhaul a lot of the game's systems, switch up the skill trees and make other sweeping changes. There should be a significant visual upgrade for many PC players as well. As of Thursday, Cyberpunk 2077 will be the first game to support DLSS 3.5, the latest version of NVIDIA's upscaling tech. DLSS 3.5 has a feature called Ray Reconstruction, which uses AI to upgrade the ray-traced elements of a game.
Nvidia created its Deep Learning Super Sampling (DLSS) technology to help improve performance in games when you turn on ultra-strenuous ray-traced visuals, all the way back when both ray tracing and DLSS were introduced alongside the GeForce RTX 20-series. DLSS 2 greatly improved the visual quality of upscaled images, while DLSS 3 added AI-generated frames to boost performance even more. Now, Nvidia returns to DLSS's ray tracing roots with DLSS 3.5, introduced today at Gamescom in Germany. While DLSS 3 boosted performance, DLSS 3.5's "Ray Reconstruction" aims to improve the visual quality of upscaled, ray-traced games, specifically by applying Nvidia's AI models to a critical process called "denoising." Ray tracing is limited by the number of rays a GPU can "cast" into a given scene to create the data needed for realistic lighting effects.
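To see why denoising is so critical, consider what happens when only a handful of rays are traced per pixel: the Monte Carlo estimate of each pixel's brightness comes out grainy, and a filter has to smooth it. The toy sketch below uses a plain Gaussian blur as a stand-in for the AI denoiser; it is not NVIDIA's Ray Reconstruction, and all names in it are invented for illustration.

```python
# Toy illustration of why ray-traced images need denoising. The Gaussian
# blur below is a crude stand-in for a real denoiser; names are made up.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def render(true_radiance, rays_per_pixel):
    """Monte Carlo estimate of each pixel: average a few noisy ray samples."""
    h, w = true_radiance.shape
    samples = rng.normal(loc=true_radiance, scale=0.3,
                         size=(rays_per_pixel, h, w))
    return samples.mean(axis=0)

# A simple synthetic scene: a bright square on a dark background.
scene = np.zeros((64, 64))
scene[16:48, 16:48] = 1.0

noisy = render(scene, rays_per_pixel=2)       # few rays -> grainy image
denoised = gaussian_filter(noisy, sigma=1.5)  # stand-in for the denoiser

for name, img in [("noisy", noisy), ("denoised", denoised)]:
    print(f"{name}: mean abs error = {np.abs(img - scene).mean():.3f}")
```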
Some redditors seem very excited about a new World of Warcraft feature called Glorbo, which some believe will "make a huge impact on the game." Their palpable enthusiasm for Glorbo caught the attention of a blog named The Portal, which publishes "gaming content powered by Z League," an app that aims to bring gamers together. The Portal appears to be using AI to scrape Reddit posts and turn them into content. Redditor u/kaefer_kriegerin noticed that The Portal was seemingly turning discussions from some gaming subreddits into blog posts. They decided to try to trick the content farm into covering a fake WoW feature. The ruse was a success.
We investigate two novel mixed robust/average-case submodular data partitioning problems that we collectively call Submodular Partitioning. These problems generalize purely robust instances of the problem, namely max-min submodular fair allocation (SFA) [12] and min-max submodular load balancing (SLB) [25], as well as average-case instances, namely the submodular welfare problem (SWP) [26] and submodular multiway partition (SMP) [5]. While the robust versions have been studied in the theory community [11, 12, 16, 25, 26], existing work has focused on tight approximation guarantees, and the resulting algorithms are generally not scalable to large real-world applications. This is in contrast to the average case, where most of the algorithms are scalable. In the present paper, we bridge this gap by proposing several new algorithms (including greedy, majorization-minimization, minorization-maximization, and relaxation algorithms) that not only scale to large datasets but also achieve theoretical approximation guarantees comparable to the state of the art. We moreover provide new scalable algorithms that apply to additive combinations of the robust and average-case objectives. We show that these problems have many applications in machine learning (ML), including data partitioning and load balancing for distributed ML, data clustering, and image segmentation. We empirically demonstrate the efficacy of our algorithms on real-world problems involving data partitioning for distributed optimization (of convex and deep neural network objectives), and also purely unsupervised image segmentation.
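As a concrete illustration of the max-min (fair allocation) flavor, here is a minimal greedy sketch under a toy coverage objective: repeatedly help the worst-off block with the item that gains it the most. The objective and the greedy rule are illustrative assumptions, not the paper's algorithms or guarantees.

```python
# Minimal greedy sketch for max-min submodular fair allocation (SFA):
# assign each item to the block whose objective is currently smallest,
# choosing the item with the largest marginal gain for that block.
# The coverage objective below is a toy stand-in, not from the paper.

def coverage(block_items, features):
    """Monotone submodular toy objective: number of distinct features covered."""
    covered = set()
    for i in block_items:
        covered |= features[i]
    return len(covered)

def greedy_fair_partition(items, features, m):
    blocks = [set() for _ in range(m)]
    remaining = set(items)
    while remaining:
        # Help the currently worst-off block first (the max-min heuristic).
        j = min(range(m), key=lambda b: coverage(blocks[b], features))
        base = coverage(blocks[j], features)
        best = max(remaining,
                   key=lambda i: coverage(blocks[j] | {i}, features) - base)
        blocks[j].add(best)
        remaining.remove(best)
    return blocks

features = {0: {"a", "b"}, 1: {"b"}, 2: {"c", "d"}, 3: {"d"}, 4: {"e"}}
blocks = greedy_fair_partition(list(features), features, m=2)
print(blocks, [coverage(b, features) for b in blocks])
```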
We study contextual bandits with budget and time constraints, referred to as constrained contextual bandits. The time and budget constraints significantly complicate the exploration and exploitation tradeoff because they introduce complex coupling among contexts over time. To gain insight, we first study unit-cost systems with known context distribution. When the expected rewards are known, we develop an approximation of the oracle, referred to as Adaptive-Linear-Programming (ALP), which achieves near-optimality and only requires the ordering of expected rewards. With these highly desirable features, we then combine ALP with the upper-confidence-bound (UCB) method in the general case where the expected rewards are unknown a priori. We show that the proposed UCB-ALP algorithm achieves logarithmic regret except in certain boundary cases. Further, we design algorithms and obtain similar regret bounds for more general systems with unknown context distribution and heterogeneous costs. To the best of our knowledge, this is the first work to show how to achieve logarithmic regret in constrained contextual bandits. Moreover, this work also sheds light on the study of computationally efficient algorithms for general constrained contextual bandits.
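To make the ALP idea concrete in the unit-cost case with a known context distribution: rank contexts by expected reward and spend the average remaining budget per round on the best ones, randomizing on the boundary context. The sketch below is a simplified reading of that scheme; the function name and the two-context example are invented for illustration.

```python
# Sketch of the Adaptive-Linear-Programming (ALP) idea for unit costs:
# with remaining budget b and remaining time t, act with probability 1 in
# the highest-reward contexts and fractionally in the boundary context so
# the expected per-round spend equals b / t. Toy code, details simplified.

def alp_probabilities(context_probs, rewards, avg_budget):
    """Return per-context action probabilities for one round.

    context_probs[j]: probability context j arrives (sums to 1).
    rewards[j]: expected reward of acting in context j.
    avg_budget: remaining budget / remaining time, in [0, 1] for unit costs.
    """
    order = sorted(range(len(rewards)), key=lambda j: -rewards[j])
    p = [0.0] * len(rewards)
    left = avg_budget
    for j in order:  # fill from the most rewarding context down
        take = min(1.0, left / context_probs[j]) if context_probs[j] > 0 else 0.0
        p[j] = take
        left -= take * context_probs[j]
        if left <= 1e-12:
            break
    return p

# Two contexts, equally likely; budget allows acting in ~60% of rounds.
print(alp_probabilities([0.5, 0.5], [0.9, 0.4], avg_budget=0.6))
# -> [1.0, 0.2]: always act in the high-reward context, 20% in the other.
```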
We propose a robust portfolio optimization approach based on quantile statistics. The proposed method is robust to extreme events in asset returns and accommodates large portfolios under limited historical data. Specifically, we show that the risk of the estimated portfolio converges to the oracle optimal risk at a parametric rate under weakly dependent asset returns. The theory does not rely on higher-order moment assumptions, thus allowing for heavy-tailed asset returns. Moreover, the rate of convergence shows that the size of the portfolio under management may scale exponentially with the sample size of the historical data. The empirical effectiveness of the proposed method is demonstrated on both synthetic and real stock data. Our work extends existing results by achieving robustness in high dimensions and by allowing serial dependence.
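As a rough sketch of how quantile statistics can replace moment-based estimates, the code below uses the median and the median absolute deviation (plus a quadrant/sign correlation) as heavy-tail-insensitive stand-ins for the mean and covariance, then computes minimum-variance-style weights. This is an illustrative simplification, not the paper's estimator.

```python
# Illustrative sketch of quantile-based robust portfolio weights: replace
# the sample mean/covariance with quantile statistics (median, MAD, and a
# quadrant/sign correlation), which are insensitive to heavy tails. This
# is a simplified stand-in for the paper's method, not a reproduction.
import numpy as np

def robust_moments(returns):
    """returns: (n_days, n_assets). Median location, MAD-based scatter."""
    med = np.median(returns, axis=0)
    mad = np.median(np.abs(returns - med), axis=0)
    scale = 1.4826 * mad                  # MAD -> std under normality
    s = np.sign(returns - med)            # quadrant (sign) correlation
    corr = s.T @ s / len(returns)
    cov = corr * np.outer(scale, scale)   # rescale to a covariance proxy
    return med, cov

def min_risk_weights(cov):
    """Minimum-variance weights subject to weights summing to 1."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov + 1e-8 * np.eye(len(ones)), ones)
    return w / w.sum()

rng = np.random.default_rng(1)
rets = rng.standard_t(df=3, size=(500, 4)) * 0.01   # heavy-tailed returns
_, cov = robust_moments(rets)
print(min_risk_weights(cov).round(3))
```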
Scalable and effective exploration remains a key challenge in reinforcement learning (RL). While there are methods with optimality guarantees in the setting of discrete state and action spaces, these methods cannot be applied in high-dimensional deep RL scenarios. As such, most contemporary RL relies on simple heuristics such as ε-greedy exploration or adding Gaussian noise to the controls. This paper introduces Variational Information Maximizing Exploration (VIME), an exploration strategy based on maximizing information gain about the agent's belief of environment dynamics. We propose a practical implementation, using variational inference in Bayesian neural networks, which efficiently handles continuous state and action spaces. VIME modifies the MDP reward function and can be applied with several different underlying RL algorithms. We demonstrate that VIME achieves significantly better performance than heuristic exploration methods across a variety of continuous control tasks and algorithms, including tasks with very sparse rewards.
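A hedged sketch of the reward shaping VIME performs: the intrinsic bonus is the KL divergence between the variational posterior over dynamics-model parameters before and after observing a transition. The tiny mean-field Bayesian linear model below is an invented stand-in for the paper's Bayesian neural network; all names are illustrative.

```python
# Sketch of VIME-style reward shaping: add an intrinsic bonus proportional
# to the information gain about dynamics-model parameters, approximated by
# the KL between the variational posterior before and after one update.
import numpy as np

def gaussian_kl(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians."""
    return 0.5 * np.sum(np.log(var_p / var_q)
                        + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

class BayesianLinearDynamics:
    """Mean-field Gaussian model of s' ~ N(w . x, noise); each weight is
    updated independently -- a crude variational approximation, but enough
    to illustrate the information-gain bonus."""
    def __init__(self, dim, noise=0.1):
        self.mu = np.zeros(dim)
        self.var = np.ones(dim)
        self.noise = noise

    def update(self, x, y):
        # Diagonal-Gaussian posterior update for one observed transition.
        prec = 1.0 / self.var + x ** 2 / self.noise
        new_var = 1.0 / prec
        self.mu = new_var * (self.mu / self.var + x * y / self.noise)
        self.var = new_var

def shaped_reward(model, x, y, extrinsic, eta=0.1):
    mu0, var0 = model.mu.copy(), model.var.copy()
    model.update(x, y)
    bonus = gaussian_kl(model.mu, model.var, mu0, var0)  # information gain
    return extrinsic + eta * bonus

model = BayesianLinearDynamics(dim=3)
x, next_state = np.array([1.0, 0.5, -0.2]), 0.7
print(shaped_reward(model, x, next_state, extrinsic=0.0))
```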