Most canals that cut through ninth-century Baghdad are a muddy brown, thick with the silt churned up by the poles of passing punts. But there's one inlet in the city where the water is stained red, a persistent crimson cloud that doesn't shift with the stream's eddies. Follow the red-running gutters through the sidestreets shouldered by clay-brick houses, and you'll find not an abattoir but a dye factory. Between lines of fabrics hung up to dry, workers sweat as they stir cloth in great pots of coloured water, occasionally stopping to mop their brows. After a palace burglary goes wrong, you are forced to flee your village and join the Hidden Ones, taking up their fight against the Order, a secretive club who are worming their way into Baghdad's upper echelons of power.
Nvidia created its Deep Learning Super Sampling (DLSS) technology to help improve performance in games when you turn on ultra-strenuous ray traced visuals, all the way back when both ray tracing and DLSS were introduced alongside the GeForce RTX 20-series. DLSS 2 greatly improved the visual quality of upscaled images, while DLSS 3 added AI-generated frames to boost performance even more. Now, Nvidia returns to DLSS's ray tracing roots with DLSS 3.5, introduced today at Gamescom in Germany. While DLSS 3 boosted performance, DLSS 3.5's "Ray Reconstruction" aims to improve the visual quality of upscaled, ray traced games, specifically by turning Nvidia's AI models loose on a critical process called "denoising." Ray tracing is limited by the number of rays a GPU can "cast" into a given scene to create the data needed for realistic lighting effects.
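To make the denoising step concrete, here is a toy sketch (an illustration only, not Nvidia's Ray Reconstruction): with only a few rays per pixel, the ray-traced estimate of each pixel's brightness is noisy, and the denoiser's job is to recover a clean image from those sparse samples. The scene, sample counts, and the simple box filter below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, RAYS_PER_PIXEL = 64, 64, 4          # few rays per pixel -> noisy estimate
# A toy "scene": a smooth horizontal brightness gradient.
true_radiance = np.linspace(0.0, 1.0, W)[None, :].repeat(H, axis=0)

# Monte Carlo estimate: average a handful of noisy ray samples per pixel.
samples = true_radiance[..., None] + rng.normal(0.0, 0.3, size=(H, W, RAYS_PER_PIXEL))
noisy = samples.mean(axis=-1)

# Stand-in denoiser: a plain 3x3 box filter. Real denoisers are far smarter,
# and DLSS 3.5's pitch is to replace hand-tuned filters with an AI model.
kernel = np.ones((3, 3)) / 9.0
padded = np.pad(noisy, 1, mode="edge")
denoised = sum(
    padded[i:i + H, j:j + W] * kernel[i, j] for i in range(3) for j in range(3)
)

print("mean error before denoising:", np.abs(noisy - true_radiance).mean())
print("mean error after denoising: ", np.abs(denoised - true_radiance).mean())
```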
Some redditors seem very excited about a new World of Warcraft feature called Glorbo, which some believe will "make a huge impact on the game." Their palpable enthusiasm for Glorbo caught the attention of a blog named The Portal, which publishes "gaming content powered by Z League," an app that aims to bring gamers together. The Portal appears to be using AI to scrape Reddit posts and turn them into content. Redditor u/kaefer_kriegerin noticed that The Portal was seemingly turning discussions from some gaming subreddits into blog posts. They decided to try and trick the content farm into covering a fake WoW feature. The ruse was a success.
As an important class of SNNs, recurrent spiking neural networks (RSNNs) possess great computational power. However, the practical application of RSNNs is severely limited by challenges in training. Biologically inspired unsupervised learning has limited capability in boosting the performance of RSNNs. On the other hand, existing backpropagation (BP) methods suffer from the high complexity of unfolding in time, vanishing and exploding gradients, and approximate differentiation of discontinuous spiking activities when applied to RSNNs. To enable supervised training of RSNNs under a well-defined loss function, we present a novel Spike-Train level RSNNs Backpropagation (ST-RSBP) algorithm for training deep RSNNs. The proposed ST-RSBP directly computes the gradient of a rate-coded loss function defined at the output layer of the network w.r.t. tunable parameters. The scalability of ST-RSBP is achieved by the proposed spike-train level computation, during which the temporal effects of the SNN are captured in both the forward and backward passes of BP. Our ST-RSBP algorithm can be broadly applied to RSNNs with a single recurrent layer or deep RSNNs with multiple feedforward and recurrent layers. On challenging speech and image datasets including TI46 [25], N-TIDIGITS [3], Fashion-MNIST [40] and MNIST, ST-RSBP is able to train SNNs with an accuracy surpassing that of the current state-of-the-art SNN BP algorithms and conventional non-spiking deep learning models.
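For context, the sketch below illustrates the conventional approach the abstract contrasts against, not ST-RSBP itself: a recurrent leaky integrate-and-fire layer unfolded over time and trained with a surrogate gradient through the discontinuous spike function. The network sizes, surrogate shape, and loss are assumptions chosen only to make the two pain points (unfolding in time, approximate spike derivatives) concrete.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Hard spike threshold with an approximate (surrogate) derivative."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 1.0).float()                          # discontinuous spike
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (1.0 + 10.0 * (v - 1.0) ** 2)   # smooth stand-in derivative
        return grad_out * surrogate

def run_rsnn(x, w_in, w_rec, decay=0.9):
    """x: (T, batch, n_in) input spike trains; returns output firing rates."""
    T, batch, _ = x.shape
    n_hidden = w_rec.shape[0]
    v = torch.zeros(batch, n_hidden)      # membrane potentials
    s = torch.zeros(batch, n_hidden)      # spikes from the previous step
    rate = torch.zeros(batch, n_hidden)
    for t in range(T):                    # network explicitly unfolded over T steps
        v = decay * v + x[t] @ w_in + s @ w_rec
        s = SpikeFn.apply(v)
        v = v * (1.0 - s)                 # reset after a spike
        rate = rate + s
    return rate / T

torch.manual_seed(0)
w_in = torch.randn(3, 5).requires_grad_()
w_rec = (0.1 * torch.randn(5, 5)).requires_grad_()
x = (torch.rand(20, 4, 3) < 0.5).float()                  # random input spike trains
target = torch.full((4, 5), 0.2)                          # desired output firing rates
loss = ((run_rsnn(x, w_in, w_rec) - target) ** 2).mean()  # rate-coded loss
loss.backward()                                           # gradients flow via the surrogate
print(loss.item(), w_rec.grad.abs().mean().item())
```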
Solving optimization problems with unknown parameters often requires learning a predictive model to predict the values of the unknown parameters and then solving the problem using these values. Recent work has shown that including the optimization problem as a layer in the model training pipeline results in predictions of the unobserved parameters that lead to higher decision quality. Unfortunately, this process comes at a large computational cost because the optimization problem must be solved and differentiated through in each training iteration; furthermore, it may also sometimes fail to improve solution quality due to non-smoothness issues that arise when training through a complex optimization layer. To address these shortcomings, we learn a low-dimensional surrogate model of a large optimization problem by representing the feasible space in terms of meta-variables, each of which is a linear combination of the original variables. By training a low-dimensional surrogate model end-to-end, and jointly with the predictive model, we achieve: i) a large reduction in training and inference time; and ii) improved performance by focusing attention on the more important variables in the optimization and learning in a smoother space. Empirically, we demonstrate these improvements on a non-convex adversary modeling task, a submodular recommendation task and a convex portfolio optimization task.
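A minimal sketch of the reparameterization idea follows; it is not the paper's implementation, and the toy inner "solver", the sigmoid feasible set, the problem sizes, and all names are assumptions for illustration. The decision vector is written as a linear map of a few meta-variables, and that map is trained end-to-end, jointly with the predictive model, against decision quality rather than prediction error alone.

```python
import torch

n_features, n_vars, n_meta = 8, 100, 5
torch.manual_seed(0)

predictor = torch.nn.Linear(n_features, n_vars)             # predicts unknown costs
P = torch.nn.Parameter(0.1 * torch.randn(n_vars, n_meta))   # meta-variable basis (surrogate)
opt = torch.optim.Adam(list(predictor.parameters()) + [P], lr=1e-2)

def decide(c_hat):
    """Tiny inner 'optimization layer': a few gradient steps over the meta-variables z.
    A real decision-focused pipeline would solve and differentiate through a full solver."""
    z = torch.zeros(c_hat.shape[0], n_meta, requires_grad=True)
    for _ in range(20):
        x = torch.sigmoid(z @ P.T)                # feasible box [0, 1]^n via sigmoid
        obj = (c_hat * x).sum()
        g, = torch.autograd.grad(obj, z, create_graph=True)
        z = z - 0.5 * g                           # keep the graph so P gets gradients
    return torch.sigmoid(z @ P.T)

features = torch.randn(32, n_features)
true_cost = torch.randn(32, n_vars)               # the unknown parameters (synthetic here)
for step in range(50):
    x = decide(predictor(features))
    decision_loss = (true_cost * x).sum(dim=1).mean()   # judge decisions on the true costs
    opt.zero_grad(); decision_loss.backward(); opt.step()
print("final decision loss:", decision_loss.item())
```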
The interplay between exploration and exploitation in competitive multi-agent learning is still far from being well understood. Motivated by this, we study smooth Q-learning, a prototypical learning model that explicitly captures the balance between game rewards and exploration costs. We show that Q-learning always converges to the unique quantal-response equilibrium (QRE), the standard solution concept for games under bounded rationality, in weighted zero-sum polymatrix games with heterogeneous learning agents using positive exploration rates. Complementing recent results about convergence in weighted potential games [16, 34], we show that fast convergence of Q-learning in competitive settings obtains regardless of the number of agents and without any need for parameter fine-tuning. As showcased by our experiments in network zero-sum games, these theoretical results provide the necessary guarantees for an algorithmic approach to the currently open problem of equilibrium selection in competitive multi-agent settings.
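The following is an illustrative simulation rather than the paper's experiments: smooth Q-learning with softmax (Boltzmann) exploration in the simplest zero-sum game, Matching Pennies. The specific update rule, the heterogeneous temperatures, and the learning rate are assumptions for illustration; with positive exploration rates the joint strategy settles at the quantal response equilibrium instead of cycling.

```python
import numpy as np

A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # row player's payoffs; column player gets -A
temps = (0.5, 0.7)                          # heterogeneous exploration rates
lr = 0.1

def softmax(q, temp):
    z = np.exp(q / temp - np.max(q / temp))
    return z / z.sum()

Q = [np.zeros(2), np.zeros(2)]
for t in range(20_000):
    x = softmax(Q[0], temps[0])             # row player's mixed strategy
    y = softmax(Q[1], temps[1])             # column player's mixed strategy
    # Smooth Q-learning: relax Q-values toward expected payoffs against the opponent's play.
    Q[0] += lr * (A @ y - Q[0])
    Q[1] += lr * (-A.T @ x - Q[1])

print("row strategy:", softmax(Q[0], temps[0]))   # both converge to ~(0.5, 0.5),
print("col strategy:", softmax(Q[1], temps[1]))   # the QRE of Matching Pennies
```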
We propose a Safe Pontryagin Differentiable Programming (Safe PDP) methodology, which establishes a theoretical and algorithmic framework to solve a broad class of safety-critical learning and control tasks--problems that require the guarantee of safety constraint satisfaction at any stage of the learning and control process. In the spirit of interior-point methods, Safe PDP handles different types of system constraints on states and inputs by incorporating them into the cost or loss through barrier functions. We prove three fundamentals of the proposed Safe PDP: first, both the solution and its gradient in the backward pass can be approximated by solving their more efficient unconstrained counterparts; second, the approximation of both the solution and its gradient can be made arbitrarily accurate by controlling a barrier parameter; and third, importantly, all intermediate results throughout the approximation and optimization strictly respect the constraints, thus guaranteeing safety throughout the entire learning and control process. We demonstrate the capabilities of Safe PDP in solving various safety-critical tasks, including safe policy optimization, safe motion planning, and learning MPCs from demonstrations, on challenging systems such as a 6-DoF maneuvering quadrotor and a 6-DoF rocket-powered landing.
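Here is a toy sketch of the interior-point idea Safe PDP builds on; the scalar problem and plain gradient method are assumptions for illustration, whereas the paper works with full optimal-control problems. An inequality constraint is folded into the cost through a log barrier, the barrier parameter is shrunk, and every intermediate iterate stays strictly feasible, which is the "safety throughout" property in miniature.

```python
cost = lambda x: (x - 2.0) ** 2          # objective to minimize
constraint = lambda x: x - 1.0           # safety constraint: constraint(x) <= 0, i.e. x <= 1

x = 0.0                                  # strictly feasible start, constraint(0) = -1 < 0
for mu in (1.0, 0.1, 0.01, 0.001):       # decreasing barrier parameter
    for _ in range(500):
        # Gradient of the barrier-augmented cost: cost(x) - mu * log(-constraint(x)).
        grad = 2.0 * (x - 2.0) - mu / constraint(x)
        x -= 0.1 * mu * grad             # step size shrinks with mu for stability
        assert constraint(x) < 0.0       # every intermediate iterate stays safe
    print(f"mu={mu:g}\tx={x:.4f}\tcost={cost(x):.4f}")
# x approaches the constrained optimum x = 1 from the feasible side as mu shrinks.
```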
Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus. Autoregressive language models are emerging as the de facto standard for generating answers, with newer and more powerful systems emerging at an astonishing pace. In this paper we argue that all this (and future) progress can be directly applied to the retrieval problem with minimal intervention to the models' architecture. Previous work has explored ways to partition the search space into hierarchical structures and retrieve documents by autoregressively generating their unique identifier. In this work we propose an alternative that doesn't force any structure in the search space: using all ngrams in a passage as its possible identifiers. This setup allows us to use an autoregressive model to generate and score distinctive ngrams that are then mapped to full passages through an efficient data structure. Empirically, we show this not only outperforms prior autoregressive approaches but also leads to an average improvement of at least 10 points over more established retrieval solutions for passage-level retrieval on the KILT benchmark, establishing new state-of-the-art downstream performance on some datasets, while using a considerably lighter memory footprint than competing systems.
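A toy sketch of the ngrams-as-identifiers idea follows; the plain dictionary stands in for the efficient data structure the abstract mentions, and the hand-written scores stand in for an autoregressive model's ngram scores, so all of the passages, ngram lengths, and numbers below are assumptions for illustration.

```python
from collections import defaultdict

passages = {
    "p1": "the james webb space telescope launched in december",
    "p2": "the telescope sends images back to earth",
    "p3": "ray tracing produces realistic lighting in games",
}

# Index every ngram (here: unigrams and bigrams) of every passage.
ngram_to_passages = defaultdict(set)
for pid, text in passages.items():
    tokens = text.split()
    for n in (1, 2):
        for i in range(len(tokens) - n + 1):
            ngram_to_passages[" ".join(tokens[i:i + n])].add(pid)

# Pretend the autoregressive model generated these ngrams for a telescope query,
# each with a score (higher = more likely under the model).
generated = {"space telescope": 2.3, "telescope": 1.1, "images back": 0.9}

# Map the scored ngrams back to full passages and aggregate the evidence.
scores = defaultdict(float)
for ngram, score in generated.items():
    for pid in ngram_to_passages.get(ngram, ()):
        scores[pid] += score

print(sorted(scores.items(), key=lambda kv: -kv[1]))   # passages ranked by evidence
```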
Welcome to insideBIGDATA's annual technology predictions round-up! The big data industry has significant momentum moving into 2023. In order to give our valued readers a pulse on important new trends leading into next year, we here at insideBIGDATA heard from all our friends across the vendor ecosystem to get their insights, reflections and predictions for what may be coming. We were very encouraged to hear such exciting perspectives. Even if only half actually come true, Big Data in the next year is destined to be quite an exciting ride. There are many reasons why a customer would choose to implement their architecture on multiple clouds, whether technology-, market-, or business-driven. When this happens, transactional and operational data often ends up stored on multiple cloud platforms. The challenge this brings is how to gain insight into that data without resorting to implementing multiple disparate data platforms. Historically data virtualization tools have been ...
On a cloudy Christmas morning last year, a rocket carrying the most powerful space telescope ever built blasted off from a launchpad in French Guiana. After reaching its destination in space about a month later, the James Webb Space Telescope (JWST) began sending back sparkling presents to humanity--jaw-dropping images that are revealing our universe in stunning new ways. Every year since 1988, Popular Science has highlighted the innovations that make living on Earth even a tiny bit better. And this year--our 35th--has been remarkable, thanks to the successful deployment of the JWST, which earned our highest honor as the Innovation of the Year. But it's just one item out of the 100 stellar technological accomplishments our editors have selected to recognize. The list below represents months of research, testing, discussion, and debate. It celebrates exciting inventions that are improving our lives in ways both big and small. These technologies and discoveries are teaching us about the ...