Information Technology
NVIDIA and Bolt team up for European robotaxis
The companies haven't yet announced a timeline. At GTC 2026, NVIDIA and Bolt announced what they hope will be a symbiotic partnership. Bolt gets NVIDIA technology that would be costly and impractical to build on its own. Meanwhile, NVIDIA not only gains a major customer but also access to the European rideshare company's driving data. Bolt says its fleet data will build a learning engine for autonomous vehicles (AVs) using NVIDIA tech.
- Information Technology > Hardware (1.00)
- Transportation > Ground > Road (0.42)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (0.73)
Distributed Learning without Distress: Privacy-Preserving Empirical Risk Minimization
Distributed learning allows a group of independent data owners to collaboratively learn a model over their data sets without exposing their private data. We present a distributed learning approach that combines differential privacy with secure multi-party computation. We explore two popular methods of differential privacy, output perturbation and gradient perturbation, and advance the state-of-the-art for both methods in the distributed learning setting. In our output perturbation method, the parties combine local models within a secure computation and then add the required differential privacy noise before revealing the model. In our gradient perturbation method, the data owners collaboratively train a global model via an iterative learning algorithm. At each iteration, the parties aggregate their local gradients within a secure computation, adding sufficient noise to ensure privacy before the gradient updates are revealed. For both methods, we show that the noise can be reduced in the multi-party setting by adding the noise inside the secure computation after aggregation, asymptotically improving upon the best previous results. Experiments on real world data sets demonstrate that our methods provide substantial utility gains for typical privacy requirements.
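The core claim — that adding one noise draw after aggregation inside the secure computation beats each party noising locally — can be illustrated with a toy gradient-perturbation round. This is a hedged sketch, not the paper's protocol: the secure computation is only simulated in the clear, and the party count, dimension, and noise scale are illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

m, d = 10, 5          # number of parties, gradient dimension (toy values)
sigma = 4.0           # noise scale calibrated for some fixed (epsilon, delta)

# Each party's local gradient, clipped to L2 norm 1 (the sensitivity bound).
local_grads = rng.normal(size=(m, d))
local_grads /= np.maximum(1.0, np.linalg.norm(local_grads, axis=1, keepdims=True))

# Local perturbation: every party adds its own noise before sharing, so the
# averaged gradient carries noise with per-coordinate std sigma / sqrt(m).
noisy_local = local_grads + rng.normal(scale=sigma, size=(m, d))
avg_local = noisy_local.mean(axis=0)

# Aggregate-then-perturb: average inside the (simulated) secure computation,
# then add a single noise draw before revealing. One party changing its data
# moves the average by at most 2/m, so the required noise scale shrinks by
# a factor of m, not sqrt(m) -- the source of the asymptotic improvement.
avg_secure = local_grads.mean(axis=0) + rng.normal(scale=sigma / m, size=d)
```

The gap between `sigma / m` and `sigma / np.sqrt(m)` is exactly the multi-party noise reduction the abstract describes.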
Bayesian Adversarial Learning
Deep neural networks are known to be vulnerable to adversarial attacks, raising serious security concerns for practical deployment. Popular defensive approaches can be formulated as a (distributionally) robust optimization problem, which minimizes a ``point estimate'' of worst-case loss derived from either per-datum perturbation or an adversarial data-generating distribution within certain pre-defined constraints. This point estimate ignores potential test adversaries that lie beyond the pre-defined constraints, so model robustness can deteriorate sharply when test-time adversarial data are stronger than anticipated. In this work, a novel robust training framework, Bayesian Robust Learning, is proposed to alleviate this issue: a distribution is placed over the adversarial data-generating distribution to account for the uncertainty of the adversarial data-generating process.
A Block Coordinate Ascent Algorithm for Mean-Variance Optimization
Risk management in dynamic decision problems is a primary concern in many fields, including financial investment, autonomous driving, and healthcare. The mean-variance function is one of the most widely used objective functions in risk management due to its simplicity and interpretability. Existing algorithms for mean-variance optimization are based on multi-time-scale stochastic approximation, whose learning rate schedules are often hard to tune, and which offer only asymptotic convergence proofs. In this paper, we develop a model-free policy search framework for mean-variance optimization with finite-sample error bound analysis (to local optima). Our starting point is a reformulation of the original mean-variance function with its Fenchel dual, from which we propose a stochastic block coordinate ascent policy search algorithm. Both the asymptotic convergence guarantee of the last iteration's solution and the convergence rate of a randomly picked solution are provided, and their applicability is demonstrated on several benchmark domains.
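The Fenchel-dual reformulation the abstract mentions can be sketched on a toy one-dimensional problem. Using the identity x^2 = max_y (2xy - y^2), the mean-variance objective E[R] - lam*Var(R) = E[R] - lam*E[R^2] + lam*(E[R])^2 becomes a max over an auxiliary variable y, whose closed-form update is y = E[R]; block coordinate ascent then alternates that update with a gradient step on the policy parameter. This is a hedged illustration on a single-asset allocation toy, not the paper's policy-search algorithm; all names and numbers are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

mu_x, sigma_x = 0.08, 0.2   # toy risky asset: mean return and volatility
lam = 1.0                   # risk-aversion weight on the variance term
theta, y = 0.0, 0.0         # allocation parameter and Fenchel dual variable
alpha = 0.05                # learning rate for the theta block

for t in range(2000):
    x = rng.normal(mu_x, sigma_x, size=64)   # sampled asset returns
    r = theta * x                            # sampled rewards under the policy
    # Block 1: the dual variable has a closed-form update, y <- E[R].
    y = r.mean()
    # Block 2: ascend the inner objective
    #   f(theta, y) = E[R] - lam*E[R^2] + lam*(2*y*E[R] - y^2),
    # whose sample gradient in theta (for R = theta*X) is:
    grad = x.mean() - 2 * lam * theta * (x ** 2).mean() + 2 * lam * y * x.mean()
    theta += alpha * grad

# For J(theta) = theta*mu_x - lam*theta^2*sigma_x^2 the maximizer is
# theta* = mu_x / (2*lam*sigma_x^2) = 1.0 with these toy numbers.
```

The alternation converges toward theta* ≈ 1.0 here because the dual block is exact and the primal block follows an unbiased-up-to-batch-correlation gradient estimate.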
NVIDIA claims DLSS 5 will deliver 'photoreal' image quality with AI this fall
The company plans to rely on AI for more than just additional frames. Just months after announcing DLSS 4.5 at CES, NVIDIA has unveiled its next major upscaling technology, DLSS 5. The company is doubling down on AI for this next iteration, claiming DLSS 5 "infuses pixels with photoreal lighting and materials" using a real-time neural rendering model when it arrives this fall. So what does this mean in practice? In an on-stage demo at NVIDIA's GTC 2026 keynote, CEO Jensen Huang showed off the technology: DLSS 5 adds a noticeable amount of detail to characters' hair and skin tones, but it also appears the comparisons are against those games with all DLSS features turned off.
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Artificial Intelligence (1.00)
Inferring Networks From Random Walk-Based Node Similarities
Digital presence in the world of online social media entails significant privacy risks. In this work we consider a privacy threat to a social network in which an attacker has access to a subset of random walk-based node similarities, such as effective resistances (i.e., commute times) or personalized PageRank scores. Using these similarities, the attacker seeks to infer as much information as possible about the network, including unknown pairwise node similarities and edges. For the effective resistance metric, we show that with just a small subset of measurements, one can learn a large fraction of edges in a social network. We also show that it is possible to learn a graph which accurately matches the underlying network on all other effective resistances.
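The effective-resistance similarities this attack consumes have a clean closed form: R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v), where L^+ is the Moore-Penrose pseudoinverse of the graph Laplacian. A minimal sketch of computing that quantity on a toy graph (the attack in the abstract works in the other direction, inferring edges from a subset of such measurements; the graph here is illustrative):

```python
import numpy as np

# Toy 4-node path graph 0-1-2-3 with unit-weight edges.
edges = [(0, 1), (1, 2), (2, 3)]
n = 4

# Build the graph Laplacian L = D - A.
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

L_pinv = np.linalg.pinv(L)   # Moore-Penrose pseudoinverse (L is singular)

def effective_resistance(u, v):
    """R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v)."""
    e = np.zeros(n)
    e[u], e[v] = 1.0, -1.0
    return e @ L_pinv @ e

# On a path the unit edges behave like series resistors, so R(0, 3) = 3.
```

Commute times mentioned in the abstract are proportional to these values: commute(u, v) = 2 * |E| * R(u, v).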
Efficient Formal Safety Analysis of Neural Networks
Neural networks are increasingly deployed in real-world safety-critical domains such as autonomous driving, aircraft collision avoidance, and malware detection. However, these networks have been shown to often mispredict on inputs with minor adversarial or even accidental perturbations. Consequences of such errors can be disastrous and even potentially fatal, as shown by the recent Tesla autopilot crash. Thus, there is an urgent need for formal analysis systems that can rigorously check neural networks for violations of different safety properties, such as robustness against adversarial perturbations within a certain L-norm of a given image. An effective safety analysis system for a neural network must be able to either ensure that a safety property is satisfied by the network or find a counterexample, i.e., an input for which the network will violate the property. Unfortunately, most existing techniques for performing such analysis struggle to scale beyond very small networks, and the ones that can scale to larger networks suffer from high false positive rates and cannot produce concrete counterexamples in case of a property violation. In this paper, we present a new efficient approach for rigorously checking different safety properties of neural networks that significantly outperforms existing approaches by multiple orders of magnitude. Our approach can check different safety properties and find concrete counterexamples for networks that are 10x larger than the ones supported by existing analysis techniques. We believe that our approach to estimating tight output bounds of a network for a given input range can also help improve the explainability of neural networks and guide the training process of more robust neural networks.
- Transportation (0.59)
- Information Technology > Security & Privacy (0.59)
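The "output bounds for a given input range" idea underlying such safety checks can be illustrated with plain interval arithmetic pushed through a tiny ReLU network. This is a simpler baseline than the paper's approach (which uses tighter relaxations to avoid loose bounds); the weights here are random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer ReLU network: 2 inputs -> 4 hidden units -> 1 output.
layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),
          (rng.normal(size=(1, 4)), rng.normal(size=1))]

def interval_bounds(lo, hi):
    """Propagate an input box [lo, hi] through the network layer by layer.

    Splitting each weight matrix into its positive and negative parts gives
    sound (though possibly loose) per-neuron lower and upper bounds.
    """
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:                # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

lo, hi = interval_bounds(np.array([-0.1, -0.1]), np.array([0.1, 0.1]))
# A property like "output < threshold over this box" is proven if
# hi < threshold; otherwise the bound is inconclusive and a checker must
# refine the analysis or search for a concrete counterexample.
```

Soundness here follows from monotonicity: each output coordinate's lower bound pairs positive weights with input lower bounds and negative weights with input upper bounds, and vice versa.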
Tech companies are teaming up to combat scammers
The Online Services Accord Against Scams was signed by major tech companies including Google, Microsoft and OpenAI. A coalition of Big Tech companies is working on a more comprehensive solution to combat online scams. As first reported by, Google, Microsoft, LinkedIn, Meta, Amazon, OpenAI, Adobe and Match Group announced the signing of the Online Services Accord Against Scams. The new agreement is meant to put up a united industry-wide front against online fraud and scams, particularly those from sophisticated criminal networks that operate across multiple platforms. According to the report, the measures will include adding fraud detection tools, introducing new user security features, and requiring more robust verification for financial transactions.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military > Cyberwarfare (0.41)
- Information Technology > Security & Privacy (1.00)
- Information Technology > Communications > Mobile (1.00)
- Information Technology > Communications > Social Media (0.92)
- (3 more...)
AI is nearly exclusively designed by men – here's how to fix it
With the Trump administration's attacks on so-called "woke" AI, it is becoming even harder to make the technology we use fairer and more diverse. It's day two of the conference at the Royal Society in London, but I'm finding it increasingly hard to concentrate on the speakers because my AI transcription software - which is supposed to make my life easier - keeps insisting on mistyping someone's name. The irony isn't lost on me: this is the session about artificial intelligence, and specifically about how women are being erased from the latest AI technologies. This is much bigger than the now-familiar idea that AI algorithms carry the biases of the datasets they are trained on, including gender bias. Instead, the focus of the conference session, chaired by computer scientist Wendy Hall, is a more fundamental issue: the fact that new AI technologies, which will have a transformative effect on all of society, are being designed almost exclusively by men.
- North America > United States > California (0.06)
- North America > United States > New Hampshire (0.05)
- Europe > United Kingdom (0.05)
- Health & Medicine (1.00)
- Government > Regional Government > North America Government > United States Government (0.91)
- Information Technology (0.71)
- Asia > Middle East > Iran (0.14)
- North America > The Bahamas (0.14)
- North America > Canada > Alberta (0.14)
- (17 more...)
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
- Information Technology (1.00)
- (4 more...)
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Communications > Mobile (0.94)
- Information Technology > Artificial Intelligence (0.68)