What message does Ukraine's Operation Spider's Web send to Russia and US?

Al Jazeera

Ukraine carries out large-scale drone strikes on multiple Russian airbases. Eighteen months in the making, Ukraine's Operation Spider's Web saw hundreds of AI-trained drones target military aircraft deep inside Russia's borders. Ukrainian President Volodymyr Zelenskyy says Sunday's attacks will go down in history. He followed them up with a proposal for an unconditional ceasefire as the two sides met in Istanbul. The European Union is preparing its 18th package of sanctions on Russia, while US President Donald Trump has threatened to use "devastating" measures against Russia if he feels the time is right. So, is the time right now?


SXSW launches first London festival with its eye fixed on AI

Mashable

Lanyard-clad attendees with branded tote bags and pink-shirted volunteers flowed through London's Brick Lane on Monday, marking the launch of the inaugural SXSW London festival. Taking place over multiple stages and venues in Shoreditch and Hoxton, SXSW London has officially kicked off its first full day of panels, keynotes, demonstrations, movie premieres, and music gigs. And luckily, Londoners are no strangers to a queue, with SXSW's penchant for long lines outside Austin venues replicated in the UK capital. In keeping with its fellow conferences, SXSW London's biggest topics are the impact of AI on essentially anything you could think of, the creator economy and online communities, and self-driving tech: I spied a Wayve autonomous vehicle carefully navigating the pedestrian-filled Brick Lane (with a human driver behind the wheel, just in case). London mayor Sadiq Khan officially launched the festival with a speech Monday morning, championing London as "a global centre for AI investment and innovation," emphasising a focus on ethical and accessible AI development, and playing to the audience with a ChatGPT anecdote.


Bill Gates to give most of his $200bn fortune to Africa

BBC News

"I recently made a commitment that my wealth will be given away over the next 20 years. The majority of that funding will be spent on helping you address challenges here in Africa," he said in an address at the African Union (AU) headquarters. Mozambique's former First Lady Graça Machel welcomed his announcement, saying it came in a "moment of crisis". "We are counting on Mr Gates' steadfast commitment to continue walking this path of transformation alongside us," she said. The US government has cut aid to Africa, including programmes to treat patients with HIV/Aids, as part of US President Donald Trump's "America First" policy, raising concerns about the future of healthcare on the continent.


Neuro-Vision to Language: Enhancing Brain Recording-based Visual Reconstruction and Language Interaction

Neural Information Processing Systems

Decoding non-invasive brain recordings is pivotal for advancing our understanding of human cognition but faces challenges due to individual differences and complex neural signal representations. Traditional methods often require customized models and extensive trials, lacking interpretability in visual reconstruction tasks.


GSDF: 3DGS Meets SDF for Improved Neural Rendering and Reconstruction

Neural Information Processing Systems

Representing 3D scenes from multiview images remains a core challenge in computer vision and graphics, requiring both reliable rendering and reconstruction, goals that often conflict because image quality is prioritized over precise underlying scene geometry. Although both neural implicit surfaces and explicit Gaussian primitives have advanced with neural rendering techniques, current methods impose strict constraints on density fields or primitive shapes, which improves geometric reconstruction at the expense of rendering quality. To address this dilemma, we introduce GSDF, a dual-branch architecture combining 3D Gaussian Splatting (3DGS) and neural Signed Distance Fields (SDF). Our approach leverages mutual guidance and joint supervision during training to enhance both reconstruction and rendering. Specifically, our method guides the Gaussian primitives to lie near potential surfaces and accelerates SDF convergence. This implicit mutual guidance ensures robustness and accuracy in both synthetic and real-world scenarios. Experimental results demonstrate that our method boosts the SDF optimization process to reconstruct more detailed geometry, while reducing floaters and blurry edge artifacts in rendering by aligning Gaussian primitives with the underlying geometry.
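
The "guide Gaussians toward the surface" idea can be illustrated with a toy loss term that penalizes Gaussian centers whose signed distance is far from zero. This is a minimal sketch of the general technique, not the authors' implementation; the names `TinySDF`, `surface_guidance_loss`, and `centers` are illustrative assumptions.

```python
# Hypothetical sketch of mutual guidance between a Gaussian branch and an SDF
# branch: pull Gaussian centers toward the SDF zero level set. Not the GSDF
# codebase; names and architecture are illustrative only.
import torch
import torch.nn as nn

class TinySDF(nn.Module):
    """A minimal MLP standing in for the neural SDF branch."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        return self.net(xyz)  # signed-distance estimate per 3D point

def surface_guidance_loss(sdf_net, gaussian_centers):
    """Penalize Gaussian primitives whose centers lie far from the SDF
    zero level set, so they drift toward the estimated surface."""
    sdf_values = sdf_net(gaussian_centers)  # shape (N, 1)
    return sdf_values.abs().mean()

# Toy usage: 1024 random Gaussian centers, jointly optimized with the SDF MLP.
sdf_net = TinySDF()
centers = torch.randn(1024, 3, requires_grad=True)
loss = surface_guidance_loss(sdf_net, centers)
loss.backward()  # gradients flow to both the SDF parameters and the centers
```

In a full pipeline this term would be one of several joint-supervision losses, added alongside the usual rendering and SDF reconstruction objectives.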


From Boltzmann Machines to Neural Networks and Back Again

Neural Information Processing Systems

Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well-known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models.


Exponential Quantum Communication Advantage in Distributed Inference and Learning

Neural Information Processing Systems

Training and inference with large machine learning models that far exceed the memory capacity of individual devices necessitates the design of distributed architectures, forcing one to contend with communication constraints. We present a framework for distributed computation over a quantum network in which data is encoded into specialized quantum states. We prove that for models within this framework, inference and training using gradient descent can be performed with exponentially less communication compared to their classical analogs, and with modest overhead relative to standard gradient-based methods. We show that certain graph neural networks are particularly amenable to implementation within this framework, and moreover present empirical evidence that they perform well on standard benchmarks. To our knowledge, this is the first example of exponential quantum advantage for a generic class of machine learning problems that holds regardless of the data encoding cost. Moreover, we show that models in this class can encode highly nonlinear features of their inputs, and their expressivity increases exponentially with model depth. We also delineate the space of models for which exponential communication advantages hold by showing that they cannot hold for linear classification. Because communicating quantum states potentially limits the amount of information that can be extracted from them about the data and model parameters, this approach may also lead to improved privacy guarantees for distributed computation. Taken as a whole, these findings form a promising foundation for distributed machine learning over quantum networks.
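
To make the communication constraint concrete, here is a toy classical baseline in which a model split across two devices must ship its intermediate activation over the network at every forward pass. This sketch only illustrates the classical cost the paper aims to reduce; it is not the paper's quantum protocol, and the layer sizes and helper names are assumptions.

```python
# Toy classical baseline for distributed inference: a two-layer model split
# across two "devices" that must exchange the intermediate activation.
# Illustrative only; not the quantum scheme described in the abstract.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 512, 1024, 16

# Device A holds the first layer's weights, device B the second layer's.
W1 = rng.standard_normal((d_in, d_hidden))
W2 = rng.standard_normal((d_hidden, d_out))

def device_a_forward(x):
    return np.maximum(x @ W1, 0.0)   # ReLU activation computed on device A

def device_b_forward(h):
    return h @ W2                    # output logits computed on device B

x = rng.standard_normal((1, d_in))
h = device_a_forward(x)              # this tensor must cross the network
y = device_b_forward(h)

bytes_sent = h.nbytes                # classical cost scales with hidden width
print(f"activation bytes communicated per example: {bytes_sent}")
```

Classically, the communicated payload grows with the width of the cut through the model; the paper's claim is that suitable quantum encodings can shrink this cost exponentially for a class of models.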


Design from Policies: Conservative Test-Time Adaptation for Offline Policy Optimization Zifeng Zhuang 1,2

Neural Information Processing Systems

Specifically, this non-iterative paradigm allows us to conduct inner-level optimization (value estimation) in training, while performing outer-level optimization (policy extraction) in testing. Naturally, such a paradigm raises three core questions that are not fully answered by prior non-iterative offline RL counterparts like the reward-conditioned policy: Q1) What information should we transfer from the inner level to the outer level? Q2) What should we pay attention to when exploiting the transferred information for safe/confident outer-level optimization? Q3) What are the benefits of concurrently conducting outer-level optimization during testing? Motivated by model-based optimization (MBO), we propose DROP (Design fROm Policies), which fully answers the above questions. Specifically, in the inner level, DROP decomposes offline data into multiple subsets and learns an MBO score model (A1). To keep exploitation of the score model safe in the outer level, we explicitly learn a behavior embedding and introduce a conservative regularization (A2). During testing, we show that DROP permits test-time adaptation, enabling adaptive inference across states (A3). Empirically, we find that DROP, compared to prior non-iterative offline RL counterparts, gains an average improvement probability of more than 80%, and achieves comparable or better performance than prior iterative baselines.
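
The outer-level, test-time step can be sketched as a small embedding search: at each state, gradient-ascend on a learned score model while a conservative penalty keeps the candidate close to the dataset's behavior embeddings. This is a hedged illustration of the general idea, assuming hypothetical `score_model` and `behavior_z` objects, not the authors' code.

```python
# Hedged sketch of DROP-style test-time (outer-level) optimization.
# score_model(state, z) -> predicted return, behavior_z: (M, d_z) embeddings
# learned from the offline data. All names are illustrative assumptions.
import torch

def test_time_embedding(score_model, state, behavior_z,
                        conservative_weight=1.0, steps=20, lr=0.1):
    """Search for an embedding z that scores well at `state` while staying
    close to the behavior embeddings (conservative, in-support search)."""
    z = behavior_z.mean(dim=0, keepdim=True).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        score = score_model(state, z).mean()                  # predicted return
        penalty = (z - behavior_z).pow(2).sum(dim=-1).min()   # distance to support
        loss = -(score - conservative_weight * penalty)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()
```

Because the search is re-run per state, the extracted policy can adapt at test time rather than committing to a single embedding chosen during training.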


FairJob: A Real-World Dataset for Fairness in Online Systems

Neural Information Processing Systems

We introduce a fairness-aware dataset for job recommendation in advertising, designed to foster research in algorithmic fairness within real-world scenarios. It was collected and prepared to comply with privacy standards and business confidentiality. An additional challenge is the lack of access to protected user attributes such as gender, for which we propose a solution to obtain a proxy estimate. Despite being anonymized and including a proxy for a sensitive attribute, our dataset preserves predictive power and provides a realistic and challenging benchmark. This dataset addresses a significant gap in the availability of fairness-focused resources for high-impact domains like advertising, where the real-world stakes are access to valuable employment opportunities and balancing fairness and utility is a common industrial challenge. We also explore various stages in the advertising process where unfairness can occur and introduce a method to compute a fair utility metric for job recommendations in online systems from a biased dataset. Experimental evaluations of bias mitigation techniques on the released dataset demonstrate potential improvements in fairness and the associated trade-offs with utility.
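
As a minimal illustration of how a proxy sensitive attribute is used in practice, the sketch below computes a standard demographic-parity gap over recommendations; this is a generic fairness metric, not necessarily the fair utility metric proposed in the paper, and the variable names are assumptions.

```python
# Minimal fairness check with a proxy sensitive attribute, in the spirit of
# the FairJob setup. Demographic parity gap is a standard metric used here
# for illustration only.
import numpy as np

def demographic_parity_gap(predictions, proxy_attribute):
    """Absolute gap in positive-recommendation rates between two proxy groups."""
    predictions = np.asarray(predictions, dtype=float)
    proxy_attribute = np.asarray(proxy_attribute, dtype=int)
    rate_g0 = predictions[proxy_attribute == 0].mean()
    rate_g1 = predictions[proxy_attribute == 1].mean()
    return abs(rate_g0 - rate_g1)

# Toy usage: random scores thresholded into job-ad recommendations.
rng = np.random.default_rng(0)
scores = rng.random(10_000)
proxy = rng.integers(0, 2, size=10_000)   # proxy estimate of the attribute
recommended = (scores > 0.7).astype(int)
print(f"demographic parity gap: {demographic_parity_gap(recommended, proxy):.4f}")
```

Because the attribute is only a proxy, any measured gap is itself an estimate, which is part of what makes the benchmark realistic.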


A Unifying View of Optimism in Episodic Reinforcement Learning

Neural Information Processing Systems

In this paper we provide a general framework for designing, analyzing and implementing optimistic algorithms for the episodic reinforcement learning problem. This framework is built upon Lagrangian duality, and demonstrates that every model-optimistic algorithm that constructs an optimistic MDP has an equivalent representation as a value-optimistic dynamic programming algorithm. These two classes of algorithms were typically thought to be distinct, with model-optimistic algorithms benefiting from a cleaner probabilistic analysis and value-optimistic algorithms being easier to implement and thus more practical. With the framework developed in this paper, we show that it is possible to get the best of both worlds by providing a class of algorithms which have a computationally efficient dynamic-programming implementation and also a simple probabilistic analysis. Besides being able to capture many existing algorithms in the tabular setting, our framework can also address large-scale problems under realizable function approximation, where it enables a simple model-based analysis of some recently proposed methods.
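
For readers unfamiliar with the value-optimistic side of this equivalence, here is a minimal sketch of optimistic backward induction in a tabular episodic MDP, where an exploration bonus is added to the estimated rewards. This is the standard UCB-VI-style template the abstract alludes to, not the paper's specific algorithm; `P_hat`, `r_hat`, and `bonus` are assumed inputs, with rewards taken to lie in [0, 1].

```python
# Sketch of value-optimistic dynamic programming for a tabular episodic MDP:
# backward induction with an optimism bonus added to the estimated rewards.
# Generic template for illustration; not the paper's algorithm.
import numpy as np

def optimistic_value_iteration(P_hat, r_hat, bonus, horizon):
    """P_hat: (S, A, S) estimated transitions, r_hat: (S, A) estimated rewards
    in [0, 1], bonus: (S, A) optimism bonuses (e.g. derived from visit counts)."""
    S, A, _ = P_hat.shape
    Q = np.zeros((horizon + 1, S, A))
    V = np.zeros((horizon + 1, S))
    for h in range(horizon - 1, -1, -1):
        Q[h] = r_hat + bonus + P_hat @ V[h + 1]   # optimistic Bellman backup
        Q[h] = np.minimum(Q[h], horizon - h)      # clip to the maximum return
        V[h] = Q[h].max(axis=1)
    policy = Q[:horizon].argmax(axis=2)           # greedy w.r.t. optimistic Q
    return Q, V, policy
```

A model-optimistic method would instead perturb `P_hat` itself to build an optimistic MDP; the framework's point is that both views can yield the same dynamic-programming procedure.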