Enhancing LLM's Cognition via Structurization
When reading long-form text, human cognition is complex and structurized. Large language models (LLMs), by contrast, process input contexts from a causal, sequential perspective, which can limit their ability to handle intricate and complex inputs effectively. To enhance LLMs' cognitive capability, this paper presents a novel concept of context structurization. Specifically, we transform plain, unordered contextual sentences into well-ordered, hierarchically structurized elements. By doing so, LLMs can better grasp intricate and extended contexts through precise attention and information-seeking along the organized structures. Extensive evaluations are conducted across various model architectures and sizes (including a series of auto-regressive LLMs as well as BERT-like masking models) on a diverse set of NLP tasks (e.g., context-based question answering, exhaustive hallucination evaluation, and passage-level dense retrieval). Empirical results show consistent and significant performance gains afforded by a single-round structurization. In particular, we boost the open-sourced LLaMA2-70B model to achieve performance comparable to GPT-3.5-Turbo.
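To make the transformation concrete, here is a minimal sketch of what a structurized context could look like; the scope/aspect/description hierarchy and all names below are illustrative assumptions rather than the paper's exact schema.

```python
from dataclasses import dataclass, field

# Illustrative three-level hierarchy (scope -> aspects -> descriptions);
# the paper's actual structurization schema may differ.
@dataclass
class Aspect:
    title: str                                   # one sub-topic of the passage
    descriptions: list[str] = field(default_factory=list)

@dataclass
class StructurizedContext:
    scope: str                                   # overall topic of the passage
    aspects: list[Aspect] = field(default_factory=list)

    def render(self) -> str:
        """Serialize the hierarchy back into well-ordered, prompt-ready text."""
        lines = [f"Scope: {self.scope}"]
        for aspect in self.aspects:
            lines.append(f"Aspect: {aspect.title}")
            lines.extend(f"  - {d}" for d in aspect.descriptions)
        return "\n".join(lines)

ctx = StructurizedContext(
    scope="Grid carbon intensity",
    aspects=[Aspect("Definition", ["CO2 emitted per kWh of electricity."]),
             Aspect("Relevance", ["Varies by region and hour of day."])],
)
print(ctx.render())
```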
Off to new Shores: A Dataset & Benchmark for (near-)coastal Flood Inundation Forecasting
Floods are among the most common and devastating natural hazards, imposing immense costs on our society and economy due to their disastrous consequences. Recent progress in weather prediction and spaceborne flood mapping has demonstrated the feasibility of anticipating extreme events and of reliably detecting their catastrophic effects afterwards. However, these efforts are rarely linked to one another, and there is a critical lack of datasets and benchmarks enabling the direct forecasting of flood extent. To resolve this issue, we curate a novel dataset enabling timely prediction of flood extent. Furthermore, we provide a representative evaluation of state-of-the-art methods, structured into two benchmark tracks for forecasting flood inundation maps i) in general and ii) focused on coastal regions. Altogether, our dataset and benchmark provide a comprehensive platform for evaluating flood forecasts, enabling future solutions to this critical challenge. Data, code & models are shared at https://github.com/Multihuntr/GFF
Balance Risk and Reward: A Batched-Bandit Strategy for Automated Phased Release
Phased releases are a common strategy in the technology industry for gradually releasing new products or updates through a sequence of A/B tests in which the number of treated units grows until full deployment or deprecation. Performing phased releases in a principled way requires selecting the proportion of units assigned to the new release so as to balance the risk of an adverse effect with the need to iterate and learn from the experiment rapidly. In this paper, we formalize this problem and propose an algorithm that automatically determines the release percentage at each stage in the schedule, balancing the need to control risk while maximizing ramp-up speed. Our framework models the challenge as a constrained batched bandit problem and ensures that, with high probability, our pre-specified experimental budget is not depleted. The proposed algorithm leverages an adaptive Bayesian approach in which the maximal number of units assigned to the treatment is determined by the posterior distribution, keeping the probability of depleting the remaining budget low. Notably, our approach solves for the ramp sizes analytically by inverting probability bounds, eliminating the need for challenging rare-event Monte Carlo simulation. It only requires computing means and variances of outcome subsets, making it highly efficient and parallelizable.
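To illustrate the bound-inversion step, here is a minimal sketch that picks the largest ramp size satisfying a Gaussian tail bound on budget depletion, assuming the posterior over per-unit cost is summarized by a mean and standard deviation; the bound, names, and numbers are illustrative, not the paper's exact algorithm.

```python
import math
from statistics import NormalDist

def max_ramp_size(mu: float, sigma: float, budget: float, delta: float) -> int:
    """Largest n with n*mu + z*sqrt(n)*sigma <= budget, z = Phi^{-1}(1 - delta).

    Illustrative Gaussian bound: treating n units costs about n*mu in
    expectation, and we keep the delta-tail of the total cost under budget.
    Assumes mu > 0 (treatment consumes budget on average).
    """
    z = NormalDist().inv_cdf(1.0 - delta)
    # Substitute x = sqrt(n): mu*x^2 + z*sigma*x - budget <= 0; solve the quadratic.
    x = (-z * sigma + math.sqrt((z * sigma) ** 2 + 4.0 * mu * budget)) / (2.0 * mu)
    return max(int(x * x), 0)

# Example: per-unit expected cost 0.5, std 2.0, budget 1000, 5% depletion risk.
print(max_ramp_size(0.5, 2.0, 1000.0, 0.05))  # ~1726 units
```

Because the ramp size depends only on posterior means and variances, each stage's computation is closed-form, with no Monte Carlo simulation in the loop.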
SustainDC: Benchmarking for Sustainable Data Center Control
Machine learning has driven an exponential increase in computational demand, leading to massive data centers that consume significant energy and contribute to climate change. This makes sustainable data center control a priority. In this paper, we introduce SustainDC, a set of Python environments for benchmarking multi-agent reinforcement learning (MARL) algorithms in data centers (DCs). SustainDC supports custom DC configurations and tasks such as workload scheduling, cooling optimization, and auxiliary battery management, with multiple agents managing these operations while accounting for the effects of one another. We evaluate various MARL algorithms on SustainDC, showing their performance across diverse DC designs, locations, weather conditions, grid carbon intensities, and workload requirements. Our results highlight significant opportunities to improve data center operations using MARL algorithms. Given the increasing use of DCs driven by AI, SustainDC provides a crucial platform for developing and benchmarking advanced algorithms essential for achieving sustainable computing and addressing other heterogeneous real-world challenges.
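The environments are described as standard Python RL environments; the loop below is a hypothetical usage sketch in a Gym/PettingZoo style, and every class and method name in it is a stand-in assumption, not SustainDC's actual API.

```python
import random

class SustainDCEnv:
    """Placeholder multi-agent DC environment; illustrative only."""
    agents = ["workload", "cooling", "battery"]

    def reset(self):
        return {a: [0.0] for a in self.agents}

    def step(self, actions):
        obs = {a: [random.random()] for a in self.agents}
        # e.g., reward = negative energy/carbon cost of the joint action
        rewards = {a: -random.random() for a in self.agents}
        done = False
        return obs, rewards, done

env = SustainDCEnv()
obs = env.reset()
for _ in range(96):  # e.g., one simulated day at 15-minute control steps
    actions = {a: random.choice([0, 1, 2]) for a in env.agents}  # random baseline
    obs, rewards, done = env.step(actions)
    if done:
        break
```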
Voxel Mamba: Group-Free State Space Models for Point Cloud based 3D Object Detection
Serialization-based methods, which serialize the 3D voxels and group them into multiple sequences before inputting them to Transformers, have demonstrated their effectiveness in 3D object detection. However, serializing 3D voxels into 1D sequences inevitably sacrifices voxel spatial proximity. This issue is hard to address by enlarging the group size in existing serialization-based methods, due to the quadratic complexity of Transformers with respect to feature size. Inspired by recent advances in state space models (SSMs), we present a Voxel SSM, termed Voxel Mamba, which employs a group-free strategy to serialize the whole space of voxels into a single sequence. The linear complexity of SSMs enables our group-free design, alleviating the loss of spatial proximity among voxels. To further enhance spatial proximity, we propose a Dual-scale SSM Block to establish a hierarchical structure, enabling a larger receptive field in the 1D serialization curve as well as more complete local regions in 3D space. Moreover, we implicitly apply a window partition under the group-free framework via positional encoding, which further enhances spatial proximity by encoding voxel positional information. Our experiments on the Waymo Open Dataset and the nuScenes dataset show that Voxel Mamba not only achieves higher accuracy than state-of-the-art methods, but also demonstrates significant advantages in computational efficiency.
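To illustrate what serializing the entire voxel space into a single sequence can look like, here is a minimal sketch using a Z-order (Morton) curve; the paper's own serialization curve and implementation may differ.

```python
import numpy as np

def morton_code(coords: np.ndarray, bits: int = 10) -> np.ndarray:
    """Interleave the bits of (x, y, z) voxel indices into one Z-order key.

    Sorting by this key lays all voxels on a single space-filling curve,
    so nearby voxels in 3D tend to stay nearby in the 1D sequence.
    """
    code = np.zeros(len(coords), dtype=np.uint64)
    xyz = coords.astype(np.uint64)
    for b in range(bits):
        for axis in range(3):
            code |= ((xyz[:, axis] >> b) & 1) << (3 * b + axis)
    return code

voxels = np.random.randint(0, 1024, size=(100_000, 3))   # occupied voxel indices
sequence = voxels[np.argsort(morton_code(voxels))]        # one group-free sequence
```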
Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias
Large language models (LLMs) have recently been leveraged as training data generators for various natural language processing (NLP) tasks. While previous research has explored different approaches to training models on generated data, these approaches generally rely on simple class-conditional prompts, which may limit the diversity of the generated data and inherit the systematic biases of the LLM. Thus, we investigate training data generation with diversely attributed prompts (e.g., specifying attributes like length and style), which have the potential to yield diverse and attributed generated data. Our investigation focuses on datasets with high cardinality and diverse domains, wherein we demonstrate that attributed prompts outperform simple class-conditional prompts in terms of the resulting model's performance.
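A minimal sketch of diversely attributed prompt construction; the attribute names and values below are illustrative, not the paper's actual attribute taxonomy.

```python
import random

# Illustrative attribute pools; a real setup would derive these per dataset.
ATTRS = {
    "length": ["short", "medium", "long"],
    "style": ["formal", "casual", "technical"],
    "subtopic": ["pricing", "shipping", "customer service"],
}

def attributed_prompt(label: str) -> str:
    """Fill a class-conditional template with randomly sampled attributes."""
    picks = {name: random.choice(values) for name, values in ATTRS.items()}
    return (f"Write a {picks['length']}, {picks['style']} product review about "
            f"{picks['subtopic']} that expresses {label} sentiment.")

print(attributed_prompt("positive"))
print(attributed_prompt("negative"))
```

Sampling a fresh attribute combination per generation spreads the prompts across the attribute grid, which is the intuition behind the diversity gains over a single class-conditional template.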
Interpolating Item and User Fairness in Multi-Sided Recommendations
Today's online platforms heavily lean on algorithmic recommendations to bolster user engagement and drive revenue. However, these recommendations can impact multiple stakeholders simultaneously: the platform, items (sellers), and users (customers), each with their own objectives, making it difficult to find the right middle ground that accommodates all stakeholders.
I tried Google's new "Try it on" AI shopping tool. I'm equally impressed and mortified.
At Google I/O 2025, the tech company announced a ton of new AI features, and one of the most interesting is a virtual clothing try-on tool. The Google Shopping "Try it on" feature lets users upload a photo of themselves and then virtually try on clothes, basically the IRL version of the Clueless closet millennials have been dreaming about since 1995. Or, as Mashable Shopping Reporter Haley Henschel put it, "Google's latest shopping feature makes Cher Horowitz's computerized closet a reality." Almost as soon as the feature was released, users started trying to "jailbreak" the tool, which is becoming a fun little tradition for tech writers every time a new AI model or tool is released. On Friday, The Atlantic reported that "Google's new AI shopping tool appears eager to give J.D. Vance breasts."
This Google Chrome update could change the fundamentals of browsing - here's who gets to try it first
Google's Chrome browser for MacOS and Windows is receiving an infusion of new Gemini-powered capabilities, including an AI browsing assistant that is contextually sensitive to a user's browsing activities. Google made the announcement this week at Google I/O 2025. Dubbed Gemini-in-Chrome, the feature will be available May 21 to Google AI Pro and Google AI Ultra subscribers in the US, as well as Chrome Beta, Dev, and Canary users. The general idea behind Gemini-in-Chrome is to reorganize, aggregate, and then more sensibly redisplay the data found on one or more browser tabs, while also embellishing the final output with additional but relevant Gemini-generated information. For example, during a pre-event press briefing attended by ZDNET, Google director of Chrome product management Charmaine D'Silva demonstrated how Gemini-in-Chrome could not only organize a head-to-head feature comparison chart of individual sleeping bags referenced across multiple Chrome tabs (one tab per sleeping bag), but could also respond to text prompts about each bag's suitability for the expected temperatures on an upcoming camping trip in Maine.
Revisiting 3D Object Detection From an Egocentric Perspective
In applications such as autonomous driving, we care most about how detections impact the ego-agent's behavior and safety (the egocentric perspective). Intuitively, we seek more accurate descriptions of object geometry when an object is more likely to interfere with the ego-agent's motion trajectory. However, current detection metrics, based on box Intersection-over-Union (IoU), are object-centric and are not designed to capture the spatio-temporal relationship between objects and the ego-agent. To address this issue, we propose a new egocentric measure for evaluating 3D object detection: Support Distance Error (SDE). Our analysis based on SDE reveals that egocentric detection quality is bounded by the coarse geometry of bounding boxes. Given the insight that SDE can be improved by more accurate geometry descriptions, we propose to represent objects as amodal contours, specifically amodal star-shaped polygons, and devise a simple model, StarPoly, to predict such contours. Our experiments on the large-scale Waymo Open Dataset show that SDE better reflects the impact of detection quality on the ego-agent's safety than IoU does, and that the contours estimated by StarPoly consistently improve egocentric detection quality over recent 3D object detectors.
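As a rough illustration of the egocentric idea, the sketch below computes a simplified support-distance error in bird's-eye view as the gap between ego-to-contour distances of a predicted and a ground-truth shape; this vertex-based reading is an assumption, and the paper's exact SDE definition may differ.

```python
import numpy as np

def support_distance(ego_xy: np.ndarray, contour_xy: np.ndarray) -> float:
    """Distance from the ego position to the closest contour vertex (BEV)."""
    return float(np.min(np.linalg.norm(contour_xy - ego_xy, axis=1)))

def support_distance_error(ego_xy, pred_contour, gt_contour) -> float:
    """Simplified SDE: error in how close the object is said to be to the ego."""
    return abs(support_distance(ego_xy, pred_contour)
               - support_distance(ego_xy, gt_contour))

ego = np.zeros(2)
gt = np.array([[4.0, 1.0], [4.0, -1.0], [6.0, -1.0], [6.0, 1.0]])  # true corners
pred = gt + np.array([0.5, 0.0])                                    # shifted prediction
print(support_distance_error(ego, pred, gt))
```

Unlike IoU, this quantity grows only when the prediction misstates the free space between the object and the ego-agent, which is what matters for safety.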