
Fair Scheduling for Time-dependent Resources

Neural Information Processing Systems

We study a fair resource scheduling problem, where a set of interval jobs are to be allocated to heterogeneous machines controlled by intellectual agents. Each job is associated with a release time, a deadline, and a processing time such that it can be processed if its complete processing period is between its release time and deadline. The machines gain possibly different utilities by processing different jobs, and all jobs assigned to the same machine should be processed without overlap. We consider two widely studied solution concepts, namely, maximin share fairness and envy-freeness. For both criteria, we discuss the extent to which fair allocations exist and present constant approximation algorithms for various settings.
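The interval-job model above can be sketched as a tiny single-machine feasibility check. This EDF-style greedy (sort by deadline, start each job as early as its release time allows) is only a heuristic for the non-preemptive case, not the paper's algorithm, and the job tuples below are invented.

```python
# Each job is (release, deadline, processing): it must run without preemption,
# entirely inside [release, deadline], and jobs on one machine may not overlap.
def schedulable(jobs):
    t = 0
    # Heuristic: try jobs in earliest-deadline order, starting each as early
    # as possible. A failure here does not prove infeasibility in general.
    for release, deadline, proc in sorted(jobs, key=lambda j: j[1]):
        start = max(t, release)
        if start + proc > deadline:
            return False
        t = start + proc
    return True

print(schedulable([(0, 4, 2), (1, 6, 2)]))  # True: run [0,2) then [2,4)
print(schedulable([(0, 3, 2), (0, 3, 2)]))  # False: both cannot finish by t=3
```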


Holiday shipping deadlines: When to ship your gifts this year so they arrive on time

Los Angeles Times

Mail handlers use long tools to drag packages out of a bin onto a conveyor belt for sorting at the Los Angeles Processing & Distribution Center. There is still time to get your packages and gifts shipped to friends and loved ones for the holiday season, but you need to hurry.


CNN-Enabled Scheduling for Probabilistic Real-Time Guarantees in Industrial URLLC

Alqudah, Eman, Khokhar, Ashfaq

arXiv.org Artificial Intelligence

Ensuring packet-level communication quality is vital for ultra-reliable, low-latency communications (URLLC) in large-scale industrial wireless networks. We enhance the Local Deadline Partition (LDP) algorithm by introducing a CNN-based dynamic priority prediction mechanism for improved interference coordination in multi-cell, multi-channel networks. Unlike LDP's static priorities, our approach uses a Convolutional Neural Network and graph coloring to adaptively assign link priorities based on real-time traffic, transmission opportunities, and network conditions. Assuming that the first training phase is performed offline, our approach introduces minimal overhead while enabling more efficient resource allocation, boosting network capacity, SINR, and schedulability. Simulation results show SINR gains of up to 113%, 94%, and 49% over LDP across three network configurations, highlighting its effectiveness for complex URLLC scenarios.
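The graph-coloring step can be illustrated with plain greedy coloring: interfering links share an edge, and colors stand in for non-conflicting transmission slots. The fixed priority order below is a stand-in for the CNN's predicted priorities, which the paper learns rather than hard-codes; the conflict graph is invented.

```python
# Greedy coloring in a given priority order: each link takes the smallest
# color not used by an already-colored interfering neighbor.
def color_links(conflicts, priority_order):
    colors = {}
    for link in priority_order:
        used = {colors[n] for n in conflicts.get(link, []) if n in colors}
        colors[link] = next(c for c in range(len(priority_order)) if c not in used)
    return colors

# Three links; A interferes with B and C, but B and C can share a slot.
conflicts = {"A": ["B", "C"], "B": ["A"], "C": ["A"]}
print(color_links(conflicts, ["A", "B", "C"]))  # {'A': 0, 'B': 1, 'C': 1}
```

Greedy coloring is sensitive to the ordering, which is exactly why a learned priority order can improve on a static one.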


Accelerating Probabilistic Response-Time Analysis: Revised Critical Instant and Optimized Convolution

Takahashi, Hiroto, Yano, Atsushi, Azumi, Takuya

arXiv.org Artificial Intelligence

Accurate estimation of the Worst-Case Deadline Failure Probability (WCDFP) has attracted growing attention as a means to provide safety assurances in complex systems such as robotic platforms and autonomous vehicles. WCDFP quantifies the likelihood of deadline misses under the most pessimistic operating conditions, and safe estimation is essential for dependable real-time applications. However, achieving high accuracy in WCDFP estimation often incurs significant computational cost. Recent studies have revealed that the classical assumption of the critical instant, the activation pattern traditionally considered to trigger the worst-case behavior, can lead to underestimation of WCDFP in probabilistic settings. This observation motivates the use of a revised critical instant formulation that more faithfully captures the true worst-case scenario. This paper investigates convolution-based methods for WCDFP estimation under this revised setting and proposes an optimization technique that accelerates convolution by improving the merge order. Extensive experiments with diverse execution-time distributions demonstrate that the proposed optimized Aggregate Convolution reduces computation time by up to an order of magnitude compared to Sequential Convolution, while retaining accurate and safe-sided WCDFP estimates. These results highlight the potential of the approach to provide both efficiency and reliability in probabilistic timing analysis for safety-critical real-time applications.
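A minimal sketch of convolution-based estimation, assuming independent, discrete execution times (an assumption made here for illustration): the total response time is the convolution of per-job PMFs, and the deadline-failure probability is the mass beyond the deadline. Merging the shortest PMFs first via a heap captures the spirit of optimizing the merge order; it is not the paper's exact technique.

```python
import heapq

def convolve(p, q):
    # PMFs are lists indexed by integer time units.
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def miss_probability(pmfs, deadline):
    # Merge the two smallest-support PMFs first (Huffman-like order) to keep
    # intermediate supports small; k is a tie-breaker so lists never compare.
    heap = [(len(p), k, p) for k, p in enumerate(pmfs)]
    heapq.heapify(heap)
    k = len(pmfs)
    while len(heap) > 1:
        _, _, p = heapq.heappop(heap)
        _, _, q = heapq.heappop(heap)
        r = convolve(p, q)
        heapq.heappush(heap, (len(r), k, r))
        k += 1
    total = heap[0][2]
    return sum(total[deadline + 1:])  # mass strictly after the deadline

# Two jobs, each taking 1 or 2 time units with equal probability.
pmfs = [[0.0, 0.5, 0.5], [0.0, 0.5, 0.5]]
print(miss_probability(pmfs, 3))  # 0.25: miss only when both take 2 units
```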


Bridging Planning and Execution: Multi-Agent Path Finding Under Real-World Deadlines

Yan, Jingtian, Zhou, Shuai, Smith, Stephen F., Li, Jiaoyang

arXiv.org Artificial Intelligence

The Multi-Agent Path Finding (MAPF) problem aims to find collision-free paths for multiple agents while optimizing objectives such as the sum of costs or makespan. MAPF has wide applications in domains like automated warehouses, manufacturing systems, and airport logistics. However, most MAPF formulations assume a simplified robot model for planning, which overlooks execution-time factors such as kinodynamic constraints, communication latency, and controller variability. This gap between planning and execution is problematic for time-sensitive applications. To bridge this gap, we propose REMAP, an execution-informed MAPF planning framework that can be combined with leading search-based MAPF planners with minor changes. Our framework integrates the proposed ExecTimeNet to accurately estimate execution time based on planned paths. We demonstrate our method for solving the MAPF with Real-world Deadlines (MAPF-RD) problem, where agents must reach their goals before a predefined wall-clock time. We integrate our framework with two popular MAPF methods, MAPF-LNS and CBS. Experiments show that REMAP achieves up to 20% improvement in solution quality over baseline methods (e.g., constant execution speed estimators) on benchmark maps with up to 300 agents. The Multi-Agent Path Finding (MAPF) problem seeks to find collision-free paths for multiple agents in a shared environment while optimizing objectives such as the sum of costs or makespan. This problem is a fundamental challenge in settings such as automated warehouses [1], traffic intersections [2], and airport logistics [3]. State-of-the-art MAPF methods can efficiently coordinate hundreds of agents, making MAPF a promising solution for these domains.
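The deadline test in MAPF-RD can be illustrated with the constant-speed baseline mentioned above: a planned path's wall-clock arrival is its length divided by an assumed speed, plus a start offset. ExecTimeNet replaces this crude estimate with a learned one; all numbers below are invented.

```python
# Constant-speed execution-time estimate: the baseline REMAP is compared
# against. A learned estimator would replace the arrival computation.
def meets_deadline(path_length, speed, start_time, deadline):
    arrival = start_time + path_length / speed
    return arrival <= deadline

print(meets_deadline(path_length=30.0, speed=1.5, start_time=2.0, deadline=25.0))  # True: arrival 22.0
print(meets_deadline(path_length=30.0, speed=1.0, start_time=2.0, deadline=25.0))  # False: arrival 32.0
```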


Balancing Suspense and Surprise: Timely Decision Making with Endogenous Information Acquisition

Ahmed M. Alaa, Mihaela Van Der Schaar

Neural Information Processing Systems

We develop a Bayesian model for decision-making under time pressure with endogenous information acquisition. In our model, the decision-maker decides when to observe (costly) information by sampling an underlying continuous-time stochastic process (time series) that conveys information about the potential occurrence/non-occurrence of an adverse event which will terminate the decision-making process. In her attempt to predict the occurrence of the adverse event, the decision-maker follows a policy that determines when to acquire information from the time series (continuation), and when to stop acquiring information and make a final prediction (stopping). We show that the optimal policy has a "rendezvous" structure, i.e. a structure in which whenever a new information sample is gathered from the time series, the optimal "date" for acquiring the next sample becomes computable. The optimal interval between two information samples balances a trade-off between the decision-maker's "surprise", i.e. the drift in her posterior belief after observing new information, and "suspense", i.e. the probability that the adverse event occurs in the time interval between two information samples. Moreover, we characterize the continuation and stopping regions in the decision-maker's state-space, and show that they depend not only on the decision-maker's beliefs, but also on the "context", i.e. the current realization of the time series.
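A toy sequential test in the spirit of this model, assuming a two-hypothesis Gaussian setup that is not from the paper: each (costly) sample triggers a Bayes update, and acquisition stops once the posterior leaves the continuation region. The rendezvous timing of samples is not modeled here; samples simply arrive one at a time, and all numbers are invented.

```python
import math

def posterior_adverse(samples, prior=0.5, mu_adverse=1.0, mu_safe=0.0, sigma=1.0):
    # Log-odds Bayes update for "adverse" vs "safe" Gaussian regimes.
    log_odds = math.log(prior / (1 - prior))
    for x in samples:
        log_odds += ((x - mu_safe) ** 2 - (x - mu_adverse) ** 2) / (2 * sigma ** 2)
    return 1 / (1 + math.exp(-log_odds))

def decide(stream, threshold=0.95):
    seen = []
    for x in stream:  # continuation: acquire one more sample
        seen.append(x)
        p = posterior_adverse(seen)
        if p >= threshold or p <= 1 - threshold:  # stopping region reached
            return ("adverse" if p >= threshold else "safe"), len(seen)
    return "undecided", len(seen)

print(decide([1.2, 0.9, 1.4, 1.1, 0.8, 1.3]))  # ('adverse', 6)
```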


A Switching Framework for Online Interval Scheduling with Predictions

Antoniadis, Antonios, Shahheidar, Ali, Shahkarami, Golnoosh, Soltani, Abolfazl

arXiv.org Artificial Intelligence

We study online interval scheduling in the irrevocable setting, where each interval must be immediately accepted or rejected upon arrival. The objective is to maximize the total length of accepted intervals while ensuring that no two accepted intervals overlap. We consider this problem in a learning-augmented setting, where the algorithm has access to (machine-learned) predictions. The goal is to design algorithms that leverage these predictions to improve performance while maintaining robust guarantees in the presence of prediction errors. Our main contribution is the SemiTrust-and-Switch framework, which provides a unified approach for combining prediction-based and classical interval scheduling algorithms. This framework applies to both deterministic and randomized algorithms and captures the trade-off between consistency (performance under accurate predictions) and robustness (performance under adversarial inputs). Moreover, we provide lower bounds, proving the tightness of this framework in particular settings. We further design a randomized algorithm that smoothly interpolates between prediction-based and robust algorithms. This algorithm achieves both robustness and smoothness--its performance degrades gracefully with the quality of the prediction.
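The switching idea can be sketched as control flow: irrevocably accept intervals while trusting a predicted set, and fall back to a greedy accept-if-it-fits rule once an arrival reveals the prediction missed a compatible interval. This skeleton is an assumption-laden illustration, not the paper's SemiTrust-and-Switch algorithm or its consistency/robustness analysis.

```python
def overlaps(a, b):
    # Half-open intervals (start, end).
    return a[0] < b[1] and b[0] < a[1]

def schedule(arrivals, predicted):
    accepted, trusting = [], True
    predicted = set(predicted)
    for iv in arrivals:
        # Switch trigger: an interval the prediction ignored, yet compatible
        # with everything the prediction planned to take.
        if trusting and iv not in predicted and all(not overlaps(iv, p) for p in predicted):
            trusting = False
        if trusting:
            take = iv in predicted and all(not overlaps(iv, a) for a in accepted)
        else:
            take = all(not overlaps(iv, a) for a in accepted)  # greedy fallback
        if take:
            accepted.append(iv)
    return accepted

arrivals = [(0, 2), (5, 9), (2, 4)]
predicted = [(0, 2), (2, 4)]       # prediction omits the long interval (5, 9)
print(schedule(arrivals, predicted))  # [(0, 2), (5, 9), (2, 4)]
```

Here the arrival of (5, 9) exposes the prediction as incomplete, the algorithm switches, and the greedy fallback still collects all three compatible intervals.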



Hotel adverts banned over misleadingly cheap rooms

BBC News

Adverts by four of Britain's biggest hotel and travel firms have been banned for stating misleading minimum prices for rooms. The Advertising Standards Authority (ASA) upheld complaints against the Hilton hotel group, Travelodge, Booking.com and Accor over their use of eye-catching so-called "from" prices. The watchdog found only a small number of rooms actually available to book at the promoted price and concluded the adverts overstated the deals. It said this was unfair on those looking for good deals or seeking to make informed choices about where to book. ASA operations manager Emily Henwood said: "Advertised prices must match what's really available."