Goto

Collaborating Authors

 slack


Learning (Approximately) Equivariant Networks via Constrained Optimization

Manolache, Andrei, Chamon, Luiz F. O., Niepert, Mathias

arXiv.org Artificial Intelligence

Equivariant neural networks are designed to respect symmetries through their architecture, boosting generalization and sample efficiency when those symmetries are present in the data distribution. Real-world data, however, often departs from perfect symmetry because of noise, structural variation, measurement bias, or other symmetry-breaking effects. Strictly equivariant models may struggle to fit the data, while unconstrained models lack a principled way to leverage partial symmetries. Even when the data is fully symmetric, enforcing equivariance can hurt training by limiting the model to a restricted region of the parameter space. Guided by homotopy principles, where an optimization problem is solved by gradually transforming a simpler problem into a complex one, we introduce Adaptive Constrained Equivariance (ACE), a constrained optimization approach that starts with a flexible, non-equivariant model and gradually reduces its deviation from equivariance. This gradual tightening smooths training early on and settles the model at a data-driven equilibrium, balancing between equivariance and non-equivariance. Across multiple architectures and tasks, our method consistently improves performance metrics, sample efficiency, and robustness to input perturbations compared with strictly equivariant models and heuristic equivariance relaxations.


OpenAI Hires Slack CEO as New Chief Revenue Officer

WIRED

A memo obtained by WIRED confirms Denise Dresser's departure from Slack. She is now headed to OpenAI. Slack CEO Denise Dresser is leaving the company and joining OpenAI as the company's chief revenue officer, multiple sources tell WIRED. Marc Benioff, the chief executive of Salesforce, which owns Slack, shared news of Dresser's departure in a message to staff on Monday evening. At OpenAI, Dresser will manage the company's enterprise unit, which has been growing rapidly this year.


MURMUR: Using cross-user chatter to break collaborative language agents in groups

Patlan, Atharv Singh, Sheng, Peiyao, Hebbar, S. Ashwin, Mittal, Prateek, Viswanath, Pramod

arXiv.org Artificial Intelligence

Language agents are rapidly expanding from single-user assistants to multi-user collaborators in shared workspaces and groups. However, today's language models lack a mechanism for isolating user interactions and concurrent tasks, creating a new attack vector inherent to this new setting: cross-user poisoning (CUP). In a CUP attack, an adversary injects ordinary-looking messages that poison the persistent, shared state, which later triggers the agent to execute unintended, attacker-specified actions on behalf of benign users. We validate CUP on real systems, successfully attacking popular multi-user agents. To study the phenomenon systematically, we present MURMUR, a framework that composes single-user tasks into concurrent, group-based scenarios using an LLM to generate realistic, history-aware user interactions. We observe that CUP attacks succeed at high rates and their effects persist across multiple tasks, thus posing fundamental risks to multi-user LLM deployments. Finally, we introduce a first-step defense with task-based clustering to mitigate this new class of vulnerability


OpenAI Locks Down San Francisco Offices Following Alleged Threat From Activist

WIRED

A message on OpenAI's internal Slack claimed the activist in question had expressed interest in "causing physical harm to OpenAI employees." OpenAI employees in San Francisco were told to stay inside the office on Friday afternoon after the company purportedly received a threat from an individual who was previously associated with the Stop AI activist group. "Our information indicates that [name] from StopAI has expressed interest in causing physical harm to OpenAI employees," a member of the internal communications team wrote on Slack. "He has previously been on site at our San Francisco facilities." Just before 11 am, San Francisco police received a 911 call about a man allegedly making threats and intending to harm others at 550 Terry Francois Boulevard, which is near OpenAI's offices in the Mission Bay neighborhood, according to data tracked by the crime app Citizen.





A General Incentives-Based Framework for Fairness in Multi-agent Resource Allocation

Kumar, Ashwin, Yeoh, William

arXiv.org Artificial Intelligence

We introduce the General Incentives-based Framework for Fairness (GIFF), a novel approach for fair multi-agent resource allocation that infers fair decision-making from standard value functions. In resource-constrained settings, agents optimizing for efficiency often create inequitable outcomes. Our approach leverages the action-value (Q-)function to balance efficiency and fairness without requiring additional training. Specifically, our method computes a local fairness gain for each action and introduces a counterfactual advantage correction term to discourage over-allocation to already well-off agents. This approach is formalized within a centralized control setting, where an arbitrator uses the GIFF-modified Q-values to solve an allocation problem. Empirical evaluations across diverse domains, including dynamic ridesharing, homelessness prevention, and a complex job allocation task-demonstrate that our framework consistently outperforms strong baselines and can discover far-sighted, equitable policies. The framework's effectiveness is supported by a theoretical foundation; we prove its fairness surrogate is a principled lower bound on the true fairness improvement and that its trade-off parameter offers monotonic tuning. Our findings establish GIFF as a robust and principled framework for leveraging standard reinforcement learning components to achieve more equitable outcomes in complex multi-agent systems.


Algorithms for dynamic scheduling in manufacturing, towards digital factories Improving Deadline Feasibility and Responsiveness via Temporal Networks

Hedea, Ioan

arXiv.org Artificial Intelligence

Modern manufacturing systems must meet hard delivery deadlines while coping with stochastic task durations caused by process noise, equipment variability, and human intervention. Traditional deterministic schedules break down when reality deviates from nominal plans, triggering costly last-minute repairs. This thesis combines offline constraint-programming (CP) optimisation with online temporal-network execution to create schedules that remain feasible under worst-case uncertainty. First, we build a CP model of the flexible job-shop with per-job deadline tasks and insert an optimal buffer $Δ^*$ to obtain a fully pro-active baseline. We then translate the resulting plan into a Simple Temporal Network with Uncertainty (STNU) and verify dynamic controllability, which guarantees that a real-time dispatcher can retime activities for every bounded duration realisation without violating resource or deadline constraints. Extensive Monte-Carlo simulations on the open Kacem~1--4 benchmark suite show that our hybrid approach eliminates 100\% of deadline violations observed in state-of-the-art meta-heuristic schedules, while adding only 3--5\% makespan overhead. Scalability experiments confirm that CP solve-times and STNU checks remain sub-second on medium-size instances. The work demonstrates how temporal-network reasoning can bridge the gap between proactive buffering and dynamic robustness, moving industry a step closer to truly digital, self-correcting factories.