d9731321ef4e063ebbee79298fa36f56-AuthorFeedback.pdf
Our analysis provides full distributional information on the joint outputs. A detailed comment for Reviewer #3: Thm. 2 is not difficult to derive, but it is certainly not standard in MF theory. We could repeat our analysis for linear φ and show, e.g., how input correlations propagate; this is rather obvious, however. We agree that we have to extend our literature discussion; yet its main focus is on ResNets and ConvNets.
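To make the linear-φ remark concrete, here is a minimal sketch of the standard mean-field covariance recursion; the values of σ_w², σ_b², the depth, and the initial correlation are illustrative assumptions, not numbers from the paper. With σ_b² = 0 and linear φ, E[φ(z_a)φ(z_b)] = q·c, so the input correlation is a fixed point at every depth:

```python
# Mean-field covariance recursion for a deep net with linear phi(x) = x.
# sigma_w2, sigma_b2, and the initial (q, c) are illustrative assumptions.
sigma_w2, sigma_b2 = 1.0, 0.0
q, c = 1.0, 0.3                         # variance and correlation of two inputs
for layer in range(10):
    qab = sigma_w2 * q * c + sigma_b2   # E[phi(z_a) phi(z_b)] = q * c for linear phi
    q = sigma_w2 * q + sigma_b2         # variance recursion
    c = qab / q                         # correlation stays at 0.3 for every layer
    print(layer, round(c, 6))
```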
d921c3c762b1522c475ac8fc0811bb0f-AuthorFeedback.pdf
We wish to thank all of the reviewers for their time and thorough reading of our paper! Reviewer #1: We appreciate the reviewer's suggestions regarding clarity. Reviewer #2: Addressing the suggested improvements: (1) We have added the suggested summary sentence "the key Last barber on the and left nice was but not whohelped me Very." The outputs start out the same because the initial tokens are identical. We also tried linearizing the hidden state.
Bayesian Intermittent Demand Forecasting for Large Inventories
Matthias W. Seeger, David Salinas, Valentin Flunkert
We present a scalable and robust Bayesian method for demand forecasting in the context of a large e-commerce platform, paying special attention to intermittent and bursty target statistics. Inference is approximated by the Newton-Raphson algorithm, reduced to linear-time Kalman smoothing, which allows us to operate on several orders of magnitude larger problems than previous related work. In a study on large real-world sales datasets, our method outperforms competing approaches on fast and medium moving items.
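As a rough illustration of the inference scheme named in the abstract (Newton-Raphson iterations, each reduced to a linear-time smoothing pass), here is a minimal sketch for a toy local-level model with Poisson counts; the state-space model, tau2, and the Poisson likelihood are illustrative assumptions, not the paper's actual demand model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intermittent count series with a local-level latent state:
#   z_t = z_{t-1} + N(0, tau2),  y_t ~ Poisson(exp(z_t)).
# All model choices here are assumptions for illustration only.
y = rng.poisson(0.5, size=200).astype(float)
tau2 = 0.05

def kalman_smoother(y_work, r, tau2, m0=0.0, p0=10.0):
    """Scalar RTS smoother for z_t = z_{t-1} + N(0,tau2), y_t = z_t + N(0, r_t)."""
    T = len(y_work)
    mf, pf = np.empty(T), np.empty(T)      # filtered means and variances
    m, p = m0, p0
    for t in range(T):
        p += tau2                          # predict
        k = p / (p + r[t])                 # Kalman gain
        m += k * (y_work[t] - m)           # filter update
        p *= 1.0 - k
        mf[t], pf[t] = m, p
    ms = mf.copy()                         # backward Rauch-Tung-Striebel pass
    for t in range(T - 2, -1, -1):
        g = pf[t] / (pf[t] + tau2)
        ms[t] = mf[t] + g * (ms[t + 1] - mf[t])
    return ms

m = np.zeros_like(y)
for it in range(20):                       # Newton-Raphson on the posterior mode
    lam = np.exp(m)
    grad, hess = y - lam, lam              # gradient/curvature of log Poisson likelihood
    y_work = m + grad / hess               # working observations of the linearized model
    r = 1.0 / hess                         # working observation-noise variances
    m_new = kalman_smoother(y_work, r, tau2)   # one Newton step = one O(T) smoother pass
    if np.max(np.abs(m_new - m)) < 1e-8:
        break
    m = m_new
```

Each Newton step costs O(T), which matches the linear-time claim; scaling to a large inventory would, roughly, amount to running one such iteration per item.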
SCaR: Refining Skill Chaining for Long-Horizon Robotic Manipulation via Dual Regularization
Zixuan Chen, Ze Ji, Yang Gao
Long-horizon robotic manipulation tasks typically involve a series of interrelated sub-tasks spanning multiple execution stages. Skill chaining offers a feasible solution for these tasks by pre-training the skills for each sub-task and linking them sequentially. However, imperfections in skill learning or disturbances during execution can lead to the accumulation of errors in the skill chaining process, resulting in execution failures. In this paper, we investigate how to achieve stable and smooth skill chaining for long-horizon robotic manipulation tasks. Specifically, we propose a novel skill chaining framework called Skill Chaining via Dual Regularization (SCaR). This framework applies dual regularization to sub-task skill pre-training and fine-tuning, which not only enhances the intra-skill dependencies within each sub-task skill but also reinforces the inter-skill dependencies between sequential sub-task skills, thus ensuring smooth skill chaining and stable long-horizon execution. We evaluate the SCaR framework on two representative long-horizon robotic manipulation simulation benchmarks: IKEA furniture assembly and kitchen organization. Additionally, we conduct a simple real-world validation in tabletop robot pick-and-place tasks. The experimental results show that, with the support of SCaR, the robot achieves a higher success rate in long-horizon tasks compared to relevant baselines and demonstrates greater robustness to perturbations.
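As a purely schematic sketch of what a "dual regularization" fine-tuning objective could look like, consider the following; the function names (policy_k, init_classifier_next), the imitation base term, and both regularizers are assumptions for illustration, not SCaR's actual losses or interface:

```python
import torch

# Schematic dual-regularization objective: one intra-skill term and one
# inter-skill term on top of a base imitation loss. All names and terms
# are illustrative assumptions, not the paper's method.
def chaining_loss(policy_k, batch, init_classifier_next,
                  lam_intra=1.0, lam_inter=1.0):
    s, a = batch["states"], batch["actions"]
    # Base term: keep the skill close to its pre-trained behavior.
    base = ((policy_k(s) - a) ** 2).mean()
    # Intra-skill regularizer: encourage self-consistent actions along the
    # trajectory so the skill is robust to small perturbations.
    intra = ((policy_k(s[1:]) - policy_k(s[:-1])) ** 2).mean()
    # Inter-skill regularizer: push terminal states into the region where
    # the next skill's initiation classifier is confident.
    p_init = init_classifier_next(batch["terminal_states"]).clamp_min(1e-6)
    inter = -torch.log(p_init).mean()
    return base + lam_intra * intra + lam_inter * inter

# Toy usage with stand-in modules:
policy = torch.nn.Linear(4, 2)
clf = torch.nn.Sequential(torch.nn.Linear(4, 1), torch.nn.Sigmoid())
batch = {"states": torch.randn(16, 4), "actions": torch.randn(16, 2),
         "terminal_states": torch.randn(8, 4)}
chaining_loss(policy, batch, lambda t: clf(t).squeeze(-1)).backward()
```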
Bayesian Strategic Classification
In strategic classification, agents modify their features, at a cost, to obtain a positive classification outcome from the learner's classifier, typically assuming agents have full knowledge of the deployed classifier. In contrast, we consider a Bayesian setting where agents have a common distributional prior on the classifier being used and agents manipulate their features to maximize their expected utility according to this prior. The learner can reveal truthful, yet not necessarily complete, information about the classifier to the agents, aiming to release just enough information to shape the agents' behavior and thus maximize accuracy. We show that partial information release can counter-intuitively benefit the learner's accuracy, allowing qualified agents to pass the classifier while preventing unqualified agents from doing so. Despite the intractability of computing the best response of an agent in the general case, we provide oracle-efficient algorithms for scenarios where the learner's hypothesis class consists of low-dimensional linear classifiers or when the agents' cost function satisfies a sub-modularity condition. Additionally, we address the learner's optimization problem, offering both positive and negative results on determining the optimal information release to maximize expected accuracy, particularly in settings where an agent's qualification can be represented by a real-valued number.
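As a toy illustration of the agent's side of this setting, here is a Monte Carlo sketch of a Bayesian best response under a sampled prior over linear classifiers; the Gaussian prior, the quadratic cost, and the grid search are illustrative assumptions (the paper's contribution is oracle-efficient algorithms, not this brute force):

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo sketch of an agent's Bayesian best response. The prior over
# classifiers, the cost, and the candidate grid are all assumptions.
W = rng.normal(size=(500, 2))            # samples w ~ common prior over classifiers
b = rng.normal(size=500)                 # sampled biases
x = np.array([0.2, -0.4])                # agent's true features

def expected_utility(delta, cost_scale=1.0):
    xp = x + delta
    p_pass = np.mean(W @ xp + b >= 0)    # prior probability of a positive outcome
    return p_pass - cost_scale * np.dot(delta, delta)  # utility minus quadratic cost

grid = np.linspace(-1, 1, 41)
cands = np.array([[u, v] for u in grid for v in grid])
best = max(cands, key=expected_utility)
print("best response delta:", best, "utility:", expected_utility(best))
```

Releasing partial information about the classifier reshapes the sampled prior W, b above, which is how the learner steers which agents find manipulation worthwhile.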
We thank all reviewers for their time and thoughtful comments
We thank all reviewers for their time and thoughtful comments. We did not find other upper bounds with implementations that satisfied conditions (1) and (2). We would be happy to include a discussion of bounds that we believe are promising to explore. This difficulty could be overcome by writing an efficient implementation. Another potentially interesting bound to explore is the "sharpened" version of the bound we use, described in [4].
Meta reportedly replacing human risk assessors with AI
According to new internal documents reviewed by NPR, Meta is allegedly planning to replace human risk assessors with AI, as the company edges closer to complete automation. Historically, Meta has relied on human analysts to evaluate the potential harms posed by new technologies across its platforms, including updates to the algorithm and safety features, as part of a process known as privacy and integrity reviews. But in the near future, these essential assessments may be taken over by bots, as the company looks to automate 90 percent of this work using artificial intelligence. Despite previously stating that AI would only be used to assess "low-risk" releases, Meta is now rolling out use of the tech in decisions on AI safety, youth risk, and integrity, which includes misinformation and violent content moderation, NPR reported. Under the new system, product teams submit questionnaires and receive instant risk decisions and recommendations, with engineers taking on greater decision-making powers.
Linear Causal Representation Learning from Unknown Multi-node Interventions
Despite multifaceted recent advances in interventional causal representation learning (CRL), existing work primarily focuses on the stylized assumption of single-node interventions. This assumption is not valid in a wide range of applications, and generally, the subset of nodes intervened on in an interventional environment is entirely unknown. This paper focuses on interventional CRL under unknown multi-node (UMN) interventional environments and establishes the first identifiability results for general latent causal models (parametric or nonparametric) under stochastic interventions (soft or hard) and a linear transformation from the latent to the observed space. Specifically, it is established that given sufficiently diverse interventional environments, (i) identifiability up to ancestors is possible using only soft interventions, and (ii) perfect identifiability is possible using hard interventions. Remarkably, these guarantees match the best-known results for the more restrictive single-node interventions. Furthermore, CRL algorithms are also provided that achieve the identifiability guarantees. A central step in designing these algorithms is establishing the relationships between UMN interventional CRL and the score functions associated with the statistical models of the different interventional environments; establishing these relationships also serves as constructive proof of the identifiability guarantees.
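As a toy illustration of the score-function connection mentioned above, the sketch below uses the simplest linear Gaussian case (an expository assumption; the paper's guarantees cover general parametric and nonparametric latent models). The chain SCM, the hard intervention on node 1, and the mixing matrix G are all made up; it shows that the difference of score maps between an interventional and the observational environment, pulled back through G, is supported only on the intervened node and its parents:

```python
import numpy as np

# Linear Gaussian illustration: with x = G z and z Gaussian, the observed
# score is s(x) = -Theta x with Theta = G^{-T} Lambda G^{-1}. Intervening on
# a subset of latent nodes changes Lambda only on the corresponding rows and
# columns, so G^T (Theta_env - Theta_obs) G is supported there. The SCM,
# intervention, and G below are toy assumptions.
A = np.array([[0.0, 0.0, 0.0],
              [0.8, 0.0, 0.0],
              [0.0, 0.5, 0.0]])           # z = A z + noise (chain 0 -> 1 -> 2)

def precision(A, noise_var):
    B = np.eye(3) - A                     # z = B^{-1} eps
    return B.T @ np.diag(1.0 / noise_var) @ B

Lam_obs = precision(A, np.ones(3))
A_int = A.copy(); A_int[1, 0] = 0.0       # hard intervention on node 1: cut its parent
Lam_env = precision(A_int, np.array([1.0, 0.3, 1.0]))  # and change its noise

G = np.array([[1.0, 0.2, 0.0],
              [0.0, 1.0, 0.1],
              [0.3, 0.0, 1.0]])           # unknown linear mixing x = G z
Ginv = np.linalg.inv(G)
Theta = lambda L: Ginv.T @ L @ Ginv       # observed-space score map

D = G.T @ (Theta(Lam_env) - Theta(Lam_obs)) @ G
print(np.round(D, 3))   # nonzero only on the intervened node (1) and its parent (0)
```

In practice G is of course unknown; the identifiability arguments run in the other direction, using the structure of such score differences across sufficiently diverse environments to recover the transformation and the intervened nodes.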