12151_differentially_private_general.pdf

Neural Information Processing Systems

Hence, the function over this constraint set is G-Lipschitz. Finally, in Lemma 6, we provide bounds on the excess empirical risk and average regret of gradient descent. Let ℓ be a non-negative, H-smooth convex loss function. Let ŵ := A(S), and let S(i) be the dataset where the i-th data point is replaced by an i.i.d. copy. A.4 High Dimension: Proof of Theorem 2. Let α ≥ 1 be a parameter to be set later.


Structure-Aware Encodings of Argumentation Properties for Clique-width

Mahmood, Yasir, Hecher, Markus, Groven, Johanna, Fichte, Johannes K.

arXiv.org Artificial Intelligence

Structural measures of graphs, such as treewidth, are central tools in computational complexity, yielding efficient algorithms when the parameter is exploited. It is even known that modern SAT solvers work efficiently on instances of small treewidth. Since these solvers are widely applied, there is research interest in compact encodings into (Q)SAT, both for solving and for understanding the limitations of encodings. Even more general is the graph parameter clique-width, which, unlike treewidth, can be small for dense graphs. Although algorithms are available for clique-width, little is known about encodings. We initiate the quest to understand encoding capabilities with clique-width by considering abstract argumentation, a robust framework for reasoning with conflicting arguments. It is based on directed graphs and poses computationally challenging problems, making it a natural candidate for this study. We design novel reductions from argumentation problems to (Q)SAT. Our reductions linearly preserve the clique-width, resulting in directed decomposition-guided (DDG) reductions. We establish novel results for all argumentation semantics, including counting. Notably, the overhead caused by our DDG reductions cannot be significantly improved under reasonable assumptions.
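For contrast with structure-aware reductions, here is a hedged sketch of the folklore (non-structure-aware) SAT encoding of stable semantics; this is an illustration of the baseline, not the paper's DDG reduction. It uses the characterization that a set S is stable iff, for every argument x, x is in S exactly when none of x's attackers is in S, and checks satisfiability by brute force:

```python
from itertools import product

# Folklore SAT encoding of stable semantics (illustration only):
# for each argument x with attackers y_1..y_m, emit clauses encoding
# v_x <-> (not v_{y_1} and ... and not v_{y_m}).
def stable_cnf(args, attacks):
    idx = {a: i + 1 for i, a in enumerate(sorted(args))}
    clauses = []
    for x in sorted(args):
        attackers = [y for (y, z) in attacks if z == x]
        for y in attackers:
            clauses.append([-idx[x], -idx[y]])   # v_x -> not v_y
        clauses.append([idx[x]] + [idx[y] for y in attackers])  # all attackers out -> v_x
    return idx, clauses

def brute_force_sat(nvars, clauses):
    for bits in product([False, True], repeat=nvars):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

# a and b attack each other; both attack c. Stable extensions: {a}, {b}.
idx, cnf = stable_cnf({"a", "b", "c"},
                      [("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")])
model = brute_force_sat(3, cnf)
print([x for x in idx if model[idx[x] - 1]])
```

The point of the paper is precisely that encodings like this ignore the input's structure; a DDG reduction would additionally keep the clique-width of the resulting formula's graph representation linear in that of the AF.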


Imitation Learning in Continuous Action Spaces: Mitigating Compounding Error without Interaction

Zhang, Thomas T., Pfrommer, Daniel, Matni, Nikolai, Simchowitz, Max

arXiv.org Machine Learning

We study the problem of imitating an expert demonstrator in a continuous state-and-action dynamical system. While imitation learning in discrete settings such as autoregressive language modeling has seen immense success and popularity in recent years, imitation in physical settings such as autonomous driving and robot learning has proven comparatively more complex due to the compounding errors problem, often requiring elaborate set-ups to perform stably. Recent work has demonstrated that even in benign settings, exponential compounding errors are unavoidable when learning solely from expert-controlled trajectories, suggesting the need for more advanced policy parameterizations or data augmentation. To this end, we present minimal interventions that provably mitigate compounding errors in continuous state-and-action imitation learning. When the system is open-loop stable, we prescribe "action chunking," i.e., predicting and playing sequences of actions in open-loop; when the system is possibly unstable, we prescribe "noise injection," i.e., adding noise during expert demonstrations. These interventions align with popular choices in modern robot learning, though the benefits we derive are distinct from the effects they were designed to target. Our results draw insights and tools from both control theory and reinforcement learning; however, our analysis reveals novel considerations that do not naturally arise when either literature is considered in isolation.
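The mechanics of action chunking admit a small illustration. The following is a hedged toy sketch (a scalar open-loop-stable linear system with a synthetic imitation-error term; none of it comes from the paper): the learned policy is queried only every H steps, and its predicted H-step action sequence is replayed open-loop, so per-step prediction errors feed back through the state less often.

```python
import numpy as np

# Toy system x_{t+1} = a*x_t + u_t with |a| < 1 (open-loop stable) and an
# expert policy u*(x) = -k*x. The "learned" policy is the expert plus a
# synthetic error drawn at each query. With chunk=True, the policy is
# queried every H steps and H actions are planned open-loop via an
# internal model rollout; with chunk=False, it is queried every step.
rng = np.random.default_rng(0)
a, k, T, H = 0.9, 0.5, 200, 10

def rollout(chunk):
    x, total_dev, plan = 1.0, 0.0, []
    for _ in range(T):
        if not plan:
            err = rng.normal(0.0, 0.05)       # imitation error at query time
            xs = x
            for _ in range(H if chunk else 1):
                u = -k * xs + err             # learned policy ~ expert + error
                plan.append(u)
                xs = a * xs + u               # roll the model forward open-loop
        u = plan.pop(0)
        x = a * x + u
        total_dev += abs(x)                   # expert drives the state to 0
    return total_dev

print("cumulative deviation, per-step queries:", rollout(chunk=False))
print("cumulative deviation, chunked queries :", rollout(chunk=True))
```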


Symmetry-Aware GFlowNets

Kim, Hohyun, Lee, Seunggeun, Oh, Min-hwan

arXiv.org Machine Learning

Generative Flow Networks (GFlowNets) offer a powerful framework for sampling graphs in proportion to their rewards. However, existing approaches suffer from systematic biases due to inaccuracies in state transition probability computations. These biases, rooted in the inherent symmetries of graphs, impact both atom-based and fragment-based generation schemes. To address this challenge, we introduce Symmetry-Aware GFlowNets (SA-GFN), a method that incorporates symmetry corrections into the learning process through reward scaling. By integrating bias correction directly into the reward structure, SA-GFN eliminates the need for explicit state transition computations. Empirical results show that SA-GFN enables unbiased sampling while enhancing diversity and consistently generating high-reward graphs that closely match the target distribution.
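As a hedged illustration of the kind of symmetry correction involved (the exact SA-GFN scaling rule is not reproduced here), note that the number of generation trajectories leading into a graph state is tied to the size of the graph's automorphism group, so rescaling the reward by |Aut(G)| is one way to compensate. A brute-force automorphism count for small graphs:

```python
import itertools

# Count |Aut(G)| by brute force: permutations of the vertex set that map
# the (undirected) edge set onto itself. Feasible only for small graphs.
def automorphism_count(n, edges):
    eset = {frozenset(e) for e in edges}
    count = 0
    for p in itertools.permutations(range(n)):
        if {frozenset((p[u], p[v])) for u, v in edges} == eset:
            count += 1
    return count

# One possible convention (an assumption, not SA-GFN's exact rule):
# rescale the reward by the automorphism count to offset trajectory
# multiplicity in sequential graph generation.
def symmetry_scaled_reward(reward, n, edges):
    return reward * automorphism_count(n, edges)

# Triangle: all 3! = 6 vertex permutations are automorphisms.
print(automorphism_count(3, [(0, 1), (1, 2), (0, 2)]))  # -> 6
# Path 0-1-2: only the identity and the flip 0 <-> 2.
print(automorphism_count(3, [(0, 1), (1, 2)]))          # -> 2
```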


Facets in Argumentation: A Formal Approach to Argument Significance

Fichte, Johannes, Fröhlich, Nicolas, Hecher, Markus, Lagerkvist, Victor, Mahmood, Yasir, Meier, Arne, Persson, Jonathan

arXiv.org Artificial Intelligence

Argumentation is a central subarea of Artificial Intelligence (AI) for modeling and reasoning about arguments. The semantics of abstract argumentation frameworks (AFs) is given by sets of arguments (extensions) and conditions on the relationship between them, such as stable or admissible. Today's solvers implement tasks such as finding extensions, deciding credulous or skeptical acceptance, counting, or enumerating extensions. While these tasks are well charted, the area between decision, counting/enumeration, and fine-grained reasoning has so far required expensive reasoning. We introduce a novel concept (facets) for reasoning between decision and enumeration. Facets are arguments that belong to some extensions (credulous) but not to all extensions (skeptical). They are most natural when a user aims to navigate, filter, or comprehend the significance of specific arguments according to their needs. We study the complexity and show that tasks involving facets are much easier than counting extensions. Finally, we provide an implementation and conduct experiments to demonstrate feasibility.
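The definition lends itself to a direct, exponential-time illustration; the following sketch is our own, not the paper's implementation. It brute-forces the stable extensions of a tiny AF (conflict-free sets that attack every outside argument) and returns the facets, i.e., the credulously but not skeptically accepted arguments:

```python
from itertools import combinations

# Brute force: a set S is a stable extension if it is conflict-free and
# attacks every argument outside S. Exponential in |args|; toy use only.
def stable_extensions(args, attacks):
    atts = set(attacks)
    exts = []
    for r in range(len(args) + 1):
        for combo in combinations(sorted(args), r):
            s = set(combo)
            conflict_free = not any((a, b) in atts for a in s for b in s)
            attacks_rest = all(any((a, b) in atts for a in s)
                               for b in args - s)
            if conflict_free and attacks_rest:
                exts.append(s)
    return exts

# Facets: in some extension (credulous) but not in all (skeptical).
def facets(args, attacks):
    exts = stable_extensions(args, attacks)
    credulous = set().union(*exts) if exts else set()
    skeptical = set.intersection(*exts) if exts else set()
    return credulous - skeptical

# a and b attack each other; both attack c.
A = {"a", "b", "c"}
R = [("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")]
print(stable_extensions(A, R))  # [{'a'}, {'b'}]
print(facets(A, R))             # {'a', 'b'}; c is in no extension
```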


HTC Vive's Focus Vision is a $999 stab at high-end VR and mixed reality

Engadget

HTC Vive is following up its intriguing, yet expensive, XR Elite headset with something that's still quite pricey, the $999 Focus Vision. Built on the same platform as the standalone Vive Focus 3, the upgraded model adds a slew of new features like built-in eye tracking, 16MP stereo color front-facing cameras for mixed reality and automatic IPD adjustment (which makes it easier to share). And with the additional $149 DisplayPort wired streaming kit, gamers can also hook the Focus Vision up to their PCs for more intensive VR experiences. But that's to be expected. While Meta has poured tens of billions into making its Quest headsets cheaper and more accessible, without any need to worry about profitability, HTC Vive has leaned towards making more expensive headsets better suited for business and government work.


STAB: Speech Tokenizer Assessment Benchmark

Vashishth, Shikhar, Singh, Harman, Bharadwaj, Shikhar, Ganapathy, Sriram, Asawaroengchai, Chulayuth, Audhkhasi, Kartik, Rosenberg, Andrew, Bapna, Ankur, Ramabhadran, Bhuvana

arXiv.org Artificial Intelligence

Representing speech as discrete tokens provides a framework for transforming speech into a format that closely resembles text, thus enabling the use of speech as an input to the widely successful large language models (LLMs). Currently, while several speech tokenizers have been proposed, there is ambiguity regarding the properties desired from a tokenizer for specific downstream tasks and regarding its overall generalizability. Evaluating the performance of tokenizers across different downstream tasks is a computationally intensive effort that poses challenges for scalability. To circumvent this requirement, we present STAB (Speech Tokenizer Assessment Benchmark), a systematic evaluation framework designed to assess speech tokenizers comprehensively and shed light on their inherent characteristics. This framework provides a deeper understanding of the underlying mechanisms of speech tokenization, thereby offering a valuable resource for expediting the advancement of future tokenizer models and enabling comparative analysis using a standardized benchmark. We evaluate the STAB metrics and correlate them with downstream task performance across a range of speech tasks and tokenizer choices.
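As a hedged illustration of what a downstream-free tokenizer probe can look like (this is our own example, not one of STAB's published metrics): one cheap check is invariance, i.e., how much the token sequence of an utterance changes under a perturbation of the same audio, scored via normalized edit distance over token IDs.

```python
# Normalized-edit-distance invariance probe for a discrete tokenizer
# (illustrative only; assumes token sequences are already computed).
def levenshtein(a, b):
    # Classic O(len(a)*len(b)) dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def invariance_score(tokens_clean, tokens_perturbed):
    # 1.0 = identical sequences; 0.0 = maximally different.
    d = levenshtein(tokens_clean, tokens_perturbed)
    return 1 - d / max(len(tokens_clean), len(tokens_perturbed), 1)

print(invariance_score([3, 7, 7, 2, 9], [3, 7, 2, 9]))  # ≈ 0.8
```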


Rejection in Abstract Argumentation: Harder Than Acceptance?

Fichte, Johannes K., Hecher, Markus, Mahmood, Yasir, Meier, Arne

arXiv.org Artificial Intelligence

Abstract argumentation is a popular toolkit for modeling, evaluating, and comparing arguments. Relationships between arguments are specified in argumentation frameworks (AFs), and conditions are placed on sets (extensions) of arguments that allow AFs to be evaluated. For more expressiveness, AFs are augmented with acceptance conditions on directly interacting arguments or a constraint on the admissible sets of arguments, resulting in dialectic frameworks or constrained argumentation frameworks. In this paper, we consider flexible conditions for rejecting an argument from an extension, which we call rejection conditions (RCs). On the technical level, we associate each argument with a specific logic program. We analyze the resulting complexity, including the structural parameter treewidth. Rejection AFs are highly expressive, giving rise to natural problems on higher levels of the polynomial hierarchy.


Learning In Reverse Causal Strategic Environments With Ramifications on Two Sided Markets

Somerstep, Seamus, Sun, Yuekai, Ritov, Ya'acov

arXiv.org Machine Learning

Motivated by equilibrium models of labor markets, we develop a formulation of causal strategic classification in which strategic agents can directly manipulate their outcomes. As an application, we compare employers that anticipate the strategic response of a labor force with employers that do not. We show through a combination of theory and experiment that employers with performatively optimal hiring policies improve employer reward, labor force skill level, and in some cases labor force equity. On the other hand, we demonstrate that in other cases performative employers harm labor force utility and fail to prevent discrimination. In many applications of predictive modeling, the model itself may affect the distribution of samples on which it has to make predictions; this problem is known as strategic classification (Hardt et al., 2015; Brückner et al., 2012) or performative prediction (Perdomo et al., 2020). For example, traffic predictions affect route decisions, which ultimately impact traffic. Such situations can arise in a variety of applications; a common theme is that the samples correspond to strategic agents with an incentive to "game the system" and elicit a desired outcome from the model. In the standard strategic classification setup, the agents are allowed to modify their features, but they do not modify the outcome that the predictive model targets. An example of this is spam classification: spammers craft their messages (e.g., their wording) to evade the classifier, while the underlying label remains spam. There is a line of work on causal strategic classification that seeks to generalize this setup by allowing the agents to change both their features and outcomes, usually by incorporating a causal model between the two (Miller et al., 2020; Kleinberg and Raghavan, 2020; Haghtalab et al., 2023; Horowitz and Rosenfeld, 2023).
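A hedged toy model (a scalar linear response rule of our own, not the paper's labor-market formulation) makes the naive-versus-anticipating distinction concrete: a naive employer repeatedly re-optimizes against the distribution its last policy induced, converging to a performatively *stable* point, while an anticipating employer optimizes the performative utility directly.

```python
import numpy as np

# Toy performative setup: the employer picks theta; the population mean
# responds as mu(theta) = mu0 + eps*theta; employer utility is
# U(theta, mu) = -(mu - theta)^2 - lam*theta (a policy cost term).
mu0, eps, lam = 1.0, 0.5, 0.2
mu = lambda th: mu0 + eps * th
U = lambda th, m: -(m - th) ** 2 - lam * th

grid = np.linspace(-5, 5, 100001)   # step 1e-4, contains 1.6 and 1.8 exactly

# Naive employer: repeated retraining against the induced distribution,
# holding mu fixed at each step. Converges to the performatively stable
# point theta = (mu0 - lam/2) / (1 - eps) = 1.8.
th = 0.0
for _ in range(200):
    th = grid[np.argmax(U(grid, mu(th)))]
theta_stable = th

# Anticipating employer: maximize U(theta, mu(theta)) directly, giving
# theta = (mu0 - lam/(2*(1-eps))) / (1 - eps) = 1.6.
theta_perf = grid[np.argmax(U(grid, mu(grid)))]

print(f"stable (naive) policy       : {theta_stable:.2f}")  # -> 1.80
print(f"performatively optimal one  : {theta_perf:.2f}")    # -> 1.60
```

In this toy, the anticipating employer achieves strictly higher performative utility; whether workers are better or worse off depends on the utility one assigns to them, which is exactly the tension the paper studies.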


Almost Surely $\sqrt{T}$ Regret Bound for Adaptive LQR

Lu, Yiwen, Mo, Yilin

arXiv.org Artificial Intelligence

The Linear-Quadratic Regulation (LQR) problem with unknown system parameters has been widely studied, but it has remained unclear whether $\tilde{\mathcal{O}}(\sqrt{T})$ regret, which is the best known dependence on time, can be achieved almost surely. In this paper, we propose an adaptive LQR controller with an almost-sure $\tilde{\mathcal{O}}(\sqrt{T})$ regret upper bound. The controller features a circuit-breaking mechanism, which circumvents potential safety breaches and guarantees the convergence of the system parameter estimate, but is shown to be triggered only finitely often and hence has negligible effect on the asymptotic performance of the controller. The proposed controller is also validated via simulation on the Tennessee Eastman Process (TEP), a commonly used industrial process example.
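As a hedged sketch of the two ingredients on a scalar toy system (our own illustration; not the paper's controller, tuning, or guarantees): certainty-equivalent control from regularized least-squares estimates, plus a circuit breaker that falls back to a known stabilizing gain whenever the state grows too large.

```python
import numpy as np

# Toy adaptive control of x_{t+1} = a*x_t + b*u_t + w_t with (a, b)
# unknown to the controller. A circuit breaker applies a known safe gain
# k_safe whenever |x| exceeds x_bar, otherwise a certainty-equivalent
# gain a_hat/b_hat is used. Exploration noise keeps (a, b) identifiable.
rng = np.random.default_rng(1)
a, b = 1.2, 1.0                 # true parameters, hidden from the controller
T, x_bar = 5000, 10.0           # horizon and circuit-breaker threshold
k_safe = 1.0                    # assumed known a priori: |a - b*k_safe| < 1

x, breaks = 0.0, 0
Z, Y = np.eye(2), np.zeros(2)   # regularized least-squares statistics
for t in range(T):
    a_hat, b_hat = np.linalg.solve(Z, Y)
    if abs(x) > x_bar:                  # circuit breaker trips
        u = -k_safe * x
        breaks += 1
    elif abs(b_hat) > 0.1:              # certainty-equivalent gain
        u = -(a_hat / b_hat) * x
    else:                               # estimates still uninformative
        u = -k_safe * x
    u += 0.3 * rng.normal()             # exploration noise (identifiability)
    x_next = a * x + b * u + rng.normal()
    z = np.array([x, u])
    Z += np.outer(z, z)                 # accumulate least-squares data
    Y += z * x_next
    x = x_next

print("parameter estimates  :", np.linalg.solve(Z, Y))
print("circuit-breaker trips:", breaks)
```

Mirroring the paper's qualitative claim, in runs of this toy the breaker trips rarely once the estimates settle, so it mainly guards the transient phase.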