
Optimal Mistake Bounds for Transductive Online Learning

Chase, Zachary, Hanneke, Steve, Moran, Shay, Shafer, Jonathan

arXiv.org Machine Learning

We resolve a 30-year-old open problem concerning the power of unlabeled data in online learning by tightly quantifying the gap between transductive and standard online learning. In the standard setting, the optimal mistake bound is characterized by the Littlestone dimension $d$ of the concept class $H$ (Littlestone 1987). We prove that in the transductive setting, the mistake bound is at least $\Omega(\sqrt{d})$. This constitutes an exponential improvement over previous lower bounds of $\Omega(\log\log d)$, $\Omega(\sqrt{\log d})$, and $\Omega(\log d)$, due respectively to Ben-David, Kushilevitz, and Mansour (1995, 1997) and Hanneke, Moran, and Shafer (2023). We also show that this lower bound is tight: for every $d$, there exists a class of Littlestone dimension $d$ with transductive mistake bound $O(\sqrt{d})$. Our upper bound also improves upon the best known upper bound of $(2/3)d$ from Ben-David, Kushilevitz, and Mansour (1997). These results establish a quadratic gap between transductive and standard online learning, thereby highlighting the benefit of advance access to the unlabeled instance sequence. This contrasts with the PAC setting, where transductive and standard learning exhibit similar sample complexities.


Model-free filtering in high dimensions via projection and score-based diffusions

Christensen, Sören, Kallsen, Jan, Strauch, Claudia, Trottner, Lukas

arXiv.org Machine Learning

We consider the problem of recovering a latent signal $X$ from its noisy observation $Y$. The unknown law $\mathbb{P}^X$ of $X$, and in particular its support $\mathscr{M}$, are accessible only through a large sample of i.i.d.\ observations. We further assume $\mathscr{M}$ to be a low-dimensional submanifold of a high-dimensional Euclidean space $\mathbb{R}^d$. As a filter or denoiser $\widehat X$, we suggest an estimator of the metric projection $\pi_{\mathscr{M}}(Y)$ of $Y$ onto the manifold $\mathscr{M}$. To compute this estimator, we study an auxiliary semiparametric model in which $Y$ is obtained by adding isotropic Laplace noise to $X$. Using score matching within a corresponding diffusion model, we obtain an estimator of the Bayesian posterior $\mathbb{P}^{X \mid Y}$ in this setup. Our main theoretical results show that, in the limit of high dimension $d$, this posterior $\mathbb{P}^{X\mid Y}$ is concentrated near the desired metric projection $\pi_{\mathscr{M}}(Y)$.
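As a toy illustration of the metric projection $\pi_{\mathscr{M}}(Y)$ that the proposed filter targets, the sketch below takes the unit circle in $\mathbb{R}^2$ as a stand-in manifold. This is purely our illustrative assumption: in the paper $\mathscr{M}$ is unknown, high-dimensional, and the projection is estimated from samples via score matching.

```python
import math

def project_to_circle(y, radius=1.0):
    """Metric projection of a 2-D point y onto the circle of given radius,
    a toy stand-in for the unknown low-dimensional manifold supporting the
    signal law. (Illustrative assumption, not the paper's estimator.)"""
    norm = math.hypot(y[0], y[1])
    if norm == 0.0:
        raise ValueError("projection is not unique at the origin")
    return (radius * y[0] / norm, radius * y[1] / norm)

# A noisy observation off the manifold is filtered back onto it.
denoised = project_to_circle((3.0, 4.0))  # -> (0.6, 0.8)
```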


Evaluating NLP Embedding Models for Handling Science-Specific Symbolic Expressions in Student Texts

Bleckmann, Tom, Tschisgale, Paul

arXiv.org Artificial Intelligence

In recent years, natural language processing (NLP) has become integral to educational data mining, particularly in the analysis of student-generated language products. For research and assessment purposes, so-called embedding models are typically employed to generate numeric representations of text that capture its semantic content for use in subsequent quantitative analyses. Yet when it comes to science-related language, symbolic expressions such as equations and formulas introduce challenges that current embedding models struggle to address. Existing research studies and practical applications often either overlook these challenges or remove symbolic expressions altogether, potentially leading to biased research findings and diminished performance of practical applications. This study therefore explores how contemporary embedding models differ in their capability to process and interpret science-related symbolic expressions. To this end, various embedding models are evaluated using physics-specific symbolic expressions drawn from authentic student responses, with performance assessed via two approaches: 1) similarity-based analyses and 2) integration into a machine learning pipeline. Our findings reveal significant differences in model performance, with OpenAI's GPT-text-embedding-3-large outperforming all other examined models, though its advantage over other models was moderate rather than decisive. Overall, this study underscores, for educational data mining researchers and practitioners, the importance of carefully selecting NLP embedding models when working with science-related language products that include symbolic expressions. The code and (partial) data are available at https://doi.org/10.17605/OSF.IO/6XQVG.
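The similarity-based analyses in 1) can be sketched with toy vectors; the three-dimensional embeddings below are hypothetical placeholders, not output of any of the evaluated models:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: a capable model should place two renderings of
# the same physics expression close together and unrelated text far away.
emb_formula = [0.9, 0.1, 0.3]        # e.g. "E = m c^2"
emb_paraphrase = [0.85, 0.15, 0.35]  # e.g. "E equals m times c squared"
emb_unrelated = [0.1, 0.9, -0.2]     # e.g. off-topic prose

sim_close = cosine_similarity(emb_formula, emb_paraphrase)
sim_far = cosine_similarity(emb_formula, emb_unrelated)
```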


Evaluating GPT- and Reasoning-based Large Language Models on Physics Olympiad Problems: Surpassing Human Performance and Implications for Educational Assessment

Tschisgale, Paul, Maus, Holger, Kieser, Fabian, Kroehs, Ben, Petersen, Stefan, Wulff, Peter

arXiv.org Artificial Intelligence

Large language models (LLMs) are now widely accessible, reaching learners at all educational levels. This development has raised concerns that their use may circumvent essential learning processes and compromise the integrity of established assessment formats. In physics education, where problem solving plays a central role in instruction and assessment, it is therefore essential to understand the physics-specific problem-solving capabilities of LLMs. Such understanding is key to informing responsible and pedagogically sound approaches to integrating LLMs into instruction and assessment. This study therefore compares the problem-solving performance of a general-purpose LLM (GPT-4o, using varying prompting techniques) and a reasoning-optimized model (o1-preview) with that of participants of the German Physics Olympiad, based on a set of well-defined Olympiad problems. In addition to evaluating the correctness of the generated solutions, the study analyzes characteristic strengths and limitations of LLM-generated solutions. The findings of this study indicate that both tested LLMs (GPT-4o and o1-preview) demonstrate advanced problem-solving capabilities on Olympiad-type physics problems, on average outperforming the human participants. Prompting techniques had little effect on GPT-4o's performance, while o1-preview almost consistently outperformed both GPT-4o and the human benchmark. Based on these findings, the study discusses implications for the design of summative and formative assessment in physics education, including how to uphold assessment integrity and support students in critically engaging with LLMs.


DipLLM: Fine-Tuning LLM for Strategic Decision-making in Diplomacy

Xu, Kaixuan, Chai, Jiajun, Li, Sicheng, Fu, Yuqian, Zhu, Yuanheng, Zhao, Dongbin

arXiv.org Artificial Intelligence

Diplomacy is a complex multiplayer game that requires both cooperation and competition, posing significant challenges for AI systems. Traditional methods rely on equilibrium search to generate extensive game data for training, which demands substantial computational resources. Large Language Models (LLMs) offer a promising alternative, leveraging pre-trained knowledge to achieve strong performance with relatively small-scale fine-tuning. However, applying LLMs to Diplomacy remains challenging due to the exponential growth of possible action combinations and the intricate strategic interactions among players. To address this challenge, we propose DipLLM, a fine-tuned LLM-based agent that learns equilibrium policies for Diplomacy. DipLLM employs an autoregressive factorization framework to simplify the complex task of multi-unit action assignment into a sequence of unit-level decisions. By defining an equilibrium policy within this framework as the learning objective, we fine-tune the model using only 1.5% of the data required by the state-of-the-art Cicero model, surpassing its performance. Our results demonstrate the potential of fine-tuned LLMs for tackling complex strategic decision-making in multiplayer games.
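The autoregressive factorization can be sketched as follows. Everything here is an illustrative toy of ours, not DipLLM's actual model: the joint policy over three units' orders is decomposed as p(a1, a2, a3) = p(a1) p(a2 | a1) p(a3 | a1, a2), so each step ranks only |ORDERS| options instead of |ORDERS|**3 joint combinations.

```python
import itertools

ORDERS = ["hold", "move", "support"]

def unit_policy(prev_actions):
    """Toy conditional distribution over the next unit's order, standing in
    for an LLM scoring candidates given the orders chosen so far."""
    weights = {o: (0.5 if o in prev_actions else 1.0) for o in ORDERS}
    z = sum(weights.values())
    return {o: w / z for o, w in weights.items()}

def joint_prob(actions):
    """Probability of a full multi-unit action under the factorization."""
    p = 1.0
    for i, a in enumerate(actions):
        p *= unit_policy(list(actions[:i]))[a]
    return p

# The unit-level conditionals compose into a proper joint distribution.
total = sum(joint_prob(c) for c in itertools.product(ORDERS, repeat=3))
```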


Prediction of steady states in a marine ecosystem model by a machine learning technique

Mahfuz, Sarker Miraz, Slawig, Thomas

arXiv.org Artificial Intelligence

We used precomputed steady states obtained by a spin-up for a global marine ecosystem model as training data to build a mapping from the small number of biogeochemical model parameters onto the three-dimensional converged steady annual cycle. The mapping was performed by a conditional variational autoencoder (CVAE) with mass correction. Applied to test data, we show that the prediction obtained by the CVAE already gives a reasonably good approximation of the steady states obtained by a regular spin-up. However, the predictions do not reach the same level of annual periodicity as those obtained in the original spin-up data. Thus, we took the predictions as initial values for a spin-up. We show that the number of iterations (corresponding to model years) needed to reach a prescribed stopping criterion in the spin-up is significantly reduced compared to the use of the original uniform, constant initial value. The amount of reduction depends on the applied stopping criterion, which measures the periodicity of the solution. The savings in needed iterations and, thus, computing time for the spin-up range from 50 to 95\%, depending on the stopping criterion for the spin-up. We compared these results with the use of the mean of the training data as an initial value. We found that this also accelerates the spin-up, but only by a much lower factor.
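The warm-start effect can be sketched with a scalar fixed-point iteration; this is our illustrative assumption, since the real model iterates a three-dimensional tracer field over model years rather than a single number:

```python
def spin_up(step, x0, tol, max_iter=100000):
    """Iterate one 'model year' at a time until successive annual states
    differ by less than tol; return the steady state and iteration count."""
    x = x0
    for n in range(1, max_iter + 1):
        x_next = step(x)
        if abs(x_next - x) < tol:
            return x_next, n
        x = x_next
    return x, max_iter

# Toy contraction standing in for one model year of the ecosystem model;
# its steady state is x* = 5.0.
step = lambda x: 0.9 * x + 0.5

steady_cold, n_cold = spin_up(step, x0=0.0, tol=1e-6)  # constant initial value
steady_warm, n_warm = spin_up(step, x0=4.9, tol=1e-6)  # CVAE-like warm start
```

An initial value close to the steady state cuts the iteration count substantially, mirroring the savings reported above.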


Deconstructing Subset Construction -- Reducing While Determinizing

Nicol, John, Frohme, Markus

arXiv.org Artificial Intelligence

We present a novel perspective on the NFA canonization problem, which introduces intermediate minimization steps to reduce the exploration space on-the-fly. Essential to our approach are so-called equivalence registries, which manage information about equivalent states and allow for incorporating further optimization techniques such as convexity closures or simulation to boost performance. Due to the generality of our approach, these concepts can be embedded in classic subset construction or Brzozowski's approach. We evaluate our approach on a set of real-world examples from automatic sequences and observe improvements, especially in worst-case scenarios. We implement our approach in an open-source library for users to experiment with.
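For reference, the classic subset construction into which the intermediate minimization steps would be interleaved can be sketched as follows; this is only the plain baseline, with the paper's equivalence registries and convexity/simulation optimizations omitted:

```python
from collections import deque

def subset_construction(alphabet, delta, start, accept):
    """Classic NFA determinization. delta maps (state, symbol) to a set of
    successor states; the resulting DFA states are frozensets of NFA states.
    The paper's approach would minimize on-the-fly inside this loop."""
    start_set = frozenset([start])
    trans, seen, queue = {}, {start_set}, deque([start_set])
    while queue:
        subset = queue.popleft()
        for sym in alphabet:
            succ = frozenset(s for q in subset for s in delta.get((q, sym), ()))
            trans[(subset, sym)] = succ
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    accepting = {s for s in seen if s & accept}
    return seen, trans, accepting

# Tiny NFA accepting words over {a, b} that end in "ab".
delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
dfa_states, dfa_trans, dfa_accept = subset_construction("ab", delta, 0, {2})
```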


Improving Applicability of Deep Learning based Token Classification models during Training

Mehra, Anket, Prieß, Malte, Himstedt, Marian

arXiv.org Artificial Intelligence

This paper shows that further evaluation metrics during model training are needed to decide about its applicability in inference. As an example, a LayoutLM-based model is trained for token classification in documents. The documents are German receipts. We show that conventional classification metrics, represented by the F1-Score in our experiments, are insufficient for evaluating the applicability of machine learning models in practice. To address this problem, we introduce a novel metric, Document Integrity Precision (DIP), as a solution for visual document understanding and the token classification task. To the best of our knowledge, nothing comparable has been introduced in this context. DIP is a rigorous metric, describing how many documents of the test dataset require manual interventions. It enables AI researchers and software developers to conduct an in-depth investigation of the level of process automation in business software. To validate DIP, we conduct experiments with our models across different training settings, analyzing its impact and relevance for deciding whether a model should be deployed. Our results demonstrate that existing metrics barely change for isolated model impairments, whereas DIP indicates that the model requires substantial human interventions in deployment. The larger the set of predicted entities, the less sensitive conventional metrics become, masking poor automation quality. DIP, in contrast, remains a single, interpretable value for entire entity sets. This highlights the importance of having metrics that focus on the business task for model training in production. Since DIP is created for the token classification task, more research is needed to find suitable metrics for other training tasks.
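Since the abstract does not spell out the formula, the sketch below assumes one plausible reading of DIP: the fraction of test documents in which every token is classified correctly, i.e. documents needing no manual intervention. The toy labels are invented for illustration and are not from the paper's dataset:

```python
def document_integrity_precision(docs):
    """Assumed reading of DIP (not necessarily the paper's exact
    definition): fraction of documents whose predicted token labels all
    match the ground truth.
    docs: list of (predicted_labels, true_labels) pairs, one per document."""
    intact = sum(1 for pred, true in docs if pred == true)
    return intact / len(docs)

def token_accuracy(docs):
    """Conventional token-level accuracy over all documents."""
    correct = sum(p == t for pred, true in docs for p, t in zip(pred, true))
    total = sum(len(true) for _, true in docs)
    return correct / total

# Toy receipts: a single wrong token barely moves token-level accuracy,
# yet the whole second document now requires manual intervention.
docs = [
    (["TOTAL", "O", "O", "DATE", "O"], ["TOTAL", "O", "O", "DATE", "O"]),
    (["O", "TOTAL", "O", "O", "O"],    ["O", "TOTAL", "O", "DATE", "O"]),
]
```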


Concept Map Assessment Through Structure Classification

Vossen, Laís P. V., Gasparini, Isabela, Oliveira, Elaine H. T., Czinczel, Berrit, Harms, Ute, Menzel, Lukas, Gombert, Sebastian, Neumann, Knut, Drachsler, Hendrik

arXiv.org Artificial Intelligence

Due to their versatility, concept maps are used in various educational settings and serve as tools that enable educators to comprehend students' knowledge construction. An essential component for analyzing a concept map is its structure, which can be categorized into three distinct types: spoke, network, and chain. Understanding the predominant structure in a map offers insights into the student's depth of comprehension of the subject. Therefore, this study examined 317 distinct concept map structures, classifying them into one of the three types, and used statistical and descriptive information from the maps to train multiclass classification models. As a result, we achieved an 86\% accuracy in classification using a Decision Tree. This promising outcome can be employed in concept map assessment systems to provide real-time feedback to the student.
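The feature-based classification can be sketched with hand-written rules over the same kind of descriptive map statistics; the thresholds below are our assumptions for illustration, not the splits learned by the paper's Decision Tree:

```python
from collections import Counter

def structure_features(edges):
    """Descriptive features of a concept map's link structure:
    node count, link count, and maximum node degree."""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    return {"nodes": len(degree), "links": len(edges),
            "max_degree": max(degree.values())}

def classify_structure(edges):
    """Hand-written stand-in for the trained Decision Tree: spoke if a
    single hub carries every link, chain if the map is a simple path,
    network otherwise."""
    f = structure_features(edges)
    if f["links"] > 1 and f["max_degree"] == f["links"]:
        return "spoke"
    if f["max_degree"] <= 2 and f["links"] == f["nodes"] - 1:
        return "chain"
    return "network"
```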


Graph of Effort: Quantifying Risk of AI Usage for Vulnerability Assessment

Mehra, Anket, Aßmuth, Andreas, Prieß, Malte

arXiv.org Artificial Intelligence

With AI-based software becoming widely available, the risk of exploiting its capabilities, such as high automation and complex pattern recognition, could significantly increase. An AI used offensively to attack non-AI assets is referred to as offensive AI. Current research explores how offensive AI can be utilized and how its usage can be classified. Additionally, methods for threat modeling are being developed for AI-based assets within organizations. However, there are gaps that need to be addressed. First, there is a need to quantify the factors contributing to the AI threat. Second, there is a need for threat models that analyze the risk of being attacked by AI for vulnerability assessment across all assets of an organization. This is particularly crucial and challenging in cloud environments, where sophisticated infrastructure and access control landscapes are prevalent. The ability to quantify and further analyze the threat posed by offensive AI enables analysts to rank vulnerabilities and prioritize the implementation of proactive countermeasures. To address these gaps, this paper introduces the Graph of Effort, an intuitive, flexible, and effective threat modeling method for analyzing the effort required by an adversary to use offensive AI for vulnerability exploitation. While the threat model is functional and provides valuable support, its design choices need further empirical validation in future work.