Alhanahnah, Mohannad
PEA: Enhancing LLM Performance on Computational-Reasoning Tasks
Wang, Zi, Weng, Shiwei, Alhanahnah, Mohannad, Jha, Somesh, Reps, Tom
Large Language Models (LLMs) have exhibited significant generalization capabilities across diverse domains, prompting investigations into their potential as generic reasoning engines. Recent studies have explored inference-time computation techniques [Welleck et al., 2024, Snell et al., 2024], particularly prompt-engineering methods such as Chain-of-Thought (CoT), to enhance LLM performance on complex reasoning tasks [Wei et al., 2022]. These approaches have improved model performance and expanded LLMs' practical applications. However, despite this growing focus on inference-time computation for complex reasoning, the literature lacks a formal framework for precisely describing and characterizing the complexity of reasoning problems. This study identifies a class of reasoning problems, termed computational reasoning problems, such as planning problems and arithmetic games, that are particularly challenging for LLMs [Yao et al., 2023, Hao et al., 2024, Valmeekam et al., 2023]. Informally, these problems can be described accurately by succinct programmatic representations. We propose a formal framework for describing and algorithmically solving these problems; the framework employs first-order logic equipped with efficiently computable predicates and finite domains.
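As a rough illustration of this setting (a sketch only, not the paper's PEA framework), consider the Game of 24, one of the arithmetic games mentioned above. The problem is an existential first-order statement over finite domains: "there exist an ordering of the four numbers and three operators such that the expression evaluates to 24," where expression evaluation is the efficiently computable predicate. A minimal brute-force Python sketch:

```python
from itertools import permutations, product

# Illustrative only: the Game of 24 as an existential statement over
# finite domains (orderings of the numbers, choices of operators),
# checked with an efficiently computable predicate (evaluation).
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else None,  # None guards divide-by-zero
}

def solve_24(nums, target=24, eps=1e-9):
    """Search the finite domain for a witness; return it or None."""
    for a, b, c, d in permutations(nums):
        for o1, o2, o3 in product(OPS, repeat=3):
            # For simplicity, fix one parenthesization: ((a o1 b) o2 c) o3 d.
            x = OPS[o1](a, b)
            if x is None:
                continue
            y = OPS[o2](x, c)
            if y is None:
                continue
            z = OPS[o3](y, d)
            if z is not None and abs(z - target) < eps:
                return f"(({a} {o1} {b}) {o2} {c}) {o3} {d}"
    return None

print(solve_24([1, 2, 3, 4]))  # prints a witness, e.g. ((1 + 2) + 3) * 4
```

The sketch fixes a left-to-right parenthesization to keep the domain small; a full solver would also enumerate expression trees, which remains a finite domain.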
Machine Learning Systems are Bloated and Vulnerable
Zhang, Huaifeng, Ahmed, Fahmi Abdulqadir, Fatih, Dyako, Kitessa, Akayou, Alhanahnah, Mohannad, Leitner, Philipp, Ali-Eldin, Ahmed
Today's software is bloated with code and features that most users never use. This bloat is prevalent across the entire software stack, from the operating system all the way to software backends, frontends, and web pages. In this paper, we focus on analyzing and quantifying bloat in machine learning containers. We develop MMLB, a framework that analyzes bloat in machine learning containers, measuring the amount of bloat at the container and package levels. Our tool quantifies the sources of bloat and integrates with vulnerability-analysis tools to evaluate the impact of bloat on container vulnerabilities. Through experimentation with 15 machine learning containers from TensorFlow, PyTorch, and NVIDIA, we show that bloat is a significant issue, accounting for up to 80% of the container size in some cases. Our results demonstrate that bloat increases container provisioning time by up to 370% and exacerbates vulnerabilities by up to 99%.
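To make the container-level measurement concrete, here is an illustrative sketch (not MMLB's actual interface) of one common way to quantify bloat: run a representative workload, record which files it touches (e.g., via strace or fanotify), and count every untouched byte in the unpacked image as bloat.

```python
import os

def file_sizes(root):
    """Map every regular file under `root` to its size in bytes."""
    sizes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skips broken symlinks
                sizes[path] = os.path.getsize(path)
    return sizes

def bloat_ratio(container_root, used_files):
    """Fraction of bytes in the unpacked container never used by the workload.

    `used_files` is a set of absolute paths observed during tracing
    (how that set is collected is outside this sketch).
    """
    sizes = file_sizes(container_root)
    total = sum(sizes.values())
    unused = sum(sz for path, sz in sizes.items() if path not in used_files)
    return unused / total if total else 0.0

# Hypothetical usage, assuming the image was unpacked to /tmp/unpacked_image
# and `used_files` came from tracing a training or inference run:
# print(f"{bloat_ratio('/tmp/unpacked_image', used_files) * 100:.1f}% bloat")
```

A file-granularity ratio like this corresponds to the container-level view; the package-level view described in the paper would additionally attribute unused files to the packages that installed them.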