Collaborating Authors

 Diallo, Aissatou


Rule-Guided Feedback: Enhancing Reasoning by Enforcing Rule Adherence in Large Language Models

arXiv.org Artificial Intelligence

In this paper, we introduce Rule-Guided Feedback (RGF), a framework designed to enhance Large Language Model (LLM) performance through structured rule adherence and strategic information seeking. RGF implements a teacher-student paradigm in which rule-following is enforced through established guidelines. Our framework employs a Teacher model that rigorously evaluates each student output against task-specific rules, providing constructive guidance rather than direct answers when it detects deviations. This iterative feedback loop serves two crucial purposes: maintaining solutions within defined constraints and encouraging proactive information seeking to resolve uncertainties. We evaluate RGF on diverse tasks including Checkmate-in-One puzzles, Sonnet Writing, Penguins-In-a-Table classification, GSM8k, and StrategyQA. Our findings suggest that structured feedback mechanisms can significantly enhance LLMs' performance across various domains.
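The teacher-student loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rule set, the toy student, and the feedback wording are all hypothetical stand-ins; the key idea is that the Teacher's feedback names the violated rules rather than revealing the answer.

```python
# Hypothetical sketch of a Rule-Guided-Feedback-style loop: a Teacher checks
# a Student's output against task-specific rules and returns guidance (not
# the answer) until the output complies or a round limit is reached.

def check_rules(output, rules):
    """Return the descriptions of all rules the output violates."""
    return [desc for desc, predicate in rules if not predicate(output)]

def rule_guided_feedback(student, rules, task, max_rounds=3):
    prompt = task
    output = student(prompt)
    for _ in range(max_rounds):
        violations = check_rules(output, rules)
        if not violations:
            break  # all task-specific rules satisfied
        # Teacher feedback points at the violated rules, not at the answer.
        feedback = "Revise your answer; it violates: " + "; ".join(violations)
        prompt = f"{task}\n{feedback}"
        output = student(prompt)
    return output

# Toy student that only complies after it has seen feedback.
def toy_student(prompt):
    return "FINAL: 42" if "violates" in prompt else "42"

rules = [("answer must start with 'FINAL:'", lambda o: o.startswith("FINAL:"))]
print(rule_guided_feedback(toy_student, rules, "Compute 6 * 7."))  # FINAL: 42
```

In this toy run the first output breaks the formatting rule, the Teacher's feedback is appended to the prompt, and the second attempt passes the rule check.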


RESPONSE: Benchmarking the Ability of Language Models to Undertake Commonsense Reasoning in Crisis Situation

arXiv.org Artificial Intelligence

An interesting class of commonsense reasoning problems arises when people are faced with natural disasters. To investigate this topic, we present RESPONSE, a human-curated dataset containing 1789 annotated instances featuring 6037 sets of questions designed to assess LLMs' commonsense reasoning in disaster situations across different time frames. The dataset includes problem descriptions, missing resources, time-sensitive solutions, and their justifications, with a subset validated by environmental engineers. Through both automatic metrics and human evaluation, we compare LLM-generated recommendations against human responses. Our findings show that even state-of-the-art models like GPT-4 achieve only 37% human-evaluated correctness for immediate response actions, highlighting significant room for improvement in LLMs' ability for commonsense reasoning in crises.
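Based only on the fields the abstract names (problem description, missing resources, time-sensitive solutions, justifications), one instance might be shaped roughly as below. All field names and values here are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative shape of a single RESPONSE-style instance; the schema is an
# assumption reconstructed from the abstract, not the released format.
instance = {
    "disaster": "flash flood",
    "problem_description": "A family is trapped upstairs as water rises.",
    "missing_resources": ["boat", "working phone"],
    "questions": {
        "immediate": "What should they do in the first hour?",
        "within_days": "What should they do over the next few days?",
    },
    "human_solution": "Signal from a window and move everyone to the "
                      "highest point; do not enter the water.",
    "justification": "Moving water can knock an adult over, and rescue "
                     "teams scan buildings for visual signals.",
}

def to_eval_pair(inst, time_frame, llm_answer):
    """Pair an LLM recommendation with the human reference for judging."""
    return {
        "question": inst["questions"][time_frame],
        "reference": inst["human_solution"],
        "candidate": llm_answer,
    }

pair = to_eval_pair(instance, "immediate", "Climb to the roof and call for help.")
print(pair["question"])  # What should they do in the first hour?
```

The evaluation described in the abstract would then score each `candidate` against its `reference`, either automatically or by human judges.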


IAO Prompting: Making Knowledge Flow Explicit in LLMs through Structured Reasoning Templates

arXiv.org Artificial Intelligence

While Large Language Models (LLMs) demonstrate impressive reasoning capabilities, understanding and validating their knowledge utilization remains challenging. Chain-of-thought (CoT) prompting partially addresses this by revealing intermediate reasoning steps, but the knowledge flow and application remain implicit. We introduce IAO (Input-Action-Output) prompting, a structured template-based method that explicitly models how LLMs access and apply their knowledge during complex reasoning tasks. IAO decomposes problems into sequential steps, each clearly identifying the input knowledge being used, the action being performed, and the resulting output. This structured decomposition enables us to trace knowledge flow, verify factual consistency, and identify potential knowledge gaps or misapplications. Through experiments across diverse reasoning tasks, we demonstrate that IAO not only improves zero-shot performance but also provides transparency in how LLMs leverage their stored knowledge. Human evaluation confirms that this structured approach enhances our ability to verify knowledge utilization and detect potential hallucinations or reasoning errors. Our findings provide insights into both knowledge representation within LLMs and methods for more reliable knowledge application.
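The Input-Action-Output decomposition lends itself to a simple template plus parser, sketched below. The template wording and the parsing helper are assumptions for illustration; the paper's actual prompt may differ.

```python
# Minimal sketch of an IAO-style prompt: every reasoning step is forced into
# an explicit Input / Action / Output triple, which makes the knowledge flow
# parseable and checkable. The template text is an assumed example.
IAO_TEMPLATE = """Solve the problem as a sequence of steps.
For every step, write exactly three lines:
Input: the knowledge or intermediate result this step uses
Action: the operation you perform on that input
Output: the result produced by the action

Problem: {problem}
"""

def parse_iao_steps(response):
    """Group a model response into (input, action, output) triples."""
    triples, current = [], {}
    for line in response.splitlines():
        for field in ("Input", "Action", "Output"):
            if line.startswith(field + ":"):
                current[field] = line.split(":", 1)[1].strip()
        if len(current) == 3:
            triples.append((current["Input"], current["Action"], current["Output"]))
            current = {}
    return triples

demo = """Input: a bat and ball cost $1.10; the bat costs $1.00 more
Action: set up b + (b + 1.00) = 1.10 and solve for b
Output: the ball costs $0.05"""
print(parse_iao_steps(demo))
```

Because each step exposes its input knowledge explicitly, a verifier (human or automated) can check every `Input` line for factual consistency, which is the traceability the abstract claims over free-form chain-of-thought.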


Unsupervised Learning of Graph from Recipes

arXiv.org Artificial Intelligence

Cooking recipes are one of the most readily available kinds of procedural text. They consist of natural language instructions that can be challenging to interpret. In this paper, we propose a model to identify relevant information from recipes and generate a graph to represent the sequence of actions in the recipe. In contrast with other approaches, ours is unsupervised. We iteratively learn the graph structure and the parameters of a GNN encoding the texts (text-to-graph) one sequence at a time, while providing the supervision by decoding the graph into text (graph-to-text) and comparing the generated text to the input. We evaluate the approach by comparing the identified entities with annotated datasets, comparing the difference between the input and output texts, and comparing our generated graphs with those generated by state-of-the-art methods.
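The reconstruction-based supervision described above can be illustrated with trivial stand-ins for the learned components: encode instructions into a graph, decode the graph back to text, and use the round-trip mismatch as the training signal. The `encode`/`decode` functions here are toy placeholders for the paper's GNN encoder and graph-to-text decoder.

```python
# Conceptual sketch of unsupervised text-to-graph learning via a
# reconstruction objective; encode/decode are illustrative stand-ins,
# not the paper's learned models.

def encode(steps):
    """text-to-graph: one node per instruction, edges follow step order."""
    nodes = list(steps)
    edges = [(i, i + 1) for i in range(len(steps) - 1)]
    return nodes, edges

def decode(graph):
    """graph-to-text: walk the edges and read the node labels back out."""
    nodes, edges = graph
    order = [edges[0][0]] + [dst for _, dst in edges] if edges else [0]
    return [nodes[i] for i in order]

def reconstruction_loss(steps):
    """Supervision signal: how many steps fail to round-trip."""
    recovered = decode(encode(steps))
    return sum(a != b for a, b in zip(steps, recovered))

recipe = ["chop onions", "heat oil", "fry onions", "add rice"]
print(reconstruction_loss(recipe))  # 0: this toy round-trip is lossless
```

In the actual method the encoder and decoder are parameterized and the loss compares generated text to the input recipe, so minimizing it shapes the latent graph without any graph annotations.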


PizzaCommonSense: Learning to Model Commonsense Reasoning about Intermediate Steps in Cooking Recipes

arXiv.org Artificial Intelligence

Decoding the core of procedural texts, exemplified by cooking recipes, is crucial for intelligent reasoning and instruction automation. Procedural texts can be comprehensively defined as a sequential chain of steps to accomplish a task employing resources. From a cooking perspective, these instructions can be interpreted as a series of modifications to a food preparation, which initially comprises a set of ingredients. These changes involve transformations of comestible resources. For a model to effectively reason about cooking recipes, it must accurately discern and understand the inputs and outputs of intermediate steps within the recipe. Aiming to address this, we present a new corpus of cooking recipes enriched with descriptions of intermediate steps that explicate the input and output for each step.

(Figure 1: A graphical depiction of the PizzaCommonsense underlying motivation. Models are required to learn knowledge about the input and output of each intermediate step and predict the correct sequencing of these comestibles given the corresponding instructions.)


A Graphical Formalism for Commonsense Reasoning with Recipes

arXiv.org Artificial Intelligence

To address this shortcoming, we propose a high-level representation of recipes as labelled bipartite graphs where the first subset of nodes denotes the comestibles involved in the recipe (ingredients, intermediate food items, final products, i.e. dishes, and by-products) and the second subset of nodes denotes actions on those comestibles. The edges reflect the (possibly partial) sequence of steps taken in the recipe going from the ingredients to final products.

(Outline: ... used for actions and comestibles; Section 3 presents a representation of recipes as bipartite graphs; Section 4 considers the acceptability of recipes; Section 5 presents definitions for comparing recipes; Section 6 presents definitions for composition of recipes from subrecipes; Section 7 presents substitution based on changing the type of nodes; Section 8 presents substitution based on changing the structure of the graph; Section 9 discusses related literature; and Section 10 ...)