CLadder: Assessing Causal Reasoning in Language Models

Neural Information Processing Systems

The ability to perform causal reasoning is widely considered a core feature of intelligence. In this work, we investigate whether large language models (LLMs) can coherently reason about causality. Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules. To address this, we propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al. We compose a large dataset, CLadder, with 10K samples: based on a collection of causal graphs and queries (associational, interventional, and counterfactual), we obtain symbolic questions and ground-truth answers through an oracle causal inference engine. These are then translated into natural language. We evaluate multiple LLMs on our dataset, and we introduce and evaluate a bespoke chain-of-thought prompting strategy, CausalCoT. We show that our task is highly challenging for LLMs, and we conduct an in-depth analysis to gain deeper insight into the causal reasoning abilities of LLMs.
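The distinction between the query rungs the abstract mentions can be made concrete with a toy structural causal model. The sketch below is not the CLadder oracle engine; the graph (a confounder Z influencing both X and Y) and all probabilities are made-up assumptions, chosen only to show how a rung-1 (associational) and a rung-2 (interventional) query about the same variables can give different answers.

```python
# Toy SCM with a confounder: Z -> X, Z -> Y, and X -> Y.
# All probabilities below are illustrative assumptions.
P_Z1 = 0.5                                   # P(Z = 1)
P_X1_GIVEN_Z = {0: 0.2, 1: 0.8}              # P(X = 1 | Z = z)
P_Y1_GIVEN_XZ = {(0, 0): 0.1, (0, 1): 0.4,
                 (1, 0): 0.5, (1, 1): 0.9}   # P(Y = 1 | X = x, Z = z)

def p_z(z):
    return P_Z1 if z == 1 else 1 - P_Z1

def p_x_given_z(x, z):
    p = P_X1_GIVEN_Z[z]
    return p if x == 1 else 1 - p

def p_y_given_xz(y, x, z):
    p = P_Y1_GIVEN_XZ[(x, z)]
    return p if y == 1 else 1 - p

def observational(y, x):
    """Rung 1: P(Y = y | X = x), conditioning on X as merely observed."""
    num = sum(p_z(z) * p_x_given_z(x, z) * p_y_given_xz(y, x, z) for z in (0, 1))
    den = sum(p_z(z) * p_x_given_z(x, z) for z in (0, 1))
    return num / den

def interventional(y, x):
    """Rung 2: P(Y = y | do(X = x)); cutting Z -> X gives backdoor adjustment."""
    return sum(p_z(z) * p_y_given_xz(y, x, z) for z in (0, 1))

print(round(observational(1, 1), 4))   # 0.82
print(round(interventional(1, 1), 4))  # 0.7
```

The gap between 0.82 and 0.7 is exactly the confounding bias that conditioning cannot remove but intervening does; CLadder-style questions test whether an LLM keeps these two quantities apart.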



Convolutional Ladder Networks for Legal NERC and the Impact of Unsupervised Data in Better Generalizations

Cardellino, Cristian (National University of Córdoba) | Alemany, Laura Alonso (National University of Córdoba) | Teruel, Milagro (National University of Córdoba) | Villata, Serena (Université Côte d'Azur) | Marro, Santiago (National University of Córdoba)

AAAI Conferences

In this paper we adapt the semi-supervised deep learning architecture known as Convolutional Ladder Networks from the domain of computer vision and explore how well it works for a semi-supervised Named Entity Recognition and Classification task with legal data. The idea behind exploring a semi-supervised technique is to assess the impact of large amounts of unsupervised data (cheap to obtain) on specific tasks that have little annotated data, in order to develop robust models that are less prone to overfitting. To achieve this, we first check the impact on a task that is easier to measure. We present some preliminary results; however, the experiments carried out show some very interesting insights that foster further research on the topic.
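The core ladder-network idea the abstract relies on is that a supervised loss on the few labeled examples is combined with an unsupervised denoising-reconstruction loss computable on every example, so cheap unlabeled data regularizes the encoder. The sketch below is not the paper's convolutional model; the single linear "layer", the shapes, and the weighting `lam` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # toy encoder weights (input dim 4 -> 3 classes)
V = rng.normal(size=(3, 4))  # toy decoder weights for reconstruction

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def ladder_loss(x, y_onehot=None, noise=0.1, lam=0.5):
    """Supervised cross-entropy (when labels exist) + denoising cost (always)."""
    x_noisy = x + noise * rng.normal(size=x.shape)  # corrupt the input
    h = np.tanh(x_noisy @ W)                        # corrupted encoder pass
    recon = h @ V                                   # decoder recovers clean x
    denoise_cost = np.mean((recon - x) ** 2)
    if y_onehot is None:                            # unlabeled: unsupervised only
        return lam * denoise_cost
    probs = softmax(h)
    ce = -np.mean(np.sum(y_onehot * np.log(probs + 1e-12), axis=-1))
    return ce + lam * denoise_cost

# A few labeled examples and many unlabeled ones contribute to one objective.
x_labeled = rng.normal(size=(2, 4))
y_labeled = np.eye(3)[[0, 2]]
x_unlabeled = rng.normal(size=(8, 4))
total = ladder_loss(x_labeled, y_labeled) + ladder_loss(x_unlabeled)
print(total > 0)  # True
```

Because the denoising term needs no labels, enlarging the unlabeled batch strengthens the regularizer at no annotation cost, which is precisely the trade-off the paper evaluates on legal NERC.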