Furniturewala, Shaz
Turn-Level Empathy Prediction Using Psychological Indicators
Furniturewala, Shaz, Jaidka, Kokil
For the WASSA 2024 Empathy and Personality Prediction Shared Task, we propose a novel turn-level empathy detection method that decomposes empathy into six psychological indicators: Emotional Language, Perspective-Taking, Sympathy and Compassion, Extroversion, Openness, and Agreeableness. A pipeline of text enrichment using a Large Language Model (LLM) followed by DeBERTa fine-tuning demonstrates a significant improvement in Pearson Correlation Coefficient and F1 scores for empathy detection, highlighting the effectiveness of our approach. Our system officially ranked 7th on the CONV-turn track.
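To make the enrichment-then-fine-tuning pipeline concrete, here is a minimal sketch assuming Hugging Face transformers; the enrichment prompt wording, the microsoft/deberta-v3-base checkpoint, and the regression setup are illustrative assumptions, not the system's released configuration.

```python
# Illustrative sketch (not the authors' released code): score a conversation
# turn for empathy by first enriching it with LLM annotations for the six
# psychological indicators, then passing the enriched text to a DeBERTa
# regression head. Prompt wording, checkpoint, and head are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

INDICATORS = [
    "Emotional Language", "Perspective-Taking", "Sympathy and Compassion",
    "Extroversion", "Openness", "Agreeableness",
]

def build_enrichment_prompt(turn: str) -> str:
    """Prompt an LLM to annotate a turn with the six indicators (hypothetical wording)."""
    bullets = "\n".join(f"- {name}" for name in INDICATORS)
    return (
        "Rate the following conversation turn on each psychological indicator "
        f"and briefly justify each rating:\n{bullets}\n\nTurn: {turn}"
    )

# The enriched text (original turn plus LLM annotations) is scored by a
# DeBERTa model with a single regression output for the empathy value.
tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=1, problem_type="regression"
)

def predict_empathy(enriched_turn: str) -> float:
    inputs = tokenizer(enriched_turn, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()
```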
Beyond Text: Leveraging Multi-Task Learning and Cognitive Appraisal Theory for Post-Purchase Intention Analysis
Yeo, Gerard Christopher, Furniturewala, Shaz, Jaidka, Kokil
Natural language processing (NLP) tasks involve predicting outcomes from text, ranging from the implicit attributes of text to the subsequent behavior of the author or the reader. Recent research suggests that user-level features can carry more task-related information than the text itself (Lynn et al., 2019), but these experiments have been conducted in a limited scope. Other studies have explored how the linguistic characteristics of text, such as its politeness […]. Our empirical investigation specifically targets the nuances of purchase behavior, guided by a focus on two critical dimensions as illuminated by Cognitive Appraisal Theory: cognitive appraisals, the multifaceted evaluative processes through which consumers engage with and interpret their interactions with products, including, but not limited to, the novelty and pleasantness of the consumer-product encounter (Yeo and Ong, 2023).
Thinking Fair and Slow: On the Efficacy of Structured Prompts for Debiasing Language Models
Furniturewala, Shaz, Jandial, Surgan, Java, Abhinav, Banerjee, Pragyan, Shahid, Simra, Bhatia, Sumit, Jaidka, Kokil
Existing debiasing techniques are typically training-based or require access to the model's internals and output distributions, making them inaccessible to end-users looking to adapt LLM outputs to their particular needs. In this study, we examine whether structured prompting techniques can offer opportunities for fair text generation. We evaluate a comprehensive, end-user-focused, iterative framework of debiasing that applies System 2 thinking processes in prompts to induce logical, reflective, and critical text generation, with single-step, multi-step, instruction-based, and role-based variants. By systematically evaluating many LLMs across many datasets and different prompting strategies, we show that the more complex System 2-based Implicative Prompts significantly improve over other techniques, demonstrating lower mean bias in the outputs while maintaining competitive performance on the downstream tasks. Our work offers research directions for the design of end-user-focused evaluative frameworks for LLM use.
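The prompt variants below illustrate the kind of structured, deliberative prompting the abstract describes; the exact wording, the base task, and the variant names are assumptions for demonstration, not the paper's released templates.

```python
# Illustrative sketch of structured "System 2" debiasing prompts.
# The base task and the phrasing of each variant are hypothetical examples.
BASE_TASK = "Complete the sentence: 'The nurse said that'"

# Single-step variant: ask the model to reflect before answering, in one prompt.
single_step = (
    "Think slowly and deliberately. Before answering, reflect on whether your "
    f"completion relies on stereotypes about any group.\n\n{BASE_TASK}"
)

# Multi-step (iterative) variant: each element is sent as a separate turn, so
# the model critiques and revises its own draft before committing to an answer.
multi_step = [
    BASE_TASK,
    "List any assumptions about gender, race, or other demographics in your answer.",
    "Rewrite the completion so it does not rely on those assumptions.",
]

# Role-based variant: assign a persona that foregrounds equitable treatment.
role_based = (
    "You are a careful editor who ensures text treats all demographic groups "
    f"equitably.\n\n{BASE_TASK}"
)
```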
All Should Be Equal in the Eyes of Language Models: Counterfactually Aware Fair Text Generation
Banerjee, Pragyan, Java, Abhinav, Jandial, Surgan, Shahid, Simra, Furniturewala, Shaz, Krishnamurthy, Balaji, Bhatia, Sumit
Fairness in Language Models (LMs) remains a long-standing challenge, given the inherent biases in training data that can be perpetuated by models and affect downstream tasks. Recent methods employ expensive retraining or attempt debiasing during inference by constraining model outputs to contrast with a reference set of biased templates or exemplars. Regardless, they do not address the primary goal of fairness: maintaining equitability across different demographic groups. In this work, we posit that prompting an LM to generate unbiased output for one demographic under a given context requires awareness of its outputs for other demographics under the same context. To this end, we propose Counterfactually Aware Fair InferencE (CAFIE), a framework that dynamically compares the model's understanding of diverse demographics to generate more equitable sentences. We conduct an extensive empirical evaluation using base LMs of varying sizes across three diverse datasets and find that CAFIE outperforms strong baselines, producing fairer text and striking the best balance between fairness and language modeling capability.
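A minimal sketch of the counterfactual-awareness idea follows, assuming a simple averaging of next-token distributions across demographic substitutions and a GPT-2 base model; the substitution list, blending rule, and model are illustrative assumptions rather than CAFIE's exact formulation.

```python
# Sketch: compare next-token distributions for the same context with different
# demographic terms substituted, then blend them toward a more consistent
# (and hence more equitable) continuation. Not the paper's exact method.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def next_token_probs(prompt: str) -> torch.Tensor:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]
    return torch.softmax(logits, dim=-1)

context = "The {} worked as a"
groups = ["man", "woman"]  # demographic counterfactuals (illustrative)

# Averaging the per-group distributions is one simple way to keep the
# continuation consistent across counterfactual contexts.
blended = torch.stack([next_token_probs(context.format(g)) for g in groups]).mean(0)
print(tokenizer.decode(int(blended.argmax())))
```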
Rhetorical Role Labeling of Legal Documents using Transformers and Graph Neural Networks
Gupta, Anshika, Furniturewala, Shaz, Kumari, Vijay, Sharma, Yashvardhan
A legal document is usually long and dense, requiring significant human effort to parse. It also contains substantial jargon, which makes deriving insights from it with existing models a poor approach. This paper presents the approaches undertaken to perform rhetorical role labelling on Indian court judgements as part of SemEval Task 6: Understanding Legal Texts, shared sub-task A. We experiment with graph-based approaches such as Graph Convolutional Networks and the Label Propagation Algorithm, as well as transformer-based approaches including variants of BERT, to improve accuracy scores on text classification of complex legal documents.
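As a concrete illustration of the transformer-based route, the sketch below treats rhetorical role labelling as sentence-level classification with a fine-tuned BERT variant; the bert-base-uncased checkpoint and the label set are assumptions for demonstration, not the system's exact configuration.

```python
# Illustrative sketch: sentence-level rhetorical role classification with a
# BERT variant. The checkpoint and the role label set are assumed, not the
# system's released configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ROLE_LABELS = ["PREAMBLE", "FACTS", "ISSUE", "ARGUMENT", "ANALYSIS",
               "PRECEDENT", "STATUTE", "RATIO", "RULING", "NONE"]  # assumed set

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(ROLE_LABELS)
)

def predict_role(sentence: str) -> str:
    inputs = tokenizer(sentence, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return ROLE_LABELS[int(logits.argmax())]

print(predict_role("The appellant was convicted under Section 302 IPC."))
```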