A Comprehensive Dataset for Human vs. AI Generated Text Detection

Roy, Rajarshi, Imanpour, Nasrin, Aziz, Ashhar, Bajpai, Shashwat, Singh, Gurpreet, Biswas, Shwetangshu, Wanaskar, Kapil, Patwa, Parth, Ghosh, Subhankar, Dixit, Shreyas, Pal, Nilesh Ranjan, Rawte, Vipula, Garimella, Ritvik, Jena, Gaytri, Sheth, Amit, Sharma, Vasu, Reganti, Aishwarya Naresh, Jain, Vinija, Chadha, Aman, Das, Amitava

arXiv.org Artificial Intelligence

The rapid advancement of large language models (LLMs) has led to increasingly human-like AI-generated text, raising concerns about content authenticity, misinformation, and trustworthiness. Addressing the challenge of reliably detecting AI-generated text and attributing it to specific models requires large-scale, diverse, and well-annotated datasets. In this work, we present a comprehensive dataset comprising over 58,000 text samples that combine authentic New York Times articles with synthetic versions generated by multiple state-of-the-art LLMs, including Gemma-2-9b, Mistral-7B, Qwen-2-72B, LLaMA-8B, Yi-Large, and GPT-4-o. The dataset provides original article abstracts as prompts alongside the full human-authored narratives. We establish baseline results for two key tasks: distinguishing human-written from AI-generated text, achieving an accuracy of 58.35%, and attributing AI texts to their generating models with an accuracy of 8.92%. By bridging real-world journalistic content with modern generative models, the dataset aims to catalyze the development of robust detection and attribution methods, fostering trust and transparency in the era of generative AI. Our dataset is available at: https://huggingface.co/datasets/gsingh1-py/train.
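To make the binary detection task concrete, here is a minimal baseline sketch that trains a simple bag-of-words classifier on the released data. The "train" split name and the "text"/"label" column names are assumptions, since the abstract does not specify the dataset schema.

```python
# Minimal baseline sketch for Task 1 (human vs. AI detection).
# Assumptions: the dataset has a "train" split and exposes "text" and
# "label" columns -- the abstract does not specify the schema.
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ds = load_dataset("gsingh1-py/train", split="train")
texts, labels = ds["text"], ds["label"]  # assumed column names

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels
)

vectorizer = TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)

preds = clf.predict(vectorizer.transform(X_test))
print(f"Detection accuracy: {accuracy_score(y_test, preds):.4f}")
```

The same pipeline would extend to the attribution task by swapping the binary label for a model-identity label.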


DeHate: A Stable Diffusion-based Multimodal Approach to Mitigate Hate Speech in Images

Dalal, Dwip, Vashishtha, Gautam, Rani, Anku, Reganti, Aishwarya, Patwa, Parth, Sarique, Mohd, Gupta, Chandan, Nath, Keshav, Reddy, Viswanatha, Jain, Vinija, Chadha, Aman, Das, Amitava, Sheth, Amit, Ekbal, Asif

arXiv.org Artificial Intelligence

The rise in harmful online content not only distorts public discourse but also poses significant challenges to maintaining a healthy digital environment. In response, we introduce a multimodal dataset uniquely crafted for identifying hate in digital content. Central to our methodology is the innovative application of watermarked, stability-enhanced Stable Diffusion techniques combined with the Digital Attention Analysis Module (DAAM). This combination pinpoints the hateful elements within images and generates detailed hate attention maps, which are then used to blur those regions, removing the hateful sections of the image. We release this dataset as part of the DeHate shared task, and this paper also describes the details of that task. Furthermore, we present DeHater, a vision-language model designed for multimodal dehatification tasks. Our approach sets a new standard in AI-driven image hate detection given textual prompts, contributing to the development of more ethical AI applications in social media.
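The blurring step lends itself to a short illustration. The sketch below is a minimal approximation rather than the authors' pipeline: it blurs the regions of an image where a given hate attention map exceeds a threshold, with the threshold and blur radius as illustrative choices.

```python
# Minimal sketch of attention-guided blurring: given a hate attention map
# (values in [0, 1]) aligned with the image, blur only the high-attention
# regions. Threshold and radius are illustrative, not published settings.
import numpy as np
from PIL import Image, ImageFilter

def dehatify(image, attention, thresh=0.5, radius=15):
    """Blur regions of `image` where `attention` exceeds `thresh`."""
    blurred = image.filter(ImageFilter.GaussianBlur(radius))
    mask = (attention >= thresh).astype(np.uint8) * 255   # binary 0/255 mask
    mask_img = Image.fromarray(mask, mode="L").resize(image.size)
    # Composite: blurred pixels inside the mask, original pixels elsewhere.
    return Image.composite(blurred, image, mask_img)
```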



Causality and Decision-making: A Logical Framework for Systems and Security Modelling

Chakraborty, Pinaki, Caulfield, Tristan, Pym, David

arXiv.org Artificial Intelligence

Causal reasoning is essential for understanding decision-making about the behaviour of complex `ecosystems' of systems that underpin modern society, with security -- including issues around correctness, safety, resilience, etc. -- typically providing critical examples. We present a theory of strategic reasoning about system modelling based on minimal structural assumptions and employing the methods of transition systems, supported by a modal logic of system states in the tradition of van Benthem, Hennessy, and Milner, and validated through equivalence theorems. Our framework introduces an intervention operator and a separating conjunction to capture actual causal relationships between component systems of the ecosystem, aligning naturally with Halpern and Pearl's counterfactual approach based on Structural Causal Models. We illustrate the applicability of the framework through examples of decision-making about microservices in distributed systems, and we discuss localized decision-making through the separating conjunction. This work unifies a formal, minimalistic notion of system behaviour with a Halpern--Pearl-compatible theory of counterfactual reasoning, providing a logical foundation for studying decision-making about causality in complex interacting systems.
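To make the transition-system setting concrete, here is a toy sketch of a labelled transition system with a Hennessy-Milner-style diamond modality. It illustrates the modelling style only; the paper's intervention operator and separating conjunction are not captured here.

```python
# Toy labelled transition system with a Hennessy-Milner-style diamond
# modality <a>phi: "some a-transition leads to a state satisfying phi".
# Illustrates the modelling style only, not the paper's full calculus.
class LTS:
    def __init__(self, transitions):
        # transitions: state -> {action: set of successor states}
        self.transitions = transitions

    def succ(self, state, action):
        return self.transitions.get(state, {}).get(action, set())

def diamond(lts, action, phi):
    """Return the predicate <action>phi over states."""
    return lambda s: any(phi(t) for t in lts.succ(s, action))

# Example: a microservice that can fail and then be restarted.
lts = LTS({"up": {"fail": {"down"}}, "down": {"restart": {"up"}}})
is_up = lambda s: s == "up"
recoverable = diamond(lts, "fail", diamond(lts, "restart", is_up))
print(recoverable("up"))  # True: after a failure, a restart returns to 'up'
```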


ViSP: A PPO-Driven Framework for Sarcasm Generation with Contrastive Learning

Wang, Changli, Wu, Rui, Yin, Fang

arXiv.org Artificial Intelligence

Human emotions are complex, with sarcasm being a subtle and distinctive form. Despite progress in sarcasm research, sarcasm generation remains underexplored, primarily due to the overreliance on textual modalities and the neglect of visual cues, as well as the mismatch between image content and sarcastic intent in existing datasets. In this paper, we introduce M2SaG, a multimodal sarcasm generation dataset with 4,970 samples, each containing an image, a sarcastic text, and a sarcasm target. To benchmark M2SaG, we propose ViSP, a generation framework that integrates Proximal Policy Optimization (PPO) and contrastive learning. PPO utilizes reward scores from DIP to steer the generation of sarcastic texts, while contrastive learning encourages the model to favor outputs with higher reward scores. These strategies improve overall generation quality and produce texts with more pronounced sarcastic intent. We evaluate ViSP across five metric sets and find it surpasses all baselines, including large language models, underscoring their limitations in sarcasm generation. Furthermore, we analyze the distributions of Sarcasm Scores and Factual Incongruity for both M2SaG and the texts generated by ViSP. The generated texts exhibit higher mean Sarcasm Scores (0.898 vs. 0.770) and Factual Incongruity (0.768 vs. 0.739), demonstrating that ViSP produces higher-quality sarcastic content than the original dataset. Our dataset and code will be released at https://github.com/wclapply/ViSP.
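The reward-guided contrastive idea can be sketched generically. The snippet below implements an InfoNCE-style surrogate that pushes a model's scores toward the candidate generation with the highest external reward; it is an assumption-laden simplification, not the authors' PPO + DIP objective.

```python
# Generic reward-guided contrastive surrogate (PyTorch). Among several
# candidate generations per sample, push the model's scores toward the
# candidate with the highest external reward (e.g., a DIP-style score).
# An illustrative simplification, not the authors' PPO objective.
import torch
import torch.nn.functional as F

def reward_contrastive_loss(candidate_scores, rewards, temperature=0.1):
    """candidate_scores, rewards: tensors of shape (batch, n_candidates)."""
    target = rewards.argmax(dim=-1)      # index of best candidate per sample
    logits = candidate_scores / temperature
    return F.cross_entropy(logits, target)

scores = torch.randn(4, 5, requires_grad=True)  # model scores, 5 candidates
rewards = torch.rand(4, 5)                      # external reward per candidate
loss = reward_contrastive_loss(scores, rewards)
loss.backward()
```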


Decoding Memes: Benchmarking Narrative Role Classification across Multilingual and Multimodal Models

Sharma, Shivam, Chakraborty, Tanmoy

arXiv.org Artificial Intelligence

This work investigates the challenging task of identifying narrative roles - Hero, Villain, Victim, and Other - in Internet memes, across three diverse test sets spanning English and code-mixed (English-Hindi) languages. Building on an annotated dataset originally skewed toward the 'Other' class, we explore a more balanced and linguistically diverse extension, introduced as part of the CLEF 2024 shared task. Comprehensive lexical and structural analyses highlight the nuanced, culture-specific, and context-rich language used in real memes, in contrast to synthetically curated hateful content, which exhibits explicit and repetitive lexical markers. To benchmark the role detection task, we evaluate a wide spectrum of models, including fine-tuned multilingual transformers, sentiment and abuse-aware classifiers, instruction-tuned LLMs, and multimodal vision-language models. Performance is assessed under zero-shot settings using precision, recall, and F1 metrics. We also explore prompt design strategies to guide multimodal models and find that hybrid prompts incorporating structured instructions and role definitions offer marginal yet consistent improvements. Our findings underscore the importance of cultural grounding, prompt engineering, and multimodal reasoning in modelling subtle narrative framings in visual-textual content. Warning: This paper contains potentially harmful and offensive content.

I. Introduction. Social media platforms have become pivotal arenas for rapid information dissemination. However, this openness has also catalysed the proliferation of harmful content - including hate speech, propaganda, and misinformation, often embedded within memes [1], [2]. Memes, with their multimodal structure and cultural resonance, are particularly potent in shaping public opinion and propagating ideologies.
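The zero-shot scoring protocol can be illustrated with a short sketch: free-form model outputs are mapped to one of the four roles and scored with per-class precision, recall, and F1. The keyword-matching parser below is an illustrative heuristic, not the paper's actual answer extraction.

```python
# Sketch of the zero-shot scoring protocol: map free-form model outputs to
# one of the four roles, then report per-class precision/recall/F1.
# The keyword parser is an illustrative heuristic, not the paper's own.
from sklearn.metrics import classification_report

ROLES = ["hero", "villain", "victim", "other"]

def parse_role(model_output):
    lowered = model_output.lower()
    for role in ROLES:
        if role in lowered:
            return role
    return "other"  # fall back to the majority class

gold = ["villain", "victim", "other", "hero"]
raw = ["The entity is a Villain.", "victim", "unclear", "Hero, clearly."]
preds = [parse_role(r) for r in raw]
print(classification_report(gold, preds, labels=ROLES, zero_division=0))
```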


Can LLMs reason over extended multilingual contexts? Towards long-context evaluation beyond retrieval and haystacks

Hengle, Amey, Bajpai, Prasoon, Dan, Soham, Chakraborty, Tanmoy

arXiv.org Artificial Intelligence

Existing multilingual long-context benchmarks, often based on the popular needle-in-a-haystack test, primarily evaluate a model's ability to locate specific information buried within irrelevant texts. However, such a retrieval-centric approach is myopic and inherently limited, as successful recall alone does not indicate a model's capacity to reason over extended contexts. Moreover, these benchmarks are susceptible to data leakage and short-circuiting, and they risk making the evaluation a priori identifiable. To address these limitations, we introduce MLRBench, a new synthetic benchmark for multilingual long-context reasoning. Unlike existing benchmarks, MLRBench goes beyond surface-level retrieval by including tasks that assess multi-hop inference, aggregation, and epistemic reasoning. Spanning seven languages, MLRBench is designed to be parallel, resistant to leakage, and scalable to arbitrary context lengths. Our extensive experiments with an open-weight large language model (LLM) reveal a pronounced gap between high- and low-resource languages, particularly for tasks requiring the model to aggregate multiple facts or predict the absence of information. We also find that, in multilingual settings, LLMs effectively utilize less than 30% of their claimed context length. Although off-the-shelf Retrieval Augmented Generation helps alleviate this to a certain extent, it does not solve the long-context problem. We open-source MLRBench to enable future research in improved evaluation and training of multilingual LLMs.
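A minimal sketch of how such a multi-hop item might be synthesised: two linked facts are planted at random positions in a long distractor context, and the question requires chaining them. The names and templates are invented for illustration; the abstract does not describe MLRBench's generation pipeline at this level of detail.

```python
# Sketch of synthesising one multi-hop item: plant two linked facts at
# random positions in a long distractor context; answering requires
# chaining both. Names and templates are invented for illustration.
import random

def make_multihop_item(n_distractors=2000, seed=0):
    rng = random.Random(seed)
    facts = [
        "Mira moved to Tallinn in 2019.",
        "The person who moved to Tallinn in 2019 works as a glassblower.",
    ]
    context = [f"Filler sentence {i} about an unrelated topic."
               for i in range(n_distractors)]
    for fact in facts:
        context.insert(rng.randrange(len(context)), fact)
    question = "What is Mira's occupation?"  # needs both facts to answer
    return " ".join(context), question, "glassblower"

context, question, answer = make_multihop_item()
print(f"{len(context.split())} words; Q: {question} A: {answer}")
```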


Will I Get Hate Speech? Predicting the Volume of Abusive Replies before Posting in Social Media

Alharthi, Raneem, Alharthi, Rajwa, Shekhar, Ravi, Jiang, Aiqi, Zubiaga, Arkaitz

arXiv.org Artificial Intelligence

Despite the growing body of research tackling offensive language in social media, this research is predominantly reactive, determining whether content already posted in social media is abusive. There is a gap in predictive approaches, which we address in our study by predicting the volume of abusive replies a tweet will receive after being posted. We formulate the problem from the perspective of a social media user asking: "if I post a certain message on social media, is it possible to predict the volume of abusive replies it might receive?" We look at four types of features, namely text, text metadata, tweet metadata, and account features, which also help us understand the extent to which the user or the content helps predict the number of abusive replies. This, in turn, helps us develop a model to support social media users in finding the best way to post content. One of our objectives is also to determine the extent to which the volume of abusive replies that a tweet will get is motivated by the content of the tweet or by the identity of the user posting it. Our study finds that one can build a model that performs competitively by developing a comprehensive set of features derived from the content of the message that is going to be posted. In addition, our study suggests that features derived from the user's identity do not impact model performance, suggesting that it is primarily the content of a post, rather than who the user is, that triggers abusive replies.
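A content-only predictor of the kind the findings point to can be sketched briefly: TF-IDF features of the draft message regressed onto observed abusive-reply counts. The toy data and model choice below are illustrative, not the paper's exact setup.

```python
# Content-only predictor sketch: TF-IDF features of the draft message
# regressed onto observed abusive-reply counts. Toy data; illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

tweets = ["example draft tweet one", "another draft tweet"]  # messages to post
abusive_reply_counts = [3, 0]                                # observed volumes

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(tweets, abusive_reply_counts)
print(model.predict(["a new message before posting"]))
```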