

Confidence-Modulated Speculative Decoding for Large Language Models

Sen, Jaydip, Dasgupta, Subhasis, Waghela, Hetvi

arXiv.org Artificial Intelligence

Speculative decoding has emerged as an effective approach for accelerating autoregressive inference by parallelizing token generation through a draft-then-verify paradigm. However, existing methods rely on static drafting lengths and rigid verification criteria, limiting their adaptability across varying model uncertainties and input complexities. This paper proposes an information-theoretic framework for speculative decoding based on confidence-modulated drafting. By leveraging entropy and margin-based uncertainty measures over the drafter's output distribution, the proposed method dynamically adjusts the number of speculatively generated tokens at each iteration. This adaptive mechanism reduces rollback frequency, improves resource utilization, and maintains output fidelity. Additionally, the verification process is modulated using the same confidence signals, enabling more flexible acceptance of drafted tokens without sacrificing generation quality. Experiments on machine translation and summarization tasks demonstrate significant speedups over standard speculative decoding while preserving or improving BLEU and ROUGE scores. The proposed approach offers a principled, plug-in method for efficient and robust decoding in large language models under varying conditions of uncertainty.

Keywords: Speculative Decoding, Autoregressive Models, Confidence Estimation, Adaptive Inference, Entropy-Based Drafting, Sequence Generation, Large Language Models (LLMs), Information-Theoretic Decoding.

The task of sequence generation lies at the heart of numerous applications in natural language processing, including machine translation, text summarization, dialogue generation, and code synthesis. In the overwhelming majority of these applications, autoregressive (AR) decoding remains the dominant paradigm for generating sequences from a probabilistic language model [1-2].
Autoregressive models, particularly those based on the Transformer architecture, operate by predicting each token conditioned on the entire history of previously generated tokens. This left-to-right decoding strategy, though optimal in terms of likelihood estimation, suffers from a fundamental limitation: the inherently sequential nature of generation prohibits efficient parallelization, severely hindering inference throughput, especially in latency-sensitive deployment scenarios.
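The core mechanism the abstract describes, using drafter entropy to set the speculative draft length, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear confidence schedule and the bounds k_min and k_max are assumptions chosen for clarity.

```python
import math

def draft_length(probs, k_min=1, k_max=8):
    """Map drafter confidence to a speculative draft length.

    Low entropy (confident drafter) -> longer draft; high entropy
    (uncertain drafter) -> shorter draft, so fewer tokens risk rollback.
    The linear schedule and [k_min, k_max] bounds are illustrative only.
    """
    entropy = -sum(p * math.log(p) for p in probs if p > 0.0)
    max_entropy = math.log(len(probs))          # entropy of a uniform distribution
    confidence = 1.0 - entropy / max_entropy    # 1 = fully certain, 0 = uniform
    return k_min + round(confidence * (k_max - k_min))
```

A fully peaked distribution yields the maximum draft length, while a uniform one falls back to drafting a single token; a margin-based variant would substitute (top-1 minus top-2 probability) for the entropy term.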


Predicting Road Crossing Behaviour using Pose Detection and Sequence Modelling

Dasgupta, Subhasis, Saha, Preetam, Roy, Agniva, Sen, Jaydip

arXiv.org Artificial Intelligence

The world is rapidly advancing toward a future where artificial intelligence (AI) takes a central role in many everyday activities. In business, for example, robots have become indispensable in manufacturing processes and warehouse management. These robots efficiently handle tasks such as stacking and removing items, optimizing various business operations. In aviation, autopilot systems have been a standard feature in airplanes for many years, enhancing flight safety and efficiency. Similarly, in many developed countries, vehicles equipped with autopilot capabilities are becoming increasingly common. These self-driving vehicles are designed with an array of sensors and high-resolution cameras to monitor their surroundings, detect objects, and take necessary actions to prevent collisions or accidents. While these autonomous vehicles perform admirably on highways where the primary concern is other vehicles, they face significant challenges in busy urban environments. In such settings, it is often advisable for drivers to switch from autopilot to manual control. This is particularly crucial in bustling market areas where pedestrian behaviour can be unpredictable.


Hierarchical Verification of Speculative Beams for Accelerating LLM Inference

Sen, Jaydip, Puvvala, Harshitha, Dasgupta, Subhasis

arXiv.org Artificial Intelligence

Large language models (LLMs) have achieved remarkable success across diverse natural language processing tasks but face persistent challenges in inference efficiency due to their autoregressive nature. While speculative decoding and beam sampling offer notable improvements, traditional methods verify draft sequences sequentially without prioritization, leading to unnecessary computational overhead. This work proposes the Hierarchical Verification Tree (HVT), a novel framework that restructures speculative beam decoding by prioritizing high-likelihood drafts and enabling early pruning of suboptimal candidates. Theoretical foundations and a formal verification-pruning algorithm are developed to ensure correctness and efficiency. Integration with standard LLM inference pipelines is achieved without requiring retraining or architecture modification. Experimental evaluations across multiple datasets and models demonstrate that HVT consistently outperforms existing speculative decoding schemes, achieving substantial reductions in inference time and energy consumption while maintaining or enhancing output quality. The findings highlight the potential of hierarchical verification strategies as a new direction for accelerating large language model inference.
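The prioritization idea the abstract describes, verifying high-likelihood drafts first so weaker candidates can be pruned without ever being checked, can be sketched with a priority queue. This is a simplification under stated assumptions: HVT organizes candidates in a tree, whereas the sketch below uses a flat heap, and the `accept` callback stands in for the target model's verification pass.

```python
import heapq

def verify_beams(beams, accept):
    """Verify speculative beams in descending likelihood order.

    beams:  list of (log_prob, tokens) draft candidates.
    accept: stand-in for the target model's verification check.
    Once a high-scoring beam is accepted, all lower-scoring beams are
    pruned unverified, saving the sequential checks a naive scheme pays.
    Returns (accepted_tokens_or_None, number_of_verifications_performed).
    """
    heap = [(-lp, tokens) for lp, tokens in beams]  # negate for a max-heap
    heapq.heapify(heap)
    checked = 0
    while heap:
        _, tokens = heapq.heappop(heap)
        checked += 1
        if accept(tokens):
            return tokens, checked
    return None, checked
```

With three drafts where the most likely one is acceptable, only a single verification is performed instead of three.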


Multi-Amateur Contrastive Decoding for Text Generation

Sen, Jaydip, Dasgupta, Subhasis, Waghela, Hetvi

arXiv.org Artificial Intelligence

Contrastive Decoding (CD) has emerged as an effective inference-time strategy for enhancing open-ended text generation by exploiting the divergence in output probabilities between a large expert language model and a smaller amateur model. Although CD improves coherence and fluency, its dependence on a single amateur restricts its capacity to capture the diverse and multifaceted failure modes of language generation, such as repetition, hallucination, and stylistic drift. This paper proposes Multi-Amateur Contrastive Decoding (MACD), a generalization of the CD framework that employs an ensemble of amateur models to more comprehensively characterize undesirable generation patterns. MACD integrates contrastive signals through both averaging and consensus penalization mechanisms and extends the plausibility constraint to operate effectively in the multi-amateur setting. Furthermore, the framework enables controllable generation by incorporating amateurs with targeted stylistic or content biases. Experimental results across multiple domains, such as news, encyclopedic, and narrative, demonstrate that MACD consistently surpasses conventional decoding methods and the original CD approach in terms of fluency, coherence, diversity, and adaptability, all without requiring additional training or fine-tuning.
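The two ensemble mechanisms the abstract names, averaging and consensus penalization, together with the plausibility constraint, can be sketched at the level of per-token log-probabilities. This is an illustrative reading of the framework, not the paper's code; the `alpha` plausibility cutoff follows the original CD convention, and the "consensus" variant here (penalizing by the worst-case amateur) is one plausible interpretation.

```python
import math

def macd_scores(expert_logp, amateur_logps, alpha=0.1, mode="avg"):
    """Contrastive token scores against an ensemble of amateur models.

    expert_logp:  dict token -> expert log-probability.
    amateur_logps: list of such dicts, one per amateur model.
    Plausibility constraint: tokens more than log(alpha) below the
    expert's best token are excluded (scored -inf), as in standard CD.
    """
    best = max(expert_logp.values())
    scores = {}
    for tok, lp in expert_logp.items():
        if lp < best + math.log(alpha):      # fails plausibility check
            scores[tok] = float("-inf")
            continue
        penalties = [a[tok] for a in amateur_logps]
        if mode == "avg":                    # average amateur log-probs
            penalty = sum(penalties) / len(penalties)
        else:                                # consensus: worst-case amateur
            penalty = max(penalties)
        scores[tok] = lp - penalty           # expert minus amateur signal
    return scores
```

A token the amateurs also rate highly (a likely degenerate continuation) is penalized, while a plausible token the amateurs dislike is promoted.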


Determination Of Structural Cracks Using Deep Learning Frameworks

Dasgupta, Subhasis, Sen, Jaydip, Halder, Tuhina

arXiv.org Artificial Intelligence

Structural crack detection is a critical task for public safety as it helps in preventing potential structural failures that could endanger lives. Manual detection by inexperienced personnel can be slow, inconsistent, and prone to human error, which may compromise the reliability of assessments. The current study addresses these challenges by introducing a novel deep-learning architecture designed to enhance the accuracy and efficiency of structural crack detection. In this research, various configurations of residual U-Net models were utilized. These models, due to their robustness in capturing fine details, were further integrated into an ensemble with a meta-model comprising convolutional blocks. This unique combination aimed to boost prediction efficiency beyond what individual models could achieve. The ensemble's performance was evaluated against well-established architectures such as SegNet and the traditional U-Net. Results demonstrated that the residual U-Net models outperformed their predecessors, particularly with low-resolution imagery, and the ensemble model exceeded the performance of individual models, proving it the most effective. The assessment was based on the Intersection over Union (IoU) metric and the DICE coefficient. The ensemble model achieved the highest scores, signifying superior accuracy. This advancement suggests a path toward more reliable automated systems for structural defect monitoring tasks.
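The two evaluation metrics the abstract relies on, IoU and the Dice coefficient, can be computed directly from binary masks. A minimal sketch, with masks flattened to 0/1 sequences for clarity rather than kept as 2-D arrays:

```python
def iou_and_dice(pred, target):
    """Intersection-over-Union and Dice coefficient for binary masks.

    pred, target: same-length sequences of 0/1 pixel labels.
    IoU  = |P ∩ T| / |P ∪ T|
    Dice = 2|P ∩ T| / (|P| + |T|)
    Empty masks on both sides are scored as a perfect match.
    """
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    iou = inter / union if union else 1.0
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```

Dice weights the overlap more generously than IoU (Dice = 2·IoU / (1 + IoU)), which is why segmentation papers commonly report both.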


Advancing Decoding Strategies: Enhancements in Locally Typical Sampling for LLMs

Sen, Jaydip, Sengupta, Saptarshi, Dasgupta, Subhasis

arXiv.org Artificial Intelligence

This chapter explores advancements in decoding strategies for large language models (LLMs), focusing on enhancing the Locally Typical Sampling (LTS) algorithm. Traditional decoding methods, such as top-k and nucleus sampling, often struggle to balance fluency, diversity, and coherence in text generation. To address these challenges, Adaptive Semantic-Aware Typicality Sampling (ASTS) is proposed as an improved version of LTS, incorporating dynamic entropy thresholding, multi-objective scoring, and reward-penalty adjustments. ASTS ensures contextually coherent and diverse text generation while maintaining computational efficiency. Its performance is evaluated across multiple benchmarks, including story generation and abstractive summarization, using metrics such as perplexity, MAUVE, and diversity scores. Experimental results demonstrate that ASTS outperforms existing sampling techniques by reducing repetition, enhancing semantic alignment, and improving fluency.

Keywords: Locally Typical Sampling, Adaptive Semantic-Aware Typicality Sampling (ASTS), Decoding Strategies, Large Language Models (LLMs), Entropy-Based Sampling, Multi-Objective Scoring.
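The baseline that ASTS builds on, plain Locally Typical Sampling, keeps the tokens whose surprisal is closest to the distribution's entropy until a target mass is reached. A sketch of that baseline follows; the dynamic entropy thresholding and multi-objective scoring that ASTS adds on top are not reproduced here, and the cutoff `tau` is the standard LTS mass parameter.

```python
import math

def typical_filter(probs, tau=0.9):
    """Locally typical sampling filter: return indices of the kept tokens.

    Tokens are ranked by |surprisal - entropy| (most "typical" first)
    and accumulated until their total probability mass reaches tau.
    Sampling then proceeds from the renormalized kept set.
    """
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    ranked = sorted(
        range(len(probs)),
        key=lambda i: abs(-math.log(probs[i]) - entropy) if probs[i] > 0
        else float("inf"),
    )
    kept, mass = [], 0.0
    for i in ranked:
        kept.append(i)
        mass += probs[i]
        if mass >= tau:
            break
    return kept
```

Note that, unlike top-k, the most probable token is not necessarily ranked first: a very peaked head token can be *less* typical than a mid-probability one when the entropy is moderate.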


Adversarial Text Generation with Dynamic Contextual Perturbation

Waghela, Hetvi, Sen, Jaydip, Rakshit, Sneha, Dasgupta, Subhasis

arXiv.org Artificial Intelligence

Adversarial attacks on Natural Language Processing (NLP) models expose vulnerabilities by introducing subtle perturbations to input text, often leading to misclassification while maintaining human readability. Existing methods typically focus on word-level or local text segment alterations, overlooking the broader context, which results in detectable or semantically inconsistent perturbations. We propose a novel adversarial text attack scheme named Dynamic Contextual Perturbation (DCP). DCP dynamically generates context-aware perturbations across sentences, paragraphs, and documents, ensuring semantic fidelity and fluency. Leveraging the capabilities of pre-trained language models, DCP iteratively refines perturbations through an adversarial objective function that balances the dual objectives of inducing model misclassification and preserving the naturalness of the text. This comprehensive approach allows DCP to produce more sophisticated and effective adversarial examples that better mimic natural language patterns. Our experimental results, conducted on various NLP models and datasets, demonstrate the efficacy of DCP in challenging the robustness of state-of-the-art NLP systems. By integrating dynamic contextual analysis, DCP significantly enhances the subtlety and impact of adversarial attacks. This study highlights the critical role of context in adversarial attacks and lays the groundwork for creating more robust NLP systems capable of withstanding sophisticated adversarial strategies.


Context-Enhanced Contrastive Search for Improved LLM Text Generation

Sen, Jaydip, Pandey, Rohit, Waghela, Hetvi

arXiv.org Artificial Intelligence

Recently, Large Language Models (LLMs) have demonstrated remarkable advancements in Natural Language Processing (NLP). However, generating high-quality text that balances coherence, diversity, and relevance remains challenging. Traditional decoding methods, such as beam search and top-k sampling, often struggle with either repetitive or incoherent outputs, particularly in tasks that require long-form text generation. To address these limitations, the paper proposes a novel enhancement of the well-known Contrastive Search algorithm, Context-Enhanced Contrastive Search (CECS) with contextual calibration. The proposed scheme introduces several novelties, including dynamic contextual importance weighting, multi-level Contrastive Search, and adaptive temperature control, to optimize the balance between fluency, creativity, and precision. The performance of CECS is evaluated using several standard metrics such as BLEU, ROUGE, and semantic similarity. Experimental results demonstrate significant improvements in both the coherence and relevance of the texts generated by CECS, outperforming existing Contrastive Search techniques. The proposed algorithm has several potential real-world applications, including legal document drafting, customer service chatbots, and content marketing.

In recent years, Large Language Models (LLMs) have transformed the field of Natural Language Processing (NLP), delivering cutting-edge performance across numerous tasks, including text generation, summarization, machine translation, and question answering. Models such as OpenAI's GPT-3 [1], Google's BERT [2], and more recently PaLM [3], have greatly enhanced the capabilities of machines in understanding and generating human language. By leveraging deep neural network architectures and training on extensive datasets, LLMs have made significant strides in producing fluent and coherent text that closely resembles human communication.

Generating text from an LLM involves more than simply predicting the next word in a sequence according to its probability distribution. This step, known as decoding, plays a critical role in shaping the final output. Various decoding strategies have been proposed in the literature, ranging from deterministic methods such as beam search to stochastic methods like top-k and nucleus sampling. While the deterministic methods choose the highest-probability token at each step, their stochastic counterparts introduce randomness to improve diversity in the generated output.
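The baseline that CECS extends, plain contrastive search, selects each token by trading model confidence against a degeneration penalty (the candidate's maximum similarity to previously generated tokens). A minimal sketch of that baseline follows; CECS's contextual importance weighting and adaptive temperature are additional mechanisms not shown here, and the candidate-tuple representation is an assumption made for clarity.

```python
def contrastive_search_step(candidates, history, alpha=0.6):
    """One contrastive-search step over top-k candidates.

    candidates: list of (token, model_prob, embedding) tuples.
    history:    embeddings of previously generated tokens.
    Score = (1 - alpha) * confidence - alpha * degeneration_penalty,
    where the penalty is the max cosine similarity to the history.
    """
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv)

    def score(c):
        _, prob, emb = c
        penalty = max((cos(emb, h) for h in history), default=0.0)
        return (1 - alpha) * prob - alpha * penalty

    return max(candidates, key=score)[0]
```

With alpha high enough, a lower-probability but novel token beats a high-probability token that merely echoes the context, which is how contrastive search suppresses repetition loops.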


Adversarial Robustness through Dynamic Ensemble Learning

Waghela, Hetvi, Sen, Jaydip, Rakshit, Sneha

arXiv.org Artificial Intelligence

Adversarial attacks pose a significant threat to the reliability of pre-trained language models (PLMs) such as GPT, BERT, RoBERTa, and T5. This paper presents Adversarial Robustness through Dynamic Ensemble Learning (ARDEL), a novel scheme designed to enhance the robustness of PLMs against such attacks. ARDEL leverages the diversity of multiple PLMs and dynamically adjusts the ensemble configuration based on input characteristics and detected adversarial patterns. Key components of ARDEL include a meta-model for dynamic weighting, an adversarial pattern detection module, and adversarial training with regularization techniques. Comprehensive evaluations using standardized datasets and various adversarial attack scenarios demonstrate that ARDEL significantly improves robustness compared to existing methods. By dynamically reconfiguring the ensemble to prioritize the most robust models for each input, ARDEL effectively reduces attack success rates and maintains higher accuracy under adversarial conditions. This work contributes to the broader goal of developing more secure and trustworthy AI systems for real-world NLP applications, offering a practical and scalable solution to enhance adversarial resilience in PLMs.


Understanding the Impact of News Articles on the Movement of Market Index: A Case on Nifty 50

Dasgupta, Subhasis, Satpati, Pratik, Choudhary, Ishika, Sen, Jaydip

arXiv.org Artificial Intelligence

In the recent past, there have been several works on the prediction of stock prices using different methods. Sentiment analysis of news and tweets, and relating them to the movement of stock prices, has already been explored. But when we talk about the news, there can be several topics such as politics, markets, sports, etc. It was observed that most of the prior analyses dealt with news or comments associated with particular stock prices only, or the researchers dealt with overall sentiment scores only. However, it is quite possible that different topics have different levels of impact on the movement of a stock price or an index. The current study focused on bridging this gap by analysing the movement of the Nifty 50 index with respect to the sentiments associated with news items related to various topics such as sports, politics, markets, etc. The study established that sentiment scores of news items on these other topics also have a significant impact on the movement of the index.