Collaborating Authors

Kowsher, Md.


Token Trails: Navigating Contextual Depths in Conversational AI with ChatLLM

arXiv.org Artificial Intelligence

Conversational modeling using Large Language Models (LLMs) requires a nuanced understanding of context to generate coherent and contextually relevant responses. In this paper, we present Token Trails, a novel approach that leverages token-type embeddings to navigate the intricate contextual nuances within conversations. Our framework utilizes token-type embeddings to distinguish between user utterances and bot responses, facilitating the generation of context-aware replies. Through comprehensive experimentation and evaluation, we demonstrate the effectiveness of Token Trails in improving conversational understanding and response generation, achieving state-of-the-art performance. Our results highlight the significance of contextual modeling in conversational AI and underscore the promising potential of Token Trails to advance the field, paving the way for more sophisticated and contextually aware chatbot interactions.
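The abstract does not spell out the mechanism, but the underlying idea of token-type embeddings is standard. As a minimal sketch (the model dimensions, vocabulary, and the two-type user/bot scheme below are illustrative assumptions, not details taken from the paper), a speaker-aware embedding can be added to the usual word and position embeddings before the transformer layers:

```python
import torch
import torch.nn as nn

class SpeakerTypeEmbedding(nn.Module):
    """Adds a speaker-aware token-type embedding (user vs. bot) to the
    usual word and position embeddings. Sizes are illustrative."""

    def __init__(self, vocab_size=32000, hidden=768, max_len=1024, num_types=2):
        super().__init__()
        self.word = nn.Embedding(vocab_size, hidden)
        self.position = nn.Embedding(max_len, hidden)
        self.token_type = nn.Embedding(num_types, hidden)  # 0 = user, 1 = bot

    def forward(self, input_ids, token_type_ids):
        positions = torch.arange(input_ids.size(1), device=input_ids.device)
        return (self.word(input_ids)
                + self.position(positions)          # broadcasts over batch
                + self.token_type(token_type_ids))

# Toy two-turn exchange: user tokens tagged 0, bot tokens tagged 1.
emb = SpeakerTypeEmbedding()
input_ids = torch.tensor([[101, 2054, 2003, 102, 2009, 2003, 102]])
token_type_ids = torch.tensor([[0, 0, 0, 0, 1, 1, 1]])
hidden_states = emb(input_ids, token_type_ids)  # shape (1, 7, 768)
```

The design choice here mirrors BERT-style segment embeddings: tagging each token with its speaker lets every layer of the model condition on who said what, rather than inferring turn boundaries from surface cues.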


L-TUNING: Synchronized Label Tuning for Prompt and Prefix in LLMs

arXiv.org Artificial Intelligence

Efficiently fine-tuning Large Language Models (LLMs) for specific tasks presents a considerable challenge in natural language processing. Traditional methods, like prompt or prefix tuning, typically rely on arbitrary tokens for training, leading to prolonged training times and generalized token use across various class labels. To address these issues, this paper introduces L-Tuning, an efficient fine-tuning approach designed for classification tasks within the Natural Language Inference (NLI) framework. Diverging from conventional methods, L-Tuning focuses on the fine-tuning of label tokens processed through a pre-trained LLM, thereby harnessing its pre-existing semantic knowledge. This technique not only improves fine-tuning accuracy and efficiency but also facilitates the generation of distinct label embeddings for each class, adding nuance to the model's training. Our experimental results indicate a significant improvement in training efficiency and classification accuracy with L-Tuning compared to traditional approaches, marking a promising advancement in fine-tuning LLMs for complex language tasks. Code is available at: https://github.com/Kowsher/L-Tuning
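The exact mechanics of how tuned label embeddings attach as prompts or prefixes are in the paper and the released code; the sketch below only illustrates one piece of the idea under stated assumptions: initializing one trainable embedding per class from a frozen backbone's encoding of the label text (the model name, NLI labels, and mean-pooling choice are all assumptions, not the authors' configuration):

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"                    # assumed backbone
labels = ["entailment", "neutral", "contradiction"]  # NLI-style classes

tokenizer = AutoTokenizer.from_pretrained(model_name)
encoder = AutoModel.from_pretrained(model_name)
for p in encoder.parameters():   # keep the pre-trained backbone frozen
    p.requires_grad = False

# Initialize one embedding per class from the frozen LLM's representation
# of the label text, so each starts with pre-existing semantic knowledge.
with torch.no_grad():
    enc = tokenizer(labels, return_tensors="pt", padding=True)
    label_init = encoder(**enc).last_hidden_state.mean(dim=1)  # (3, 768)
label_embeddings = nn.Parameter(label_init.clone())  # only these are trained

def classify(premise_hypothesis: str) -> torch.Tensor:
    """Score each class by similarity between the sentence encoding
    and its (tuned) label embedding. Returns (1, num_classes) logits."""
    enc = tokenizer(premise_hypothesis, return_tensors="pt")
    sent = encoder(**enc).last_hidden_state.mean(dim=1)        # (1, 768)
    return sent @ label_embeddings.T

logits = classify("A man is sleeping. [SEP] A person is awake.")
```

Because only `label_embeddings` carries gradients, training touches a few thousand parameters instead of the full model, which is the efficiency argument the abstract makes.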


Prognosis and Treatment Prediction of Type-2 Diabetes Using Deep Neural Network and Machine Learning Classifiers

arXiv.org Artificial Intelligence

Type 2 Diabetes is a fast-growing, chronic metabolic disorder caused by imbalanced insulin activity. The aim of this research is a comparative study of seven machine learning classifiers and an artificial neural network method to predict the detection and treatment of diabetes with high accuracy, in order to identify and treat diabetes patients at an early stage. Our training and test dataset is an accumulation of 9,483 diabetes patients' records. The training dataset is large enough to mitigate overfitting and support highly accurate test performance. Using performance measures such as accuracy and precision, we find that the best algorithm is a deep ANN, which outperforms all other tested machine learning classifiers with 95.14% accuracy. We hope our high-performing model can be used by hospitals to predict diabetes and drive research into more accurate prediction models.
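The 9,483-patient dataset is not reproduced here, so the sketch below only shows the shape of such a comparison protocol, assuming a tabular CSV with a binary outcome column (the file name, column name, chosen classifiers, and hyperparameters are all hypothetical, not the paper's setup):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score

# Hypothetical data file and label column.
df = pd.read_csv("diabetes.csv")
X, y = df.drop(columns="outcome"), df["outcome"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Scale features so the neural network trains stably.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# A subset of classifiers standing in for the paper's seven, plus an ANN.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "deep_ann": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name}: accuracy={accuracy_score(y_test, pred):.4f} "
          f"precision={precision_score(y_test, pred):.4f}")
```

Reporting accuracy and precision side by side, as the abstract describes, matters in a clinical setting where false positives and missed diagnoses carry different costs.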