Collaborating Authors

 Kashyap, Gautam Siddharth


From Text to Transformation: A Comprehensive Review of Large Language Models' Versatility

arXiv.org Artificial Intelligence

This study explores the expanse of Large Language Models (LLMs), such as the Generative Pre-Trained Transformer (GPT) and Bidirectional Encoder Representations from Transformers (BERT), across domains ranging from technology and finance to healthcare and education. Despite their established prowess in Natural Language Processing (NLP), these LLMs have not been systematically examined for their impact on domains such as fitness and holistic well-being, urban planning, climate modelling, and disaster management. In addition to furnishing a comprehensive analysis of the breadth of LLMs' utility across diverse domains, this review identifies research gaps and areas where the potential of LLMs has yet to be harnessed. It uncovers ways in which LLMs can leave a mark in fields like fitness and well-being, urban planning, climate modelling, and disaster response, which could inspire future research and applications in these avenues.


How Paralingual are Paralinguistic Representations? A Case Study in Speech Emotion Recognition

arXiv.org Artificial Intelligence

Pre-trained Models (PTMs) have facilitated substantial progress in Speech Emotion Recognition (SER), an area with applications ranging from Human-Computer Interaction to healthcare. Recent studies have leveraged various PTM representations as input features for downstream SER models, and PTMs pre-trained specifically for paralinguistic tasks have obtained state-of-the-art (SOTA) performance for SER. However, such PTMs have not been evaluated for SER in multilingual settings and have been tested only on English. We fill this gap by performing a comprehensive comparative study of five PTMs (TRILLsson, wav2vec2, XLS-R, x-vector, Whisper) to assess the effectiveness of the paralingual PTM (TRILLsson) for SER across multiple languages. Representations from TRILLsson achieved the best performance among all the PTMs, demonstrating that TRILLsson effectively captures paralinguistic features from speech data for better SER. We also show that downstream models using TRILLsson representations achieve SOTA accuracy across various multilingual datasets.
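As a hedged illustration of the pipeline the abstract describes (PTM representations used as input features for a downstream SER model), the sketch below extracts utterance-level embeddings with wav2vec2 from the Hugging Face transformers library; the checkpoint name, mean-pooling step, and logistic-regression classifier are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch: pool frame-level PTM representations into one vector per
# utterance, then train a lightweight downstream emotion classifier.
import torch
import numpy as np
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model
from sklearn.linear_model import LogisticRegression

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
ptm = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base").eval()

def utterance_embedding(waveform: np.ndarray, sr: int = 16000) -> np.ndarray:
    """Mean-pool the PTM's last hidden states into one fixed-size vector."""
    inputs = extractor(waveform, sampling_rate=sr, return_tensors="pt")
    with torch.no_grad():
        hidden = ptm(**inputs).last_hidden_state  # shape: (1, frames, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

# `train_waveforms` / `train_labels` are placeholders for an SER corpus
# (utterances labelled e.g. angry / happy / sad / neutral):
# X = np.stack([utterance_embedding(w) for w in train_waveforms])
# clf = LogisticRegression(max_iter=1000).fit(X, train_labels)
```

The same pattern would apply to TRILLsson, XLS-R, x-vector, or Whisper representations by swapping in the corresponding feature extractor and encoder.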


Detection of a facemask in real-time using deep learning methods: Prevention of Covid 19

arXiv.org Artificial Intelligence

A health crisis is raging worldwide with the rapid transmission of the novel coronavirus disease (Covid-19). Among the guidelines issued by the World Health Organization (WHO) to protect against Covid-19, wearing a facemask is the most effective. Many countries have mandated face masks, but monitoring large numbers of people in crowded places to ensure compliance is a challenging task in itself. Covid-19 has already affected our day-to-day life as well as world trade. By the end of April 2021, the world had recorded 144,358,956 confirmed cases of Covid-19, including 3,066,113 deaths, according to the WHO. These increasing numbers motivate automated techniques for detecting facemasks in real-time scenarios to help prevent the spread of Covid-19. We propose a deep-learning technique that works for single and multiple people in a frame recorded via webcam, whether still or in motion. We have also evaluated our approach under night-light conditions. The accuracy of our model compares favourably with other approaches in the literature, ranging from 74% for multiple people in night light to 99% for a single person in daylight.
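For illustration only, the sketch below shows one common way to assemble such a real-time webcam pipeline with OpenCV: a Haar-cascade face detector crops faces from each frame and a binary mask/no-mask CNN classifies the crops. The classifier file name, 128x128 input size, and detector choice are assumptions, not the authors' actual architecture.

```python
# Minimal real-time sketch: detect faces per webcam frame, classify each
# crop as mask / no mask, and draw the result on the live video feed.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
classifier = load_model("mask_classifier.h5")  # hypothetical trained model

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        face = cv2.resize(frame[y:y + h, x:x + w], (128, 128)) / 255.0
        p_mask = float(classifier.predict(face[None, ...], verbose=0)[0][0])
        label = "Mask" if p_mask > 0.5 else "No mask"
        color = (0, 255, 0) if p_mask > 0.5 else (0, 0, 255)
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, label, (x, y - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
    cv2.imshow("Facemask detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A single-stage detector or a deeper backbone could replace the Haar cascade and small CNN; the loop structure over frames and detected faces stays the same.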


MLOps: A Review

arXiv.org Artificial Intelligence

Recently, Machine Learning (ML) has become a widely accepted and rapidly evolving approach, as it employs computational methods to teach machines and produce acceptable answers. This study examines the significance of Machine Learning Operations (MLOps) methods, which can provide acceptable answers to such problems. The authors survey MLOps methods to assist in the creation of software that is simple to use, and assess the features and operability of various MLOps methods to help choose the best tool structure for a given project. A total of 22 papers that attempted to apply the MLOps idea were assessed. Finally, the authors note the scarcity of fully effective MLOps methods whose advancements can self-regulate with limited human engagement.