
The New Movie From the Creator of Succession Is Less a Satire Than a Documentary

Slate

For the quartet of tech billionaires in Jesse Armstrong's Mountainhead, ideas are so powerful that nothing else seems real. Holed up in a resplendent snowy retreat built by meditation-app developer Hugo Van Yalk (Jason Schwartzman), they're glued to their phones as the outside world is erupting into chaos, thanks in no small part to the wildfire spread of A.I. deepfakes on the social media platform owned by the world's richest man, Venis Parish (Cory Michael Smith). People in Gujarat are being burned alive after being falsely accused of desecrating religious symbols, and Midwestern Americans are machine-gunning each other over minor disagreements, but for these four men, the widespread devastation is in some ways proof of concept that they're as important as they believe themselves to be. And besides, those bodies going up in flames are just images on a tiny screen, so distant they might as well be theoretical. As he trudges through the snow with Randall (Steve Carell), the venture capitalist who serves as the group's self-appointed philosopher king, Venis asks him, "Do you … believe in other people?"


Robot DOG makes an appearance at the Met Gala - dressed in a tuxedo and adorned with a 1,000-carat diamond leash

Daily Mail - Science & tech

At New York's Met Gala, guests are known for attention-grabbing outfits, from Katy Perry's human chandelier dress to Kim Kardashian's all-black bodysuit. But one attendee in particular has stolen the limelight this year – and he's not even human. Indian-American entrepreneur Mona Patel rocked up to the annual event on Monday night with an adorable robotic dachshund in tow. Vector the robo-dog, developed by scientists at MIT, has a 1,000-carat diamond-studded leash and his own cute little specially fitted tuxedo. Powered by AI and equipped with sensors, Vector has customised movement patterns and 'just the right amount of sass', Vogue India reports.


COMI-LINGUA: Expert Annotated Large-Scale Dataset for Multitask NLP in Hindi-English Code-Mixing

arXiv.org Artificial Intelligence

The rapid growth of digital communication has driven the widespread use of code-mixing, particularly Hindi-English, in multilingual communities. Existing datasets often focus on romanized text, have limited scope, or rely on synthetic data, which fails to capture real-world language nuances. Human annotations are crucial for assessing the naturalness and acceptability of code-mixed text. To address these challenges, we introduce COMI-LINGUA, the largest manually annotated dataset for code-mixed text, comprising 100,970 instances evaluated by three expert annotators in both Devanagari and Roman scripts. The dataset supports five fundamental NLP tasks: Language Identification, Matrix Language Identification, Part-of-Speech Tagging, Named Entity Recognition, and Translation. We evaluate LLMs on these tasks using COMI-LINGUA, revealing limitations in current multilingual modeling strategies and emphasizing the need for improved code-mixed text processing capabilities. COMI-LINGUA is publicly available at: https://huggingface.co/datasets/LingoIITGN/COMI-LINGUA.
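One of the five tasks, token-level Language Identification, can be illustrated with a toy script-based tagger. This is purely illustrative (not how the dataset was annotated, and real LID models are learned): it tags a token as Hindi when it contains Devanagari characters and as English when it is alphabetic ASCII.

```python
def token_lid(token):
    """Toy language ID: 'hi' if the token contains Devanagari characters,
    'en' if it is alphabetic ASCII, else 'other'."""
    if any('\u0900' <= ch <= '\u097F' for ch in token):
        return 'hi'
    if token.isascii() and token.isalpha():
        return 'en'
    return 'other'

# A hypothetical code-mixed sentence with both scripts.
sentence = "Mujhe यह movie बहुत पसंद है yaar"
tags = [(t, token_lid(t)) for t in sentence.split()]
```

Note that romanized Hindi tokens ("Mujhe", "yaar") come out tagged 'en' here — exactly the ambiguity the abstract points to in datasets that focus on romanized text, and why human annotation in both scripts matters.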


Space to Policy: Scalable Brick Kiln Detection and Automatic Compliance Monitoring with Geospatial Data

arXiv.org Artificial Intelligence

Air pollution kills 7 million people annually. The brick kiln sector contributes significantly to economic development but also accounts for 8–14% of air pollution in India. Policymakers have implemented compliance measures to regulate brick kilns. Emission inventories are critical for air quality modeling and source apportionment studies. However, the largely unorganized nature of the brick kiln sector necessitates labor-intensive survey efforts for monitoring. Recent efforts by air quality researchers have relied on manual annotation of brick kilns using satellite imagery to build emission inventories, but this approach lacks scalability. Machine-learning-based object detection methods have shown promise for detecting brick kilns; however, previous studies often rely on costly high-resolution imagery and fail to integrate with governmental policies. In this work, we developed a scalable machine-learning pipeline that detected and classified 30,638 brick kilns across five states in the Indo-Gangetic Plain using free, moderate-resolution satellite imagery from Planet Labs. Our detections correlate highly with on-ground surveys. We performed automated compliance analysis based on government policies. In the Delhi airshed, stricter policy enforcement has led to the adoption of efficient brick kiln technologies. This study highlights the need for inclusive policies that balance environmental sustainability with the livelihoods of workers.
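The automated compliance step can be sketched as a distance check between detected kiln coordinates and protected sites such as habitations. The 0.8 km threshold and the coordinates below are hypothetical placeholders, not the actual policy values or detections:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def compliant(kiln, habitations, min_km=0.8):
    """A kiln passes this (hypothetical) rule if every habitation lies
    at least min_km away from it."""
    return all(haversine_km(kiln[0], kiln[1], h[0], h[1]) >= min_km
               for h in habitations)

# Hypothetical detection near Delhi checked against two habitations.
kiln = (28.6, 77.2)
ok = compliant(kiln, [(28.6, 78.2)])       # ~98 km away: passes
bad = compliant(kiln, [(28.6, 77.201)])    # ~0.1 km away: fails
```

Run over all 30,638 detections, a check like this turns a detection inventory into a per-kiln compliance table that can be compared across airsheds.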


$C^{3}$-NeRF: Modeling Multiple Scenes via Conditional-cum-Continual Neural Radiance Fields

arXiv.org Artificial Intelligence

Neural radiance fields (NeRF) have exhibited highly photorealistic rendering of novel views through per-scene optimization over a single 3D scene. With the growing popularity of NeRF and its variants, they have become ubiquitous and have been identified as efficient 3D resources. However, they are still far from being scalable, since a separate model needs to be stored for each scene and the training time increases linearly with every newly added scene. Surprisingly, the idea of encoding multiple 3D scenes into a single NeRF model is heavily under-explored. In this work, we propose a novel conditional-cum-continual framework, called $C^{3}$-NeRF, to accommodate multiple scenes in the parameters of a single neural radiance field. Unlike conventional approaches that leverage feature extractors and pre-trained priors for scene conditioning, we use simple pseudo-scene labels to model multiple scenes in NeRF. Interestingly, we observe that the framework is also inherently continual (via generative replay), with minimal if any forgetting of previously learned scenes. Consequently, the proposed framework adapts to multiple new scenes without necessarily accessing the old data. Through extensive qualitative and quantitative evaluation on synthetic and real datasets, we demonstrate the inherent capacity of the NeRF model to accommodate multiple scenes with high-quality novel-view renderings without additional parameters. We provide implementation details and dynamic visualizations of our results in the supplementary file.
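The pseudo-scene-label conditioning idea can be sketched as a single shared MLP whose input is a 3D point concatenated with a learnable per-scene embedding. This is a minimal NumPy illustration under stated assumptions (random untrained weights, no positional encoding, no view directions or density head), not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_SCENES, EMB_DIM, HIDDEN = 4, 8, 32

# Learnable pseudo-scene label embeddings: one row per scene.
scene_emb = rng.normal(size=(NUM_SCENES, EMB_DIM))

# Toy shared MLP: (3D point + scene embedding) -> RGB.
W1 = rng.normal(size=(3 + EMB_DIM, HIDDEN))
W2 = rng.normal(size=(HIDDEN, 3))

def render_point(xyz, scene_id):
    """Condition the shared field on a scene via its label embedding."""
    inp = np.concatenate([xyz, scene_emb[scene_id]])
    h = np.maximum(0.0, inp @ W1)            # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)))   # sigmoid -> RGB in [0, 1]

# Same 3D point, two different scene labels -> two different colors.
rgb_a = render_point(np.array([0.1, 0.2, 0.3]), scene_id=0)
rgb_b = render_point(np.array([0.1, 0.2, 0.3]), scene_id=1)
```

The point of the sketch: all scenes share one set of weights, and only the small embedding row switches which scene the field renders, so adding a scene adds no MLP parameters.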


Real-Time Monitoring and Forecasting of COVID-19 Cases using an Adjusted Holt-based Hybrid Model embedded with a Wavelet-based ANN

arXiv.org Machine Learning

Since the inception of the SARS-CoV-2 (COVID-19) novel coronavirus, a great deal of time and effort has been allocated to estimating the trajectory and, possibly, forecasting with a reasonable degree of accuracy the number of cases, recoveries, and deaths. The model proposed in this paper is a mindful step in that direction. The primary model in question is a Hybrid Holt's Model embedded with a Wavelet-based ANN. To test its forecasting ability, we have compared three separate models: the first a simple ARIMA model, the second an ARIMA model with a wavelet-based function, and the third the proposed model. We have also compared the forecast accuracy of this model with that of a modern-day Vanilla LSTM recurrent neural network model. We have tested the proposed model on the number of daily confirmed cases for the entire country as well as 6 hotspot states. We have also proposed a simple adjustment algorithm in addition to the hybrid model so that daily and/or weekly forecasts can be issued for the entire country, as well as a moving-window performance metric based on out-of-sample forecasts. In order to have a more rounded approach to the analysis of COVID-19 dynamics, focus has also been given to the estimation of the Basic Reproduction Number, $R_0$, using a compartmental epidemiological model (SIR). Lastly, we have also given substantial attention to estimating the shelf life of the proposed model. It is obvious yet noteworthy how an accurate model in this regard can ensure better allocation of healthcare resources and enable the government to take necessary measures ahead of time.
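Holt's linear-trend method, the base of the hybrid model, smooths a level and a trend over the series and extrapolates both forward. A minimal sketch (the paper's adjusted, wavelet-hybrid version is considerably more elaborate; the smoothing constants below are arbitrary):

```python
def holt_forecast(series, alpha=0.8, beta=0.2, horizon=7):
    """Holt's linear trend method: exponentially smooth a level and a
    trend, then extrapolate them `horizon` steps ahead."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        last_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# On a perfectly linear case count, the forecast continues the line.
fc = holt_forecast([10.0, 20.0, 30.0, 40.0, 50.0], horizon=3)
```

For the $R_0$ side of the abstract, the SIR model gives $R_0 = \beta/\gamma$, the ratio of the transmission rate to the recovery rate.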


Evaluating Fast Adaptability of Neural Networks for Brain-Computer Interface

arXiv.org Artificial Intelligence

Electroencephalography (EEG) classification is a versatile and portable technique for building non-invasive Brain-Computer Interfaces (BCI). However, the classifiers that decode cognitive states from EEG brain data perform poorly when tested on newer domains, such as tasks or individuals absent during model training. Researchers have recently used complex strategies like Model-Agnostic Meta-Learning (MAML) for domain adaptation. Nevertheless, there is a need for a strategy to evaluate the fast adaptability of models, as this characteristic is essential for quick calibration in real-life BCI applications. We used motor movement and motor imagery signals as input to a Convolutional Neural Network (CNN)-based classifier for the experiments. Datasets with EEG signals typically have fewer examples and higher time resolution. Even though batch normalization is preferred for CNNs, we empirically show that layer normalization can improve the adaptability of CNN-based EEG classifiers with no more than ten fine-tuning steps. In summary, the present work (i) proposes a simple strategy to evaluate fast adaptability, and (ii) empirically demonstrates fast adaptability across individuals as well as across tasks with simple transfer learning compared to the MAML approach.
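The batch-norm vs. layer-norm distinction behind that result: layer normalization computes its statistics from each example alone, so tiny or shifting fine-tuning batches (typical for a new BCI user) do not perturb the normalization. A minimal NumPy sketch of the operation (illustrative only, not the paper's network):

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Layer normalization: normalize each example over its own feature
    axis. Batch norm would instead normalize each feature over the
    batch axis, tying the output to batch composition."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mu) / np.sqrt(var + eps) + beta

# Two hypothetical single-channel EEG feature rows at different scales
# normalize to (nearly) the same values, independent of the batch.
x = np.array([[1.0, 2.0, 3.0],
              [10.0, 20.0, 30.0]])
y = layer_norm(x)
```

Because the second row is just a scaled copy of the first, both normalize to essentially identical outputs — the per-example invariance that makes a handful of fine-tuning steps enough.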


VLSI Architectures of Forward Kinematic Processor for Robotics Applications

arXiv.org Artificial Intelligence

This paper aims to provide a comprehensive review of current-day robotic computation technologies at the VLSI architecture level. We studied several reports in the domain of robotic processor architecture. In this work, we focused on forward kinematics architectures that consider CORDIC algorithms, VLSI circuits of the WE DSP16 chip, parallel processing and pipelined architectures, and lookup-table formulas with FPGA processors. This study gives us an understanding of different implementation methods for forward kinematics. Our goal is to develop an FPGA-based forward kinematics processor for real-time applications, which require the fast response time and low latency of these devices and are useful for industrial automation, where processing speed plays a great role.
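CORDIC, one of the reviewed techniques, computes sine and cosine with a sequence of fixed micro-rotations (shifts and adds in hardware) plus one constant scale factor, which is why it maps well to FPGA forward-kinematics datapaths. A floating-point sketch, applied to a hypothetical 2-link planar arm (link lengths and the arm itself are illustrative assumptions):

```python
import math

def cordic_sincos(theta, iters=24):
    """Rotation-mode CORDIC: returns (sin, cos) of theta (|theta| <~ 1.74 rad).
    Each step rotates by atan(2^-i); hardware does this with shifts/adds."""
    k = 1.0
    for i in range(iters):                      # constant scale factor
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0             # steer residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return y * k, x * k

def fk_2link(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm via CORDIC trig:
    joint angles -> (x, y) of the end effector."""
    s1, c1 = cordic_sincos(t1)
    s12, c12 = cordic_sincos(t1 + t2)
    return l1 * c1 + l2 * c12, l1 * s1 + l2 * s12

x, y = fk_2link(0.0, 0.0)   # fully extended arm along the x-axis
```

In an FPGA implementation the same loop becomes a fixed-point pipeline of iters stages, one micro-rotation per stage, giving one FK result per clock after the pipeline fills.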


FESS Loss: Feature-Enhanced Spatial Segmentation Loss for Optimizing Medical Image Analysis

arXiv.org Artificial Intelligence

Medical image segmentation is a critical process in the field of medical imaging, playing a pivotal role in diagnosis, treatment, and research. It involves partitioning an image into multiple regions representing distinct anatomical or pathological structures. Conventional methods often grapple with the challenge of balancing spatial precision and comprehensive feature representation due to their reliance on traditional loss functions. To overcome this, we propose the Feature-Enhanced Spatial Segmentation Loss (FESS Loss), which integrates the benefits of contrastive learning (which extracts intricate features, particularly in the nuanced domain of medical imaging) with the spatial accuracy inherent in the Dice loss. The objective is to augment both spatial precision and feature-based representation in the segmentation of medical images. FESS Loss signifies a notable advancement, offering a more accurate and refined segmentation process, ultimately contributing to heightened precision in the analysis of medical images. Further, FESS Loss demonstrates superior performance in the limited-annotated-data scenarios often present in the medical domain.
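The general shape of such a combined loss is a spatial Dice term plus a weighted feature-level term. A NumPy sketch with a hypothetical weight `lam` and a placeholder contrastive term (the paper's actual formulation may differ):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - 2|P∩T| / (|P| + |T|), with eps for stability."""
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def combined_loss(pred, target, feat_loss, lam=0.5):
    """Hypothetical FESS-style combination: spatial (Dice) term plus a
    weighted feature-level contrastive term supplied by the caller."""
    return dice_loss(pred, target) + lam * feat_loss

# Perfect overlap gives a Dice term of ~0; only the feature term remains.
p = np.array([1.0, 1.0, 0.0, 0.0])
loss = combined_loss(p, p, feat_loss=0.2)
```

The Dice term rewards pixel-level overlap, while the contrastive term pulls embeddings of same-structure regions together — the balance the abstract describes between spatial precision and feature representation.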


Boldly Going Where No Benchmark Has Gone Before: Exposing Bias and Shortcomings in Code Generation Evaluation

arXiv.org Artificial Intelligence

Motivated by the increasing popularity of code generation from human descriptions using large language models (LLMs), several benchmarks have been proposed to assess the capabilities of existing and emerging models. This study presents a large-scale human evaluation of HumanEval and MBPP, two widely used benchmarks for Python code generation, focusing on their diversity and difficulty. Our findings reveal a significant bias towards a limited number of programming concepts, with negligible or no representation of most concepts. Additionally, we identify a concerningly high proportion of easy programming questions, potentially leading to an overestimation of model performance on code generation tasks.
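Concept coverage of a benchmark can be probed crudely by tallying AST node types across reference solutions — an illustrative proxy, not the study's methodology, using two hypothetical HumanEval-style solutions:

```python
import ast
from collections import Counter

def concept_profile(solutions):
    """Tally Python AST node types over a list of source strings as a
    rough proxy for which programming concepts a benchmark exercises."""
    counts = Counter()
    for src in solutions:
        for node in ast.walk(ast.parse(src)):
            counts[type(node).__name__] += 1
    return counts

profile = concept_profile([
    "def add(a, b):\n    return a + b",
    "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s",
])
```

Node types that never appear (e.g. `ClassDef`, `Try`) flag concepts the benchmark simply does not test — the kind of skew toward a few easy constructs the study reports.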