Dual-branch Prompting for Multimodal Machine Translation
Wang, Jie, Yang, Zhendong, Zong, Liansong, Zhang, Xiaobo, Wang, Dexian, Zhang, Ji
Multimodal Machine Translation (MMT) typically enhances text-only translation by incorporating aligned visual features. Despite the remarkable progress, state-of-the-art MMT approaches often rely on paired image-text inputs at inference and are sensitive to irrelevant visual noise, which limits their robustness and practical applicability. To address these issues, we propose D2P-MMT, a diffusion-based dual-branch prompting framework for robust vision-guided translation. Specifically, D2P-MMT requires only the source text and a reconstructed image generated by a pre-trained diffusion model, which naturally filters out distracting visual details while preserving semantic cues. During training, the model jointly learns from both authentic and reconstructed images using a dual-branch prompting strategy, encouraging rich cross-modal interactions. To bridge the modality gap and mitigate training-inference discrepancies, we introduce a distributional alignment loss that enforces consistency between the output distributions of the two branches. Extensive experiments on the Multi30K dataset demonstrate that D2P-MMT achieves superior translation performance compared to existing state-of-the-art approaches.
- Asia > China > Sichuan Province > Chengdu (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Europe > Spain > Catalonia > Barcelona Province > Barcelona (0.04)
- (5 more...)
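The distributional alignment loss described in the abstract above is not specified in detail here; a minimal sketch, assuming a symmetric KL divergence between the softmaxed output distributions of the authentic-image and reconstructed-image branches (all function and variable names are illustrative, not from the paper):

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q) per example; eps guards against log(0).
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def alignment_loss(logits_authentic, logits_reconstructed):
    """Symmetric KL between the two branches' output distributions,
    encouraging the reconstructed-image branch to match the
    authentic-image branch (and vice versa)."""
    p = softmax(logits_authentic)
    q = softmax(logits_reconstructed)
    return float(0.5 * (kl_div(p, q) + kl_div(q, p)).mean())
```

At inference only the reconstructed-image branch would be needed, which is consistent with the abstract's claim that paired authentic images are not required at test time.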
IndicVisionBench: Benchmarking Cultural and Multilingual Understanding in VLMs
Faraz, Ali, Akash, null, Khan, Shaharukh, Kolla, Raja, Patidar, Akshat, Goswami, Suranjan, Ravi, Abhinav, Khatri, Chandra, Agarwal, Shubham
Vision-language models (VLMs) have demonstrated impressive generalization across multimodal tasks, yet most evaluation benchmarks remain Western-centric, leaving open questions about their performance in culturally diverse and multilingual settings. To address this gap, we introduce IndicVisionBench, the first large-scale benchmark centered on the Indian subcontinent. Our final benchmark consists of a total of 5K images and 37K+ QA pairs across 13 culturally grounded topics. In addition, we release a paired parallel corpus of annotations across 10 Indic languages, creating a unique resource for analyzing cultural and linguistic biases in VLMs. We evaluate a broad spectrum of 8 models, from proprietary closed-source systems to open-weights medium and large-scale models. Our experiments reveal substantial performance gaps, underscoring the limitations of current VLMs in culturally diverse contexts. By centering cultural diversity and multilinguality, IndicVisionBench establishes a reproducible evaluation framework that paves the way for more inclusive multimodal research. Vision-language models (VLMs) (Bai et al., 2023; Chen et al., 2024; Lu et al., 2024; Wang et al., 2024b; Laurençon et al., 2024; Tong et al., 2024; Xue et al., 2024) have demonstrated strong performance across a variety of multimodal tasks. However, existing benchmarks (Antol et al., 2015; Fu et al., 2023; Goyal et al., 2017) remain heavily Western-centric, limiting our understanding of how these models generalize to culturally diverse and multilingual settings. While some recent efforts partially cover this diversity (Romero et al., 2024; Nayak et al., 2024; Vayani et al., 2025), a systematic, large-scale benchmark capturing India-specific cultural concepts across multiple languages is still lacking. To address this gap, we introduce IndicVisionBench, a culturally grounded evaluation benchmark tailored for the Indian subcontinent.
To the best of our knowledge, this is the first large-scale benchmark explicitly designed to assess VLMs in the context of Indian culture and languages. We use states as a proxy for cultural groups following prior works (Adilazuarda et al., 2024; Nayak et al., 2024).
- Europe > Denmark > Capital Region > Copenhagen (0.04)
- Asia > India > Tamil Nadu (0.04)
- Asia > India > Nagaland (0.04)
- (25 more...)
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.99)
Concise and Sufficient Sub-Sentence Citations for Retrieval-Augmented Generation
Chen, Guo, Li, Qiuyuan, Li, Qiuxian, Dai, Hongliang, Chen, Xiang, Li, Piji
In retrieval-augmented generation (RAG) question answering systems, generating citations for large language model (LLM) outputs enhances verifiability and helps users identify potential hallucinations. However, we observe two problems in the citations produced by existing attribution methods. First, the citations are typically provided at the sentence or even paragraph level. Long sentences or paragraphs may include a substantial amount of irrelevant content. Second, sentence-level citations may omit information that is essential for verifying the output, forcing users to read the surrounding context. In this paper, we propose generating sub-sentence citations that are both concise and sufficient, thereby reducing the effort required by users to confirm the correctness of the generated output. To this end, we first develop annotation guidelines for such citations and construct a corresponding dataset. Then, we propose an attribution framework for generating citations that adhere to our standards. This framework leverages LLMs to automatically generate fine-tuning data for our task and employs a credit model to filter out low-quality examples. Our experiments on the constructed dataset demonstrate that the proposed approach can generate high-quality and more readable citations.
- North America > Guatemala (0.14)
- Asia > India > Assam > Dispur (0.06)
- Asia > China > Jiangsu Province > Nanjing (0.05)
- (2 more...)
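The goal stated in the abstract above, a citation that is concise (no irrelevant clauses) yet sufficient (covers everything needed to verify the output), can be illustrated with a toy greedy selector over comma-delimited segments of one source sentence. This is an illustration of the objective only, not the paper's LLM-based attribution framework; all names are hypothetical:

```python
import re

def _words(text):
    # Lowercased word tokens for rough lexical-coverage matching.
    return set(re.findall(r"\w+", text.lower()))

def sub_sentence_citation(answer, source_sentence):
    """Greedily pick the fewest comma/semicolon-delimited segments of the
    source sentence whose words jointly cover the answer's words."""
    segments = [s.strip() for s in re.split(r"[,;]", source_sentence) if s.strip()]
    need = _words(answer)
    chosen = []
    while need:
        best = max(segments, key=lambda s: len(need & _words(s)), default=None)
        if best is None or not (need & _words(best)):
            break  # remaining answer words are unsupported by this sentence
        chosen.append(best)
        segments.remove(best)
        need -= _words(best)
    # Return the selected sub-sentence spans in source order.
    return sorted(chosen, key=source_sentence.find)
```

A real system would of course attribute against retrieved passages rather than a single sentence, and use semantic rather than lexical coverage; the sketch only shows why dropping an irrelevant mid-sentence clause yields a shorter, more readable citation.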
ReSum: Unlocking Long-Horizon Search Intelligence via Context Summarization
Wu, Xixi, Li, Kuan, Zhao, Yida, Zhang, Liwen, Ou, Litu, Yin, Huifeng, Zhang, Zhongwang, Yu, Xinmiao, Zhang, Dingchu, Jiang, Yong, Xie, Pengjun, Huang, Fei, Cheng, Minhao, Wang, Shuai, Cheng, Hong, Zhou, Jingren
Large Language Model (LLM)-based web agents demonstrate strong performance on knowledge-intensive tasks but are hindered by context window limitations in paradigms like ReAct. Complex queries involving multiple entities, intertwined relationships, and high uncertainty demand extensive search cycles that rapidly exhaust context budgets before reaching solutions. To overcome this challenge, we introduce ReSum, a novel paradigm that enables indefinite exploration through periodic context summarization. ReSum converts growing interaction histories into compact reasoning states, maintaining awareness of prior discoveries while bypassing context constraints. For paradigm adaptation, we propose ReSum-GRPO, integrating GRPO with segmented trajectory training and advantage broadcasting to familiarize agents with summary-conditioned reasoning. Extensive experiments on web agents across three benchmarks demonstrate that ReSum delivers an average absolute improvement of 4.5% over ReAct, with further gains of 8.2% following ReSum-GRPO training. Notably, with only 1K training samples, our WebResummer-30B (a ReSum-GRPO-trained version of WebSailor-30B) achieves 33.3% Pass@1 on BrowseComp-zh and 18.3% on BrowseComp-en, surpassing most open-source web agents.
- Asia > India > Meghalaya (0.05)
- Asia > China > Anhui Province (0.04)
- South America > Brazil (0.04)
- (4 more...)
- Overview (1.00)
- Research Report (0.64)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
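The summarize-then-continue loop described in the ReSum abstract can be sketched as follows. This is a minimal sketch under assumed interfaces: `llm` and `search` are caller-supplied callables, and the prompts, character-count budget, and `ANSWER:` convention are illustrative, not the paper's implementation:

```python
def run_agent(question, llm, search, max_steps=50, context_budget=8000):
    """ReSum-style loop sketch: when the accumulated interaction history
    exceeds the budget, compress it into a compact reasoning state and
    keep exploring instead of hitting the context wall."""
    context = [f"Question: {question}"]
    for _ in range(max_steps):
        if sum(len(line) for line in context) > context_budget:
            summary = llm("Summarize discoveries and open questions so far:\n"
                          + "\n".join(context))
            # Replace the full history with a compact summary-conditioned state.
            context = [f"Question: {question}", f"State: {summary}"]
        action = llm("\n".join(context) + "\nNext action:")
        if action.startswith("ANSWER:"):
            return action[len("ANSWER:"):].strip()
        context.append(f"Action: {action}")
        context.append(f"Observation: {search(action)}")
    return None  # exploration budget exhausted
```

The key contrast with a plain ReAct loop is the conditional branch: history grows between summarizations but is periodically collapsed, so total exploration is no longer bounded by one context window.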
MAUSAM: An Observations-focused assessment of Global AI Weather Prediction Models During the South Asian Monsoon
Gupta, Aman, Sheshadri, Aditi, Suri, Dhruv
Accurate weather forecasts are critical for societal planning and disaster preparedness. Yet these forecasts remain challenging to produce and evaluate, especially in regions with sparse observational coverage. Current evaluation of artificial intelligence (AI) weather prediction relies primarily on reanalyses, which can obscure important deficiencies. Here we present MAUSAM (Measuring AI Uncertainty during South Asian Monsoon), an evaluation of seven leading AI-based forecasting systems - FourCastNet, FourCastNet-SFNO, Pangu-Weather, GraphCast, Aurora, AIFS, and GenCast - during the South Asian Monsoon, using ground-based weather stations, rain gauge networks, and geostationary satellite imagery. The AI models demonstrate impressive forecast skill during the monsoon across a broad range of variables, ranging from large-scale surface temperature and winds to precipitation, cloud cover, and subseasonal to seasonal eddy statistics, highlighting the strength of data-driven weather prediction. However, the models still exhibit systematic errors at finer scales, such as underprediction of extreme precipitation, divergent cyclone tracks, and errors in the mesoscale kinetic energy spectra, highlighting avenues for future improvement. A comparison against observations reveals forecast errors up to 15-45% larger than those relative to reanalysis and traditional forecasts, indicating that reanalysis-centric benchmarks can overstate forecast skill. Of the models assessed, AIFS achieves the most consistent representation of atmospheric variables, with GraphCast and GenCast also showing strong skill. The analysis presents a framework for evaluating AI weather models on regional prediction and highlights both the promise and current limitations of AI weather prediction in data-sparse regions, underscoring the importance of observational evaluation for future operational adoption.
- Asia > India > Maharashtra (0.05)
- Indian Ocean > Bay of Bengal (0.04)
- Indian Ocean > Arabian Sea (0.04)
- (12 more...)
- Energy > Renewable > Solar (1.00)
- Energy > Power Industry (0.93)
- Energy > Renewable > Wind (0.93)
- Energy > Renewable > Geothermal > Geothermal Energy Exploration and Development > Geophysical Analysis & Survey (0.34)
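The headline comparison in the MAUSAM abstract, forecast error measured against station observations versus against reanalysis, reduces to a simple calculation. A minimal sketch with illustrative function names, not the paper's verification pipeline:

```python
import numpy as np

def rmse(forecast, truth):
    # Root-mean-square error between a forecast series and a reference.
    f, t = np.asarray(forecast, dtype=float), np.asarray(truth, dtype=float)
    return float(np.sqrt(np.mean((f - t) ** 2)))

def skill_inflation(forecast, station_obs, reanalysis):
    """Percent by which reanalysis-based verification understates the
    error measured against station observations: if this is positive,
    a reanalysis-centric benchmark overstates forecast skill."""
    e_obs = rmse(forecast, station_obs)
    e_rea = rmse(forecast, reanalysis)
    return 100.0 * (e_obs - e_rea) / e_rea
```

The abstract's 15-45% figure corresponds to `skill_inflation` values in that range: errors look substantially smaller when the forecast is compared with reanalysis than when it is compared with independent ground truth.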
DRISHTIKON: A Multimodal Multilingual Benchmark for Testing Language Models' Understanding on Indian Culture
Maji, Arijit, Kumar, Raghvendra, Ghosh, Akash, Anushka, null, Shah, Nemil, Borah, Abhilekh, Shah, Vanshika, Mishra, Nishant, Saha, Sriparna
We introduce DRISHTIKON, a first-of-its-kind multimodal and multilingual benchmark centered exclusively on Indian culture, designed to evaluate the cultural understanding of generative AI systems. Unlike existing benchmarks with a generic or global scope, DRISHTIKON offers deep, fine-grained coverage across India's diverse regions, spanning 15 languages, covering all states and union territories, and incorporating over 64,000 aligned text-image pairs. The dataset captures rich cultural themes including festivals, attire, cuisines, art forms, and historical heritage amongst many more. We evaluate a wide range of vision-language models (VLMs), including open-source small and large models, proprietary systems, reasoning-specialized VLMs, and Indic-focused models, across zero-shot and chain-of-thought settings. Our results expose key limitations in current models' ability to reason over culturally grounded, multimodal inputs, particularly for low-resource languages and less-documented traditions. DRISHTIKON fills a vital gap in inclusive AI research, offering a robust testbed to advance culturally aware, multimodally competent language technologies.
- Europe > Austria > Vienna (0.14)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- Asia > India > Jharkhand (0.04)
- (24 more...)
- Overview (1.00)
- Research Report > New Finding (0.48)
Better To Ask in English? Evaluating Factual Accuracy of Multilingual LLMs in English and Low-Resource Languages
Rohera, Pritika, Ginimav, Chaitrali, Sawant, Gayatri, Joshi, Raviraj
Multilingual Large Language Models (LLMs) have demonstrated significant effectiveness across various languages, particularly in high-resource languages such as English. However, their performance in terms of factual accuracy across other low-resource languages, especially Indic languages, remains an area of investigation. In this study, we assess the factual accuracy of LLMs - GPT-4o, Gemma-2-9B, Gemma-2-2B, and Llama-3.1-8B - by comparing their performance in English and Indic languages using the IndicQuest dataset, which contains question-answer pairs in English and 19 Indic languages. By asking the same questions in English and their respective Indic translations, we analyze whether the models are more reliable for regional context questions in Indic languages or when operating in English. Our findings reveal that LLMs often perform better in English, even for questions rooted in Indic contexts. Notably, we observe a higher tendency for hallucination in responses generated in low-resource Indic languages, highlighting challenges in the multilingual understanding capabilities of current LLMs.
- Asia > Thailand > Bangkok > Bangkok (0.04)
- Asia > India > West Bengal (0.04)
- Asia > India > Uttarakhand (0.04)
- (21 more...)
Wavelet-SARIMA-Transformer: A Hybrid Model for Rainfall Forecasting
Saikia, Junmoni, Goswami, Kuldeep, Kakaty, Sarat C.
This study develops and evaluates a novel hybrid Wavelet-SARIMA-Transformer (WST) framework to forecast monthly rainfall across five meteorological subdivisions of Northeast India over the 1971 to 2023 period. The approach employs the Maximal Overlap Discrete Wavelet Transform (MODWT) with four wavelet families (Haar, Daubechies, Symlets, and Coiflets) to achieve a shift-invariant, multiresolution decomposition of the rainfall series. Linear and seasonal components are modeled using Seasonal ARIMA (SARIMA), nonlinear components are modeled by a Transformer network, and forecasts are reconstructed via the inverse MODWT. Comprehensive validation using an 80:20 train-test split and multiple performance indices (RMSE, MAE, SMAPE, Willmott's d, Skill Score, Percent Bias, Explained Variance, and Legates-McCabe's E1) demonstrates the superiority of the Haar-based hybrid model (WHST). Across all subdivisions, WHST consistently achieved lower forecast errors, stronger agreement with observed rainfall, and unbiased predictions compared with stand-alone SARIMA, stand-alone Transformer, and two-stage wavelet hybrids. Residual adequacy was confirmed through the Ljung-Box test, while Taylor diagrams provided an integrated assessment of correlation, variance fidelity, and RMSE, further reinforcing the robustness of the proposed approach. The results highlight the effectiveness of integrating multiresolution signal decomposition with complementary linear and deep learning models for hydroclimatic forecasting. Beyond rainfall, the proposed WST framework offers a scalable methodology for forecasting complex environmental time series, with direct implications for flood risk management, water resources planning, and climate adaptation strategies in data-sparse and climate-sensitive regions.
- North America > Trinidad and Tobago > Trinidad > Arima > Arima (0.25)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- Asia > India > West Bengal (0.05)
- (12 more...)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (0.94)
- Information Technology > Data Science > Data Quality > Data Transformation (0.87)
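The core idea behind the WST framework, splitting the rainfall series additively into smooth and detail components in a shift-invariant way, can be shown with a single Haar level. This is a simplified sketch (one level, circular boundary), not the multi-level MODWT with several wavelet families used in the study:

```python
import numpy as np

def haar_modwt_level1(x):
    """One level of a shift-invariant, MODWT-style Haar decomposition.
    The series splits additively into a smooth approximation (low-pass)
    and a detail component (high-pass); no downsampling, so the output
    stays aligned with the input regardless of where the series starts."""
    x = np.asarray(x, dtype=float)
    prev = np.roll(x, 1)          # x_{t-1} with circular wrap at the boundary
    approx = (x + prev) / 2.0     # smooth part: pairwise moving average
    detail = (x - prev) / 2.0     # detail part: pairwise difference
    return approx, detail
```

In the hybrid scheme described above, the smooth component would be handed to SARIMA (it carries the linear/seasonal structure) and the detail component to the Transformer (it carries the nonlinear residual variation), with the two forecasts recombined by simple addition, mirroring the exact additive reconstruction below.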
Farm-Level, In-Season Crop Identification for India
Deshpande, Ishan, Reehal, Amandeep Kaur, Nath, Chandan, Singh, Renu, Patel, Aayush, Jayagopal, Aishwarya, Singh, Gaurav, Aggarwal, Gaurav, Agarwal, Amit, Bele, Prathmesh, Reddy, Sridhar, Warrier, Tanya, Singh, Kinjal, Tendulkar, Ashish, Outon, Luis Pazos, Saxena, Nikita, Dondzik, Agata, Tewari, Dinesh, Garg, Shruti, Singh, Avneet, Dhand, Harsh, Rajan, Vaibhav, Talekar, Alok
Accurate, timely, and farm-level crop type information is paramount for national food security, agricultural policy formulation, and economic planning, particularly in agriculturally significant nations like India. While remote sensing and machine learning have become vital tools for crop monitoring, existing approaches often grapple with challenges such as limited geographical scalability, restricted crop type coverage, the complexities of mixed-pixel and heterogeneous landscapes, and, crucially, the robust in-season identification essential for proactive decision-making. We present a deep learning framework designed to address these critical data gaps for targeted, data-driven decision making, generating farm-level, in-season, multi-crop identification at national scale in India. Our methodology leverages the strengths of Sentinel-1 and Sentinel-2 satellite imagery, integrated with national-scale farm boundary data. The model successfully identifies 12 major crops, which collectively account for nearly 90% of India's total cultivated area, showing agreement with the 2023-24 national crop census of 94% in the winter season and 75% in the monsoon season. Our approach incorporates an automated season detection algorithm, which estimates crop sowing and harvest periods. This allows for reliable crop identification as early as two months into the growing season and facilitates rigorous in-season performance evaluation. Furthermore, we have engineered a highly scalable inference pipeline, culminating in what is, to our knowledge, the first pan-India, in-season, farm-level crop type data product. The system's effectiveness and scalability are demonstrated through robust validation against national agricultural statistics, showcasing its potential to deliver actionable, data-driven insights for transformative agricultural monitoring and management across India.
- North America > United States (0.28)
- North America > Canada > Ontario (0.14)
- South America > Peru (0.04)
- (16 more...)
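The automated season-detection step mentioned in the abstract above can be illustrated with a toy threshold rule on a per-farm vegetation-index time series. The thresholds, names, and logic here are illustrative assumptions for exposition, not the paper's algorithm:

```python
def detect_season(ndvi, dates, green_up=0.35, senescence=0.3):
    """Toy season detector on an NDVI time series: sowing is proxied by
    the first observation at or above the green-up threshold, harvest by
    the first later drop below the senescence threshold."""
    ndvi = [float(v) for v in ndvi]
    sow = next((i for i, v in enumerate(ndvi) if v >= green_up), None)
    if sow is None:
        return None  # no green-up observed: likely fallow this season
    harvest = next((i for i in range(sow + 1, len(ndvi))
                    if ndvi[i] < senescence), None)
    return dates[sow], (dates[harvest] if harvest is not None else None)
```

Knowing the sowing date is what makes in-season prediction meaningful: a classifier can be run, and evaluated, once only the first couple of months of post-sowing imagery are available, as the abstract describes.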
FairI Tales: Evaluation of Fairness in Indian Contexts with a Focus on Bias and Stereotypes
Nawale, Janki Atul, Khan, Mohammed Safi Ur Rahman, D, Janani, Gupta, Mansi, Pruthi, Danish, Khapra, Mitesh M.
Existing studies on fairness are largely Western-focused, making them inadequate for culturally diverse countries such as India. To address this gap, we introduce INDIC-BIAS, a comprehensive India-centric benchmark designed to evaluate fairness of LLMs across 85 identity groups encompassing diverse castes, religions, regions, and tribes. We first consult domain experts to curate over 1,800 socio-cultural topics spanning behaviors and situations, where biases and stereotypes are likely to emerge. Grounded in these topics, we generate and manually validate 20,000 real-world scenario templates to probe LLMs for fairness. We structure these templates into three evaluation tasks: plausibility, judgment, and generation. Our evaluation of 14 popular LLMs on these tasks reveals strong negative biases against marginalized identities, with models frequently reinforcing common stereotypes. Additionally, we find that models struggle to mitigate bias even when explicitly asked to rationalize their decision. Our evaluation provides evidence of both allocative and representational harms that current LLMs could cause towards Indian identities, calling for a more cautious usage in practical applications. We release INDIC-BIAS as an open-source benchmark to advance research on benchmarking and mitigating biases and stereotypes in the Indian context.
- Asia > India > Bihar (0.14)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- Asia > India > Uttar Pradesh (0.04)
- (37 more...)