Rajasthan
- North America > United States > Maine > Cumberland County > Standish (0.14)
- North America > United States > California (0.05)
- Asia > India > Rajasthan (0.04)
- (9 more...)
- Health & Medicine (1.00)
- Education (0.93)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.14)
- South America > Chile > Santiago Metropolitan Region > Santiago Province > Santiago (0.04)
- North America > United States > Massachusetts (0.04)
- Asia > India > Rajasthan (0.04)
- Research Report > Experimental Study (1.00)
- Research Report > Strength High (0.67)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Reinforcement Learning (0.93)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Modeling & Simulation (0.67)
ExaCraft: Dynamic Learning Context Adaptation for Personalized Educational Examples
Chatterjee, Akaash, Kundu, Suman
Learning is most effective when it is connected to relevant, relatable examples that resonate with learners on a personal level. However, existing educational AI tools neither focus on generating examples nor adapt to learners' changing understanding, struggles, and growing skills. We have developed ExaCraft, an AI system that generates personalized examples by adapting to the learner's dynamic context. Built on the Google Gemini API with a Python Flask backend and delivered through a Chrome extension, ExaCraft combines user-defined profiles (including location, education, profession, and complexity preferences) with real-time analysis of learner behavior, ensuring that examples are both culturally relevant and tailored to individual learning needs. The system's core innovation is its ability to adapt to five key aspects of the learning context: indicators of struggle, mastery patterns, topic progression history, session boundaries, and learning progression signals. Our demonstration will show how ExaCraft's examples evolve from basic concepts to advanced technical implementations, responding to topic repetition, regeneration requests, and topic progression patterns across different use cases.
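The paper does not publish its implementation, but the stack it describes (Gemini behind a Flask API, driven from a Chrome extension) might look roughly like the minimal sketch below; the endpoint, field names, and model string are our own assumptions, not the authors' code.

```python
# Minimal sketch of an ExaCraft-style endpoint (hypothetical names): a Flask
# route merges a static user profile with dynamic session signals into a
# single Gemini prompt.
from flask import Flask, request, jsonify
import google.generativeai as genai  # official SDK; needs an API key

genai.configure(api_key="YOUR_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is illustrative

app = Flask(__name__)

def build_prompt(profile: dict, signals: dict, topic: str) -> str:
    """Fold profile fields and the five context signals into one instruction."""
    return (
        f"Explain '{topic}' with a worked example for a learner in "
        f"{profile.get('location', 'an unspecified region')}, "
        f"background: {profile.get('profession', 'general')}, "
        f"preferred complexity: {profile.get('complexity', 'basic')}. "
        f"Session context: struggle={signals.get('struggle', False)}, "
        f"repeats={signals.get('topic_repeats', 0)}, "
        f"regenerations={signals.get('regenerations', 0)}. "
        "Escalate difficulty only if the signals indicate mastery."
    )

@app.route("/example", methods=["POST"])
def personalized_example():
    body = request.get_json()
    prompt = build_prompt(body["profile"], body["signals"], body["topic"])
    response = model.generate_content(prompt)
    return jsonify({"example": response.text})
```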
- Asia > India > Maharashtra > Pune (0.06)
- Asia > India > Maharashtra > Mumbai (0.05)
- North America > Canada > Ontario > Toronto (0.05)
- (3 more...)
- Education > Educational Technology > Educational Software > Computer Based Training (1.00)
- Education > Educational Setting > Online (0.94)
Learning to Code with Context: A Study-Based Approach
Borghoff, Uwe M., Minas, Mark, Schopp, Jannis
The rapid emergence of generative AI tools is transforming the way software is developed. Consequently, software engineering education must adapt to ensure that students not only learn traditional development methods but also understand how to meaningfully and responsibly use these new technologies. In particular, project-based courses offer an effective environment to explore and evaluate the integration of AI assistance into real-world development practices. This paper presents our approach and a user study conducted within a university programming project in which students collaboratively developed computer games. The study investigates how participants used generative AI tools throughout different phases of the software development process, identifies the types of tasks where such tools were most effective, and analyzes the challenges students encountered. Building on these insights, we further examine a repository-aware, locally deployed large language model (LLM) assistant designed to provide project-contextualized support. The system employs Retrieval-Augmented Generation (RAG) to ground responses in relevant documentation and source code, enabling qualitative analysis of model behavior, parameter sensitivity, and common failure modes. The findings deepen our understanding of context-aware AI support in educational software projects and inform future integration of AI-based assistance into software engineering curricula.
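As a rough illustration of the repository-aware RAG loop described above (our reconstruction, not the authors' system), the sketch below indexes repository files with TF-IDF, retrieves the chunks most similar to a student's question, and builds a project-contextualized prompt for a locally deployed LLM; the function names and naive chunking are assumptions.

```python
# Index repo files, retrieve top-k similar chunks, prepend them to the prompt.
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def index_repo(repo_dir: str, exts=(".py", ".md", ".java")):
    chunks = []
    for path in Path(repo_dir).rglob("*"):
        if path.suffix in exts:
            text = path.read_text(errors="ignore")
            # naive fixed-size chunking; real systems chunk on code structure
            chunks += [(str(path), text[i:i + 800]) for i in range(0, len(text), 800)]
    vec = TfidfVectorizer().fit([c for _, c in chunks])
    return vec, vec.transform([c for _, c in chunks]), chunks

def retrieve(question: str, vec, matrix, chunks, k: int = 3):
    scores = cosine_similarity(vec.transform([question]), matrix)[0]
    return [chunks[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(question: str, hits) -> str:
    context = "\n\n".join(f"# {p}\n{c}" for p, c in hits)
    return f"Answer using only this project context:\n{context}\n\nQ: {question}"

# Usage: vec, m, ch = index_repo("my_game_project")
#        hits = retrieve("How is collision handled?", vec, m, ch)
#        pass build_prompt("How is collision handled?", hits) to the local LLM.
```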
- North America > United States > Florida > Miami-Dade County > Miami (0.04)
- Europe > Germany > Bavaria > Upper Bavaria > Munich (0.04)
- Oceania > Australia > Victoria > Melbourne (0.04)
- (9 more...)
- Research Report > New Finding (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Overview (0.92)
- (2 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.69)
See, Think, Learn: A Self-Taught Multimodal Reasoner
Sharma, Sourabh, Gupta, Sonam, Sadbhawna
Vision-Language Models (VLMs) have achieved remarkable progress in integrating visual perception with language understanding. However, effective multimodal reasoning requires both accurate perception and robust reasoning, and weakness in either limits the performance of VLMs. Prior efforts to enhance reasoning often depend on high-quality chain-of-thought (CoT) data, obtained via labor-intensive human annotations, costly proprietary models, or self-training methods that overlook perception. To address these limitations, we propose a simple yet effective self-training framework called See-Think-Learn (STL). At its core, STL introduces a structured reasoning template that encourages the model to see before thinking, first extracting visual attributes in textual form, then using them to guide reasoning. The framework jointly improves perception and reasoning by having the model generate and learn from its own structured rationales in a self-training loop. Furthermore, we augment the training data with negative rationales, i.e., explanations that justify why certain answer choices are incorrect, to enhance the model's ability to distinguish between correct and misleading responses. This fosters more discriminative and robust learning. Experiments across diverse domains show that STL consistently outperforms baselines trained directly on answers alone or on self-generated reasoning, while qualitative analysis confirms the high quality of its rationales. STL thus provides a cost-effective solution to enhance the multimodal reasoning ability of VLMs.
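One round of the self-training loop might be organized as in the following sketch (our reading of the abstract; `vlm_generate`, the field names, and the template wording are hypothetical): the model sees before thinking, rationales that reach the correct answer become training targets, and negative rationales for the wrong options are added.

```python
# One STL-style data-construction round (hypothetical helper names).
def stl_template(question: str, options: list[str]) -> str:
    return (
        f"Question: {question}\nOptions: {options}\n"
        "SEE: list the visual attributes relevant to the question.\n"
        "THINK: reason over those attributes step by step.\n"
        "ANSWER: choose one option."
    )

def build_round(vlm_generate, samples):
    """Keep rationales whose answer is correct; add negative rationales
    explaining why the remaining options fail."""
    new_data = []
    for s in samples:  # each s: {"image", "question", "options", "gold"}
        out = vlm_generate(s["image"], stl_template(s["question"], s["options"]))
        if out["answer"] == s["gold"]:
            new_data.append({**s, "target": out["rationale"]})
            for wrong in (o for o in s["options"] if o != s["gold"]):
                neg = vlm_generate(
                    s["image"],
                    f"Explain why '{wrong}' is incorrect for: {s['question']}",
                )
                new_data.append({**s, "target": neg["rationale"]})
    return new_data  # fine-tune the VLM on new_data, then repeat
```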
RecruitView: A Multimodal Dataset for Predicting Personality and Interview Performance for Human Resources Applications
Gupta, Amit Kumar, Sheth, Farhan, Shaikh, Hammad, Kumar, Dheeraj, Puniya, Angkul, Panwar, Deepak, Chaurasia, Sandeep, Mathur, Priya
Automated personality and soft skill assessment from multimodal behavioral data remains challenging due to limited datasets and methods that fail to capture geometric structure inherent in human traits. We introduce RecruitView, a dataset of 2,011 naturalistic video interview clips from 300+ participants with 27,000 pairwise comparative judgments across 12 dimensions: Big Five personality traits, overall personality score, and six interview performance metrics. To leverage this data, we propose Cross-Modal Regression with Manifold Fusion (CRMF), a geometric deep learning framework that explicitly models behavioral representations across hyperbolic, spherical, and Euclidean manifolds. CRMF employs geometry-specific expert networks to capture hierarchical trait structures, directional behavioral patterns, and continuous performance variations simultaneously. An adaptive routing mechanism dynamically weights expert contributions based on input characteristics. Through principled tangent space fusion, CRMF achieves superior performance while requiring 40-50% fewer trainable parameters than large multimodal models. Extensive experiments demonstrate that CRMF substantially outperforms the selected baselines, achieving up to an 11.4% improvement in Spearman correlation and 6.0% in concordance index. Our RecruitView dataset is publicly available at https://huggingface.co/datasets/AI4A-lab/RecruitView
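A schematic PyTorch rendering of the mixture-of-geometry idea may help; this is a deliberate simplification in which plain MLP experts stand in for features already mapped into a shared tangent space, whereas the actual CRMF uses hyperbolic, spherical, and Euclidean operations. All class and dimension names below are ours.

```python
import torch
import torch.nn as nn

class GeometryExpert(nn.Module):
    def __init__(self, dim_in: int, dim_out: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, dim_out), nn.GELU(),
                                 nn.Linear(dim_out, dim_out))
    def forward(self, x):
        return self.net(x)

class CRMFHead(nn.Module):
    def __init__(self, dim_in: int, dim_hidden: int, n_traits: int = 12):
        super().__init__()
        # one expert per geometry: hyperbolic / spherical / Euclidean
        self.experts = nn.ModuleList(GeometryExpert(dim_in, dim_hidden) for _ in range(3))
        self.router = nn.Linear(dim_in, 3)        # adaptive routing weights
        self.regressor = nn.Linear(dim_hidden, n_traits)
    def forward(self, fused_features):
        w = torch.softmax(self.router(fused_features), dim=-1)                   # (B, 3)
        stacked = torch.stack([e(fused_features) for e in self.experts], dim=1)  # (B, 3, H)
        fused = (w.unsqueeze(-1) * stacked).sum(dim=1)  # weighted tangent-space fusion
        return self.regressor(fused)                    # scores for the 12 dimensions

# head = CRMFHead(dim_in=512, dim_hidden=256)
# preds = head(torch.randn(4, 512))  # -> (4, 12)
```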
- Asia > India > Rajasthan > Jaipur (0.04)
- North America > United States (0.04)
- Europe > Finland > Uusimaa > Helsinki (0.04)
- Research Report (1.00)
- Personal > Interview (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (0.87)
Melody or Machine: Detecting Synthetic Music with Dual-Stream Contrastive Learning
Batra, Arnesh, Sharma, Dev, Thukral, Krish, Bhatia, Ruhani, Batra, Naman, Gautam, Aditya
The rapid evolution of end-to-end AI music generation poses an escalating threat to artistic authenticity and copyright, demanding detection methods that can keep pace. Existing models such as SpecTTTra, while foundational, falter when faced with the diverse and rapidly advancing ecosystem of new generators, exhibiting significant performance drops on out-of-distribution (OOD) content. This generalization failure highlights a critical gap: the need for more challenging benchmarks and more robust detection architectures. To address this, we first introduce Melody or Machine (MoM), a new large-scale benchmark of over 130,000 songs (6,665 hours). MoM is the most diverse dataset to date, built with a mix of open and closed-source models and a curated OOD test set designed specifically to foster the development of truly generalizable detectors. Alongside this benchmark, we introduce CLAM, a novel dual-stream detection architecture. We hypothesize that subtle, machine-induced inconsistencies between vocal and instrumental elements, often imperceptible in a mixed signal, offer a powerful tell-tale sign of synthesis. CLAM is designed to test this hypothesis by employing two distinct pre-trained audio encoders (MERT and Wav2Vec2) to create parallel representations of the audio. These representations are fused by a learnable cross-aggregation module that models their inter-dependencies. The model is trained with a dual-loss objective: a standard binary cross-entropy loss for classification, complemented by a contrastive triplet loss which trains the model to distinguish between coherent and artificially mismatched stream pairings, enhancing its sensitivity to synthetic artifacts without presuming a simple feature alignment. CLAM establishes a new state-of-the-art in synthetic music forensics. It achieves an F1 score of 0.925 on our challenging MoM benchmark.
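The dual-loss objective can be sketched as follows. This is a hedged reconstruction: linear layers stand in for precomputed MERT and Wav2Vec2 features, and the learnable cross-aggregation module is reduced to a single cross-attention block; the anchor and positive share coherent vocal/instrumental streams while the negative pairs mismatched ones.

```python
import torch
import torch.nn as nn

class DualStreamDetector(nn.Module):
    def __init__(self, d: int = 256):
        super().__init__()
        self.enc_a = nn.Linear(1024, d)  # stand-in for MERT features
        self.enc_b = nn.Linear(1024, d)  # stand-in for Wav2Vec2 features
        self.cross = nn.MultiheadAttention(d, num_heads=4, batch_first=True)
        self.cls = nn.Linear(d, 1)

    def embed(self, feats_a, feats_b):
        a, b = self.enc_a(feats_a), self.enc_b(feats_b)
        fused, _ = self.cross(a, b, b)   # cross-aggregation of the two streams
        return fused.mean(dim=1)         # (B, d) clip embedding

    def forward(self, feats_a, feats_b):
        return self.cls(self.embed(feats_a, feats_b)).squeeze(-1)

bce = nn.BCEWithLogitsLoss()
triplet = nn.TripletMarginLoss(margin=1.0)

def clam_loss(model, anchor, positive, negative, labels, alpha: float = 0.5):
    """BCE on real/synthetic labels plus a triplet loss separating coherent
    from artificially mismatched vocal/instrumental pairings."""
    logits = model(*anchor)  # each of anchor/positive/negative: (feats_a, feats_b)
    emb = lambda pair: model.embed(*pair)
    return bce(logits, labels) + alpha * triplet(emb(anchor), emb(positive), emb(negative))
```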
- Research Report > Experimental Study (0.68)
- Research Report > New Finding (0.46)
- Media > Music (1.00)
- Leisure & Entertainment (1.00)
- Information Technology (1.00)
Scalable Multisubject Vital Sign Monitoring With mmWave FMCW Radar and FPGA Prototyping
Benny, Jewel, Moudhgalya, Narahari N., Khan, Mujeev, Meena, Hemant Kumar, Wajid, Mohd, Srivastava, Abhishek
In this work, we introduce an innovative approach to estimate the vital signs of multiple human subjects simultaneously and without contact using a Frequency Modulated Continuous Wave (FMCW) radar-based system. Radar sensing offers a promising non-contact route to heart rate (HR) and breathing rate (BR) measurement, with applications including sleep apnea detection [5], fall detection [6], and patient monitoring [7]; continuous-wave (CW) Doppler radar systems have significantly advanced this field [8] [9]. This work also explores the ambitious goal of extending this capability to an arbitrary number of subjects and details the associated challenges, encompassing both hardware and theoretical limitations. Supported by rigorous experimental results and discussions, the paper demonstrates the system's potential to redefine vital sign monitoring. An FPGA-based implementation is also presented as a proof of concept of an entirely hardware-based, portable solution, improving on previous works with 2.7x faster execution, 18.4% lower Look-Up Table (LUT) utilization, and over 7400x acceleration compared to its software counterpart.
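For readers unfamiliar with the underlying signal chain, the following numpy sketch shows the standard FMCW vital-sign pipeline such systems build on (illustrative parameters and function names, not the authors' FPGA design): range-FFT each chirp, track the phase of a subject's range bin over slow time, then read breathing and heart rate from separate frequency bands of the phase signal.

```python
import numpy as np

def vitals_from_cube(chirps: np.ndarray, frame_rate: float, range_bin: int):
    """chirps: (n_frames, n_samples) complex beat signal, one chirp per frame."""
    range_fft = np.fft.fft(chirps, axis=1)
    phase = np.unwrap(np.angle(range_fft[:, range_bin]))  # chest-displacement proxy
    spec = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / frame_rate)

    def peak_bpm(lo, hi):
        band = (freqs >= lo) & (freqs <= hi)
        return 60.0 * freqs[band][np.argmax(spec[band])]

    br = peak_bpm(0.1, 0.5)   # breathing: ~6-30 breaths/min
    hr = peak_bpm(0.8, 2.0)   # heart rate: ~48-120 beats/min
    return br, hr

# For multiple subjects, repeat per range (or range-angle) bin occupied by a
# person; keeping those bins separable is one of the hardware limits discussed.
```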
- Europe > Netherlands > South Holland > Delft (0.04)
- North America > United States > Texas (0.04)
- North America > United States > New Jersey > Hudson County > Hoboken (0.04)
- (4 more...)
Bharat Scene Text: A Novel Comprehensive Dataset and Benchmark for Indian Language Scene Text Understanding
De, Anik, Penamakuri, Abhirama Subramanyam, Yadav, Rajeev, Rathore, Aditya, Shah, Harshiv, Sharma, Devesh, Agarwal, Sagar, Kumar, Pravin, Mishra, Anand
Reading scene text, that is, text appearing in images, has numerous application areas, including assistive technology, search, and e-commerce. Although scene text recognition in English has advanced significantly and is often considered nearly a solved problem, Indian language scene text recognition remains an open challenge. This is due to script diversity, non-standard fonts, and varying writing styles, and, more importantly, the lack of high-quality datasets and open-source models. To address these gaps, we introduce the Bharat Scene Text Dataset (BSTD) - a large-scale and comprehensive benchmark for studying Indian Language Scene Text Recognition. It comprises more than 100K words that span 11 Indian languages and English, sourced from over 6,500 scene images captured across various linguistic regions of India. The dataset is meticulously annotated and supports multiple scene text tasks, including: (i) Scene Text Detection, (ii) Script Identification, (iii) Cropped Word Recognition, and (iv) End-to-End Scene Text Recognition. We evaluated state-of-the-art models originally developed for English by adapting (fine-tuning) them for Indian languages. Our results highlight the challenges and opportunities in Indian language scene text recognition. We believe that this dataset represents a significant step toward advancing research in this domain. All our models and data are open source.
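As an illustration of how cropped-word recognition on a benchmark like BSTD is typically scored (our example; the paper does not prescribe this code), the sketch below computes word accuracy and an edit-distance-based character recognition rate.

```python
# Standard scene-text metrics: exact word accuracy and character recognition
# rate (1 - normalized Levenshtein distance).
def edit_distance(a: str, b: str) -> int:
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def score(predictions: list[str], references: list[str]):
    word_acc = sum(p == r for p, r in zip(predictions, references)) / len(references)
    total_chars = sum(len(r) for r in references)
    errors = sum(edit_distance(p, r) for p, r in zip(predictions, references))
    return {"word_accuracy": word_acc, "char_recognition_rate": 1 - errors / total_chars}

# score(["जयपुर"], ["जयपुर"]) -> {"word_accuracy": 1.0, "char_recognition_rate": 1.0}
```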
- Transportation > Ground (0.46)
- Information Technology > Services (0.34)