When AI Gives Advice: Evaluating AI and Human Responses to Online Advice-Seeking for Well-Being
Kumar, Harsh, Chahal, Jasmine, Zhao, Yinuo, Zhang, Zeling, Wei, Annika, Tay, Louis, Anderson, Ashton
Seeking advice is a core human behavior that the Internet has reinvented twice: first through forums and Q&A communities that crowdsource public guidance, and now through large language models (LLMs) that deliver private, on-demand counsel at scale. Yet the quality of this synthesized LLM advice remains unclear. How does it compare, not only against arbitrary human comments, but against the wisdom of the online crowd? We conducted two studies (N = 210) in which experts compared top-voted Reddit advice with LLM-generated advice. LLM advice was ranked significantly higher overall and on effectiveness, warmth, and willingness to seek advice again. GPT-4o beat GPT-5 on all metrics except sycophancy, suggesting that benchmark gains need not improve advice-giving. In our second study, we examined how human and algorithmic advice could be combined, and found that human advice can be unobtrusively polished to compete with AI-generated comments. Finally, to surface user expectations, we ran an exploratory survey with undergraduates (N = 148) that revealed heterogeneous, persona-dependent preferences for agent qualities (e.g., coach-like: goal-focused structure; friend-like: warmth and humor). We conclude with design implications for advice-giving agents and ecosystems blending AI, crowd input, and expert oversight.
- North America > Canada > Ontario > Toronto (0.16)
- North America > United States > New York > New York County > New York City (0.05)
- Europe > Germany > Hamburg (0.04)
- (7 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > Strength High (0.93)
- Health & Medicine > Consumer Health (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (0.47)
- Education > Educational Setting > Higher Education (0.46)
Bridging Code Graphs and Large Language Models for Better Code Understanding
Chen, Zeqi, Chu, Zhaoyang, Gui, Yi, Guo, Feng, Wan, Yao, Shi, Chuan
Large Language Models (LLMs) have demonstrated remarkable performance in code intelligence tasks such as code generation, summarization, and translation. However, their reliance on linearized token sequences limits their ability to understand the structural semantics of programs. While prior studies have explored graph-augmented prompting and structure-aware pretraining, they either suffer from prompt length constraints or require task-specific architectural changes that are incompatible with large-scale instruction-following LLMs. To address these limitations, this paper proposes CGBridge, a novel plug-and-play method that enhances LLMs with Code Graph information through an external, trainable Bridge module. CGBridge first pre-trains a code graph encoder via self-supervised learning on a large-scale dataset of 270K code graphs to learn structural code semantics. It then trains an external module to bridge the modality gap among code, graph, and text by aligning their semantics through cross-modal attention mechanisms. Finally, the bridge module generates structure-informed prompts, which are injected into a frozen LLM, and is fine-tuned for downstream code intelligence tasks. Experiments show that CGBridge achieves notable improvements over both the original model and the graph-augmented prompting method. Specifically, it yields a 16.19% and 9.12% relative gain in LLM-as-a-Judge on code summarization, and a 9.84% and 38.87% relative gain in Execution Accuracy on code translation. Moreover, CGBridge achieves over 4x faster inference than LoRA-tuned models, demonstrating both effectiveness and efficiency in structure-aware code understanding.
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Austria > Vienna (0.14)
- (13 more...)
- Research Report > New Finding (0.46)
- Research Report > Promising Solution (0.34)
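The core of the bridge module described above is cross-modal attention: learnable prompt tokens attend over graph-encoder outputs to produce a structure-informed soft prompt for a frozen LLM. A minimal numpy sketch of that mechanism (dimensions and names are illustrative, not CGBridge's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)      # (n_queries, n_nodes)
    return softmax(scores, axis=-1) @ values    # (n_queries, d)

rng = np.random.default_rng(0)
d_model = 16
graph_nodes = rng.normal(size=(10, d_model))    # code-graph encoder outputs (toy)
prompt_queries = rng.normal(size=(4, d_model))  # learnable prompt tokens (toy)

# Structure-informed soft prompt: 4 vectors to prepend to the frozen LLM's
# input embeddings; only the bridge side would receive gradient updates.
soft_prompt = cross_attention(prompt_queries, graph_nodes, graph_nodes)
print(soft_prompt.shape)  # (4, 16)
```

The key design point is that the LLM stays frozen: structural information enters only through the prepended prompt vectors, which is what makes the module plug-and-play.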
Matryoshka Model Learning for Improved Elastic Student Models
Verma, Chetan, Timmaraju, Aditya Srinivas, Hsieh, Cho-Jui, Damle, Suyash, Bui, Ngot, Zhang, Yang, Chen, Wen, Liu, Xin, Jain, Prateek, Dhillon, Inderjit S
Industry-grade ML models are carefully designed to meet rapidly evolving serving constraints, which requires significant resources for model development. In this paper, we propose MatTA, a framework for training multiple accurate Student models using a novel Teacher-TA-Student recipe. TA models are larger versions of the Student models with higher capacity, and thus allow Student models to better relate to the Teacher model and also bring in more domain-specific expertise. Furthermore, multiple accurate Student models can be extracted from the TA model. Therefore, despite only one training run, our methodology provides multiple servable options to trade off accuracy for lower serving cost. We demonstrate the proposed method, MatTA, on proprietary datasets and models. Its practical efficacy is underscored by live A/B tests within a production ML system, demonstrating 20% improvement on a key metric. We also demonstrate our method on GPT-2 Medium, a public model, and achieve relative improvements of over 24% on SAT Math and over 10% on the LAMBADA benchmark.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > Canada > Ontario > Toronto (0.05)
- North America > United States > California > Santa Clara County > Mountain View (0.05)
- (3 more...)
- Education > Educational Technology > Educational Software (1.00)
- Education > Educational Setting (0.94)
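The Teacher-TA-Student recipe above chains two distillation hops: the TA is trained against the teacher, and each extracted student against the TA. A toy numpy sketch of the standard temperature-softened distillation loss that such a recipe would apply at each hop (logits and the two-hop wiring are illustrative, not MatTA's production setup):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def kd_loss(teacher_logits, student_logits, T=2.0):
    """Temperature-softened KL(teacher || student), the usual distillation loss."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)

teacher = np.array([4.0, 1.0, 0.2])
ta      = np.array([3.5, 1.2, 0.3])  # TA: larger than the student, distilled from the teacher
student = np.array([2.0, 1.5, 0.5])  # one servable slice extracted from the TA

# Two-hop recipe: the TA tracks the teacher, and the student tracks the TA.
print(kd_loss(teacher, ta), kd_loss(ta, student))
```

Because the students are slices of one TA, a single training run yields several servable accuracy/cost trade-off points, which is the efficiency claim in the abstract.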
Frailty-Aware Transformer for Recurrent Survival Modeling of Driver Retention in Ride-Hailing Platforms
Xu, Shuoyan, Zhang, Yu, Miller, Eric J.
Ride-hailing platforms are high-frequency, behavior-driven environments, yet although survival analysis has been widely applied to recurrent events in other domains, its use for modeling ride-hailing driver behavior remains largely unexplored. To the best of our knowledge, this study is the first to formulate driver idle behavior as a recurrent survival process using large-scale platform data. We propose a survival analysis framework that uses a Transformer-based temporal encoder with causal masking to capture long-term temporal dependencies and driver-specific embeddings to represent latent individual characteristics, modeling how historical idle sequences influence the current risk of leaving the platform via trip acceptance or log-off and substantially improving personalized prediction of driver retention risk. The model is validated on datasets from the City of Toronto over the period January 2 to March 13, 2020. The results show that the proposed Frailty-Aware Cox Transformer (FACT) delivers the highest time-dependent C-indices and the lowest Brier Scores across early, median, and late follow-up, demonstrating its robustness in capturing evolving risk over a driver's lifecycle. This study enables operators to optimize retention strategies and helps policy makers assess shared mobility's role in equitable and integrated transportation systems. Shared mobility services, such as ride-hailing, car-sharing, and bike-sharing, are an increasingly prominent component of contemporary transportation systems and are central to the broader concept of Mobility as a Service (MaaS) [1], which aims to integrate various forms of transport into a unified and user-centric platform.
- North America > Canada > Ontario > Toronto (0.06)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > India (0.04)
- Asia > China (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
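The training objective behind a Cox-style model like FACT is the negative log partial likelihood over risk sets: at each observed event, the model's risk score is compared against everyone still at risk. A minimal numpy sketch (Breslow form, ignoring ties and the frailty term; variable names are illustrative):

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk_scores, times, events):
    """Breslow-style negative log partial likelihood.

    risk_scores: model outputs (e.g. from a Transformer encoder), one per spell
    times:       observed idle-spell durations
    events:      1 if the spell ended in the event of interest, 0 if censored
    """
    order = np.argsort(-times)                    # descending time: earlier rows
    scores = risk_scores[order]                   # have larger risk sets below them
    ev = events[order]
    log_risk_set = np.logaddexp.accumulate(scores)  # log sum of exp(score) over risk set
    return float(-np.sum((scores - log_risk_set)[ev == 1]))

times = np.array([5.0, 8.0, 3.0, 10.0])
events = np.array([1, 0, 1, 1])
risk = np.array([0.4, -0.2, 1.1, -0.5])
print(cox_neg_log_partial_likelihood(risk, times, events))
```

In a frailty-aware variant, a per-driver random effect would shift each driver's risk score; the transformer encoder's job is to turn the idle-spell history into `risk_scores`.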
EGMOF: Efficient Generation of Metal-Organic Frameworks Using a Hybrid Diffusion-Transformer Architecture
Han, Seunghee, Kang, Yeonghun, Bae, Taeun, Bernales, Varinia, Aspuru-Guzik, Alan, Kim, Jihan
Designing materials with targeted properties remains challenging due to the vastness of chemical space and the scarcity of property-labeled data. While recent advances in generative models offer a promising way for inverse design, most approaches require large datasets and must be retrained for every new target property. Here, we introduce EGMOF (Efficient Generation of MOFs), a hybrid diffusion-transformer framework that overcomes these limitations through a modular, descriptor-mediated workflow. EGMOF decomposes inverse design into two steps: (1) a one-dimensional diffusion model (Prop2Desc) that maps desired properties to chemically meaningful descriptors, followed by (2) a transformer model (Desc2MOF) that generates structures from these descriptors. This modular hybrid design enables minimal retraining and maintains high accuracy even under small-data conditions. On a hydrogen uptake dataset, EGMOF achieved over 95% validity and an 84% hit rate, representing improvements of up to 57% in validity and 14% in hit rate compared to existing methods, while remaining effective with only 1,000 training samples. Moreover, our model successfully performed conditional generation across 29 diverse property datasets, including CoREMOF, QMOF, and text-mined experimental datasets, whereas previous models have not. This work presents a data-efficient, generalizable approach to the inverse design of diverse MOFs and highlights the potential of modular inverse design workflows for broader materials discovery.
- North America > Canada > Ontario > Toronto (0.17)
- Asia > South Korea > Daejeon > Daejeon (0.04)
- Workflow (1.00)
- Research Report > New Finding (0.46)
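The descriptor-mediated decomposition above can be made concrete with a toy two-stage pipeline: a property-to-descriptor stage followed by a descriptor-to-structure stage. Everything here is a stand-in (the function bodies, descriptor layout, and candidate library are invented for illustration, not EGMOF's actual models):

```python
import numpy as np

rng = np.random.default_rng(42)

def prop2desc(target_property, steps=50):
    """Stand-in for the 1-D diffusion model: iteratively refine a random
    descriptor vector toward one consistent with the target property."""
    x = rng.normal(size=3)                      # toy descriptors (hypothetical)
    goal = np.array([target_property, 2.0, 1.5])
    for _ in range(steps):
        x = x + 0.1 * (goal - x)                # toy denoising-style update
    return x

def desc2mof(descriptors):
    """Stand-in for the transformer decoder: pick the candidate whose
    descriptor profile is closest to the requested one."""
    library = {"MOF-A": np.array([1.0, 2.1, 1.4]),
               "MOF-B": np.array([3.0, 1.0, 0.5])}
    return min(library, key=lambda k: np.linalg.norm(library[k] - descriptors))

desc = prop2desc(target_property=1.0)
print(desc2mof(desc))
```

The point of the modularity is visible even in the toy: to target a new property, only the first stage needs retraining, while the descriptor-to-structure stage is reused.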
Plan-and-Write: Structure-Guided Length Control for LLMs without Model Retraining
Akinfaderin, Adewale, Subramanian, Shreyas, Sehwag, Akarsha
Length control in Large Language Models (LLMs) is a crucial but under-addressed challenge, with applications ranging from voice interfaces requiring concise responses to research summaries needing comprehensive outputs. Current approaches to length control, including Regularized DPO, Length-Instruction Fine Tuning, and tool-augmented methods, typically require expensive model retraining or complex inference-time tooling. This paper presents a prompt engineering methodology that enables precise length control without model retraining. Our structure-guided approach implements deliberate planning and word counting mechanisms within the prompt, encouraging the model to carefully track and adhere to specified length constraints. Comprehensive evaluations across six state-of-the-art LLMs demonstrate that our method significantly improves length fidelity for several models compared to standard prompting when applied to document summarization tasks, particularly for shorter-to-medium length constraints. The proposed technique shows varying benefits across different model architectures, with some models demonstrating up to 37.6% improvement in length adherence. Quality evaluations further reveal that our approach maintains or enhances overall output quality compared to standard prompting techniques. Our approach provides an immediately deployable solution for applications requiring precise length control, particularly valuable for production environments where model retraining is impractical or cost-prohibitive.
- North America > Canada > Ontario > Toronto (0.06)
- North America > United States > Washington > King County > Seattle (0.05)
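Since the method is pure prompt engineering, it can be sketched directly as a prompt template that embeds the planning and word-counting steps the abstract describes. The wording below is illustrative, not the paper's exact template:

```python
def length_controlled_prompt(document, target_words):
    """Structure-guided prompt: ask the model to plan before writing and to
    track its own word count while drafting (illustrative phrasing)."""
    return (
        f"Summarize the document below in exactly {target_words} words.\n"
        "Step 1: List the key points to cover.\n"
        f"Step 2: Plan roughly how many words each point gets (total {target_words}).\n"
        "Step 3: Write the summary, counting words as you go.\n"
        "Step 4: Verify the word count; revise if it is off.\n\n"
        f"Document:\n{document}"
    )

prompt = length_controlled_prompt("LLMs are large neural networks ...", 50)
print(prompt)
```

No retraining or inference-time tooling is involved: the constraint lives entirely in the prompt, which is why the approach is immediately deployable.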
Schema for In-Context Learning
Chen, Pan, Chen, Shaohong, Wang, Mark, Leong, Shi Xuan, Fung, Priscilla, Bernales, Varinia, Aspuru-Guzik, Alan
In-Context Learning (ICL) enables transformer-based language models to adapt to new tasks by conditioning on demonstration examples. However, traditional example-driven in-context learning lacks explicit modules for knowledge retrieval and transfer at the abstraction level. Inspired by cognitive science, specifically schema theory, which holds that humans interpret new information by activating pre-existing mental frameworks (schemas) to structure understanding, we introduce SCHEMA ACTIVATED IN CONTEXT LEARNING (SA-ICL). This framework extracts a representation of the building blocks of cognition for the reasoning process instilled from prior examples, creating an abstracted schema, a lightweight, structured template of key inferential steps and their relationships, which is then used to augment a model's reasoning process when presented with a novel question. We demonstrate that a broad range of large language models (LLMs) lack the capacity to form and utilize internal schema-based learning representations implicitly, but instead benefit significantly from explicit schema-based scaffolding. Across chemistry and physics questions from the GPQA dataset, our experiments show that SA-ICL consistently boosts performance, by up to 36.19%, when the single demonstration example is of high quality, while simultaneously reducing reliance on the number of demonstrations and enhancing interpretability. SCHEMA ACTIVATED IN CONTEXT LEARNING not only bridges disparate ICL strategies ranging from pattern priming to Chain-of-Thought prompting, but also paves a new path for enhancing human-like reasoning in LLMs.
- North America > Canada > Ontario > Toronto (0.15)
- Europe > Austria > Vienna (0.14)
- North America > United States > Washington > King County > Seattle (0.04)
- (4 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science > Problem Solving (1.00)
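The schema-then-apply flow above can be sketched as plain prompt assembly: abstract a worked demonstration into a template of inferential steps, then prepend that template to a new question. The step wording is illustrative; in SA-ICL the schema is extracted by the model itself rather than hand-written:

```python
def build_schema(demonstration_steps):
    """Abstract a worked example into a template of key inferential steps."""
    lines = [f"{i + 1}. {step}" for i, step in enumerate(demonstration_steps)]
    return "Reasoning schema:\n" + "\n".join(lines)

def schema_augmented_prompt(schema, question):
    """Augment a novel question with the abstracted schema."""
    return f"{schema}\n\nApply this schema to the new question:\n{question}"

schema = build_schema([
    "Identify the governing principle (e.g. a conservation law).",
    "Write the relevant equation and define each symbol.",
    "Substitute known quantities and solve.",
    "Sanity-check units and magnitude.",
])
print(schema_augmented_prompt(schema, "A 2 kg mass falls 5 m. Find its speed."))
```

Unlike a raw demonstration, the schema carries only the structure of the reasoning, which is what lets one high-quality example generalize across questions.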
ProtoTopic: Prototypical Network for Few-Shot Medical Topic Modeling
Licht, Martin, Ketabi, Sara, Khalvati, Farzad
Topic modeling is a useful tool for analyzing large corpora of written documents, particularly academic papers. Despite a wide variety of proposed topic modeling techniques, these techniques do not perform well when applied to medical texts. This can be due to the low number of documents available for some topics in the healthcare domain. In this paper, we propose ProtoTopic, a prototypical network-based topic model used for topic generation for a set of medical paper abstracts. Prototypical networks are efficient, explainable models that make predictions by computing distances between input datapoints and a set of prototype representations, making them particularly effective in low-data or few-shot learning scenarios. With ProtoTopic, we demonstrate improved topic coherence and diversity compared to two topic modeling baselines used in the literature, demonstrating the ability of our model to generate medically relevant topics even with limited data.
- Information Technology > Artificial Intelligence > Natural Language (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.95)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
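The prototypical-network mechanism the abstract relies on is compact enough to sketch directly: each class prototype is the mean embedding of its few support examples, and a query is assigned to the nearest prototype. The toy 2-D embeddings below are illustrative, not ProtoTopic's learned abstract embeddings:

```python
import numpy as np

def prototypes(embeddings, labels):
    """Class prototype = mean embedding of that class's support examples."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(x, protos):
    """Assign a query embedding to the nearest prototype (Euclidean distance)."""
    return min(protos, key=lambda c: np.linalg.norm(x - protos[c]))

# Toy 2-D "abstract embeddings" for two topics, few-shot style.
emb = np.array([[0.0, 0.1], [0.1, 0.0],    # topic 0 support examples
                [1.0, 1.1], [1.1, 0.9]])   # topic 1 support examples
labels = np.array([0, 0, 1, 1])
protos = prototypes(emb, labels)
print(classify(np.array([0.9, 1.0]), protos))  # → 1
```

Because a prototype is just a mean over a handful of examples, the model needs no large per-topic training set, which is the low-data advantage claimed for medical corpora.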
GroundSight: Augmenting Vision-Language Models with Grounding Information and De-hallucination
Chen, Xinxi, Chen, Tianyang, Hong, Lijia
We propose a method to improve Visual Question Answering (VQA) with Retrieval-Augmented Generation (RAG) by introducing text-grounded object localization. Rather than retrieving information based on the entire image, our approach enables the model to generate a bounding box around the object most relevant to the question, allowing for targeted image cropping and focused retrieval. This reduces background noise, improves alignment between visual and textual cues, and helps mitigate hallucinations. Our RAG method enhances context-aware VQA responses, increasing accuracy from 22.19% to 25.64% (an absolute gain of 3.45 percentage points) over the baseline Llama-3.2-Vision-11B agent. We also propose a de-hallucination method based on question type that effectively reduces the hallucination rate from 65.79% to 13.88% and improves the truthfulness score.
- North America > Canada > Ontario > Toronto (0.06)
- North America > United States > New Jersey (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > China > Hong Kong (0.04)
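The crop-then-retrieve step above reduces to simple array slicing once a bounding box exists. In GroundSight the VLM generates the box from the question; here it is given, and the image is a blank placeholder:

```python
import numpy as np

def crop(image, bbox):
    """Crop an (H, W, C) image to bbox = (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = bbox
    return image[y0:y1, x0:x1]

image = np.zeros((100, 100, 3), dtype=np.uint8)  # placeholder 100x100 RGB image
bbox = (20, 30, 60, 80)                          # e.g. the object the question asks about
region = crop(image, bbox)
print(region.shape)  # (50, 40, 3) -- retrieval now sees only this patch
```

Feeding only the cropped region to the retriever is what filters out the background noise that would otherwise dilute the retrieved context.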
Score the Steps, Not Just the Goal: VLM-Based Subgoal Evaluation for Robotic Manipulation
ElMallah, Ramy, Chhajer, Krish, Lee, Chi-Guhn
Robot learning papers typically report a single binary success rate (SR), which obscures where a policy succeeds or fails along a multi-step manipulation task. We argue that subgoal-level reporting should become routine: for each trajectory, a vector of per-subgoal SRs that makes partial competence visible (e.g., grasp vs. pour). We propose a blueprint for StepEval, a cost-aware plug-in evaluation framework that utilizes vision-language models (VLMs) as automated judges of subgoal outcomes from recorded images or videos. Rather than proposing new benchmarks or APIs, our contribution is to outline design principles for a scalable, community-driven open-source project. In StepEval, the primary artifact for policy evaluation is the per-subgoal SR vector; however, other quantities (e.g., latency or cost estimates) are also considered for framework-optimization diagnostics to help the community tune evaluation efficiency and accuracy when ground-truth subgoal success labels are available. We discuss how such a framework can remain model-agnostic, support single- or multi-view inputs, and be lightweight enough to adopt across labs. The intended contribution is a shared direction: a minimal, extensible seed that invites open-source contributions, so that scoring the steps, not just the final goal, becomes a standard and reproducible practice.
- North America > Canada > Ontario > Toronto (0.15)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > Japan > Shikoku > Kagawa Prefecture > Takamatsu (0.04)
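The per-subgoal SR vector proposed above is straightforward to compute from judged trajectories, and the toy below also shows what the single binary SR hides. Subgoal names and outcomes are illustrative; in StepEval the 0/1 judgments would come from a VLM judge:

```python
import numpy as np

# Per-subgoal outcomes for 4 trajectories of a pour task (1 = judge says
# the subgoal succeeded), in temporal order.
subgoals = ["reach", "grasp", "lift", "pour"]
outcomes = np.array([[1, 1, 1, 1],
                     [1, 1, 1, 0],
                     [1, 1, 0, 0],
                     [1, 0, 0, 0]])

sr_vector = outcomes.mean(axis=0)           # per-subgoal success rates
overall_sr = outcomes.all(axis=1).mean()    # the usual single binary SR

print(dict(zip(subgoals, sr_vector)))  # reach 1.0, grasp 0.75, lift 0.5, pour 0.25
print(overall_sr)                      # 0.25 -- hides *where* failures happen
```

The vector makes partial competence visible: this policy reaches reliably but degrades at grasping and pouring, a diagnosis the scalar 0.25 cannot provide.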