Collaborating Authors

 Surana, Harshul


NeuroLit Navigator: A Neurosymbolic Approach to Scholarly Article Searches for Systematic Reviews

arXiv.org Artificial Intelligence

The introduction of Large Language Models (LLMs) has significantly impacted various fields, including education, for example, by enabling the creation of personalized learning materials. However, their use in Systematic Reviews (SRs) reveals limitations such as restricted access to specialized vocabularies, lack of domain-specific reasoning, and a tendency to generate inaccurate information. Existing SR tools often rely on traditional NLP methods and fail to address these issues adequately. To overcome these challenges, we developed the "NeuroLit Navigator," a system that combines domain-specific LLMs with structured knowledge sources like Medical Subject Headings (MeSH) and the Unified Medical Language System (UMLS). This integration enhances query formulation, expands search vocabularies, and deepens search scopes, enabling more precise searches. Deployed at multiple universities and tested by over a dozen librarians, the NeuroLit Navigator has reduced the time required for initial literature searches by 90%. Despite this efficiency, the initial set of articles retrieved can vary in relevance and quality. Nonetheless, the system has greatly improved the reproducibility of search results, demonstrating its potential to support librarians in the SR process.
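The abstract does not detail the implementation, but the core neurosymbolic idea, expanding a free-text query with controlled-vocabulary terms before searching, can be sketched as follows. The mesh_synonyms table below is a hypothetical stand-in for a real MeSH/UMLS lookup, not the system's actual interface, and the boolean query format is illustrative rather than the tool's real output.

    # Minimal sketch of symbolic query expansion for literature search.
    # mesh_synonyms is a hypothetical stand-in for a MeSH/UMLS lookup;
    # the NeuroLit Navigator's actual integration is not described at
    # this level of detail in the abstract.

    mesh_synonyms = {
        "heart attack": ["Myocardial Infarction", "cardiac infarction"],
        "high blood pressure": ["Hypertension", "elevated blood pressure"],
    }

    def expand_query(user_terms: list[str]) -> str:
        """Build a boolean search string: OR each term with its
        controlled-vocabulary synonyms, AND the resulting groups."""
        groups = []
        for term in user_terms:
            variants = [term] + mesh_synonyms.get(term.lower(), [])
            groups.append("(" + " OR ".join(f'"{v}"' for v in variants) + ")")
        return " AND ".join(groups)

    if __name__ == "__main__":
        print(expand_query(["heart attack", "high blood pressure"]))
        # ("heart attack" OR "Myocardial Infarction" OR ...) AND ("high blood pressure" OR ...)

In the deployed system, the static table would presumably be replaced by live MeSH/UMLS services, with a domain-specific LLM guiding which expansions to include.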


Exploring the Potential of Large Language Models for Assisting with Mental Health Diagnostic Assessments -- The Depression and Anxiety Case

arXiv.org Artificial Intelligence

Large language models (LLMs) are increasingly attracting the attention of healthcare professionals for their potential to assist in diagnostic assessments, which could alleviate the strain on the healthcare system caused by a high patient load and a shortage of providers. For LLMs to be effective in supporting diagnostic assessments, it is essential that they closely replicate the standard diagnostic procedures used by clinicians. In this paper, we specifically examine the diagnostic assessment processes described in the Patient Health Questionnaire-9 (PHQ-9) for major depressive disorder (MDD) and the Generalized Anxiety Disorder-7 (GAD-7) questionnaire for generalized anxiety disorder (GAD). We investigate various prompting and fine-tuning techniques to guide both proprietary and open-source LLMs in adhering to these processes, and we evaluate the agreement between LLM-generated diagnostic outcomes and expert-validated ground truth. For fine-tuning, we utilize the MentalLLaMA and Llama models, while for prompting, we experiment with proprietary models like GPT-3.5 and GPT-4o, as well as open-source models such as Llama-3.1-8B and Mixtral-8x7B.
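The paper's exact prompts and fine-tuning recipes are not reproduced here, but the scoring rule an LLM must replicate for the PHQ-9 is fixed: nine items scored 0-3 are summed, and the total (0-27) maps to standard severity bands. A minimal Python sketch follows, with the helper score_phq9 a hypothetical stand-in for whatever post-processing the authors apply to LLM outputs.

    # Deterministic PHQ-9 scoring that an LLM-assisted assessment must
    # reproduce: nine items scored 0-3, summed, then mapped to the
    # standard severity bands. How item scores are elicited from an LLM
    # (prompting vs. fine-tuning) is the paper's subject, not shown here.

    SEVERITY_BANDS = [
        (0, 4, "minimal"),
        (5, 9, "mild"),
        (10, 14, "moderate"),
        (15, 19, "moderately severe"),
        (20, 27, "severe"),
    ]

    def score_phq9(item_scores: list[int]) -> tuple[int, str]:
        """Sum nine 0-3 item scores and return (total, severity label)."""
        if len(item_scores) != 9 or any(s not in range(4) for s in item_scores):
            raise ValueError("PHQ-9 requires nine item scores in 0..3")
        total = sum(item_scores)
        for lo, hi, label in SEVERITY_BANDS:
            if lo <= total <= hi:
                return total, label
        raise AssertionError("unreachable: total is always within 0..27")

    if __name__ == "__main__":
        print(score_phq9([2, 1, 3, 0, 1, 2, 1, 0, 1]))  # (11, 'moderate')

Agreement with expert-validated ground truth can then be measured both at the item level and on the resulting severity label.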