Generative AI in Science: Applications, Challenges, and Emerging Questions
Harries, Ryan, Lawson, Cornelia, Shapira, Philip
This paper examines the impact of Generative Artificial Intelligence (GenAI) on scientific practices, conducting a qualitative review of selected literature to explore its applications, benefits, and challenges. The review draws on the OpenAlex publication database, using a Boolean search approach to identify scientific literature related to GenAI (including large language models and ChatGPT). Thirty-nine highly cited papers and commentaries are reviewed and qualitatively coded. Results are categorized by GenAI applications in science, scientific writing, medical practice, and education and training. The analysis finds that while there is a rapid adoption of GenAI in science and science practice, its long-term implications remain unclear, with ongoing uncertainties about its use and governance. The study provides early insights into GenAI's growing role in science and identifies questions for future research in this evolving field.
- North America > United States (0.14)
- Europe > United Kingdom > England > Greater Manchester > Manchester (0.05)
- Asia > Pakistan (0.04)
- Research Report > New Finding (0.93)
- Research Report > Experimental Study (0.66)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.86)
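The abstract describes identifying GenAI literature via a Boolean search against the OpenAlex publication database. The paper's exact query is not given; as a minimal sketch, a Boolean OR query could be assembled for the real OpenAlex works API like this (the search terms are illustrative assumptions, not the study's actual query):

```python
# Sketch: build a Boolean search URL for the OpenAlex works API.
# The terms below are illustrative; the paper's exact query is not stated.
from urllib.parse import urlencode

OPENALEX_WORKS = "https://api.openalex.org/works"

def build_genai_query(terms, per_page=25):
    """Combine quoted search terms with OR into an OpenAlex works query URL."""
    boolean_query = " OR ".join(f'"{t}"' for t in terms)
    params = {"search": boolean_query, "per-page": per_page}
    return f"{OPENALEX_WORKS}?{urlencode(params)}"

url = build_genai_query(["generative AI", "large language model", "ChatGPT"])
print(url)
```

Fetching the URL returns paginated JSON results, from which highly cited works could then be filtered for qualitative coding.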
Limits of trust in medical AI
This is a pre-print version of an article published as: Hatherley, Joshua. Please cite that version. Abstract: Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI's progress in medicine, however, has led to concerns regarding the potential effects of this technology upon relationships of trust in clinical practice. In this paper, I will argue that there is merit to these concerns, since AI systems can be relied upon, and are capable of reliability, but cannot be trusted, and are not capable of trustworthiness. Insofar as patients are required to rely upon AI systems for their medical decision-making, there is potential for this to produce a deficit of trust in relationships in clinical practice.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Europe > Netherlands (0.04)
BESTMVQA: A Benchmark Evaluation System for Medical Visual Question Answering
Hong, Xiaojie, Song, Zixin, Li, Liangzhi, Wang, Xiaoli, Liu, Feiyan
Medical Visual Question Answering (Med-VQA) is an important task in the healthcare industry: answering a natural language question about a medical image. Existing VQA techniques in information systems can be directly applied to the task. However, they often suffer from (i) the data insufficiency problem, which makes it difficult to train state-of-the-art (SOTA) models for this domain-specific task, and (ii) the reproducibility problem, in that many existing models have not been thoroughly evaluated in a unified experimental setup. To address these issues, this paper develops a Benchmark Evaluation SysTem for Medical Visual Question Answering, denoted BESTMVQA. Given self-collected clinical data, our system provides a useful tool for users to automatically build Med-VQA datasets, which helps overcome the data insufficiency problem. Users can also conveniently select from a wide spectrum of SOTA models in our model library to perform a comprehensive empirical study. With simple configurations, our system automatically trains and evaluates the selected models over a benchmark dataset and reports comprehensive results that users can use to develop new techniques or inform medical practice. Limitations of existing work are overcome (i) by the data generation tool, which automatically constructs new datasets from unstructured clinical data, and (ii) by evaluating SOTA models on benchmark datasets in a unified experimental setup. The demonstration video of our system can be found at https://youtu.be/QkEeFlu1x4A. Our code and data will be available soon.
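The abstract's "simple configurations" workflow (select models, evaluate over a dataset, report results) can be sketched as a tiny config-driven harness. The model names, config format, and placeholder predictor below are illustrative assumptions, not BESTMVQA's actual interface:

```python
# Toy sketch of a config-driven Med-VQA evaluation harness.
# Model names and the config layout are hypothetical stand-ins.

def model_predict(model_name, question):
    # Stand-in for a trained Med-VQA model's answer to a question.
    return "yes"

def evaluate(model_name, dataset):
    """Accuracy of a model over (question, answer) pairs."""
    correct = sum(1 for q, a in dataset if model_predict(model_name, q) == a)
    return correct / len(dataset)

config = {
    "models": ["MEVF", "VQAMix"],  # hypothetical model-library entries
    "dataset": [("Is there a fracture?", "yes"), ("Is the lung clear?", "no")],
}

report = {m: evaluate(m, config["dataset"]) for m in config["models"]}
print(report)
```

A real system would replace `model_predict` with trained models and add training, dataset construction, and richer metrics around the same select-evaluate-report loop.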
77% of Doctors Believe Chatbots Will Treat Patients Within the Next 10 Years
AUSTIN, Texas--(BUSINESS WIRE)--AI has advanced to the point that ChatGPT can pass the U.S. Medical Licensing Exam, opening the door to practice integration and patient treatment. According to Software Advice's 2023 Medical Chatbot Survey, nearly half of doctors (45%) believe ChatGPT is a valuable tool, and 77% believe that AI-powered chatbots will be able to treat patients safely within the next 10 years. Today's chatbots are mostly used for administrative work at medical practices, where they excel at automating routine tasks. The three most common patient uses for medical chatbots are scheduling appointments (72%), requesting prescription refills (66%), and providing requested data such as medical history (63%). A notable 46% of chatbots are currently being used to assess symptoms and determine whether a patient needs immediate assistance or can wait for an appointment, and we predict this number will grow rapidly in the next few years.
Evaluating GPT-4 and ChatGPT on Japanese Medical Licensing Examinations
Kasai, Jungo, Kasai, Yuhei, Sakaguchi, Keisuke, Yamada, Yutaro, Radev, Dragomir
As large language models (LLMs) gain popularity among speakers of diverse languages, we believe that it is crucial to benchmark them to better understand model behaviors, failures, and limitations in languages beyond English. In this work, we evaluate LLM APIs (ChatGPT, GPT-3, and GPT-4) on the Japanese national medical licensing examinations from the past five years, including the current year. Our team comprises native Japanese-speaking NLP researchers and a practicing cardiologist based in Japan. Our experiments show that GPT-4 outperforms ChatGPT and GPT-3 and passes all six years of the exams, highlighting LLMs' potential in a language that is typologically distant from English. However, our evaluation also exposes critical limitations of the current LLM APIs. First, LLMs sometimes select prohibited choices that should be strictly avoided in medical practice in Japan, such as suggesting euthanasia. Further, our analysis shows that the API costs are generally higher and the maximum context size is smaller for Japanese because of the way non-Latin scripts are currently tokenized in the pipeline. We release our benchmark as IgakuQA, along with all model outputs and exam metadata. We hope that our results and benchmark will spur progress on more diverse applications of LLMs. Our benchmark is available at https://github.com/jungokasai/IgakuQA.
- North America > United States (0.46)
- Asia > Middle East > Jordan (0.04)
- Asia > Japan > Honshū > Tōhoku (0.04)
- Health & Medicine > Therapeutic Area > Cardiology/Vascular Diseases (0.67)
- Health & Medicine > Therapeutic Area > Endocrinology > Diabetes (0.46)
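The abstract's point that Japanese is costlier to tokenize can be illustrated at the byte level: under UTF-8, ASCII letters occupy one byte each while most kanji and kana occupy three, so byte-level tokenizers start from roughly three times more bytes per character. A minimal sketch (the example words are illustrative; real token counts depend on the specific tokenizer):

```python
# Why non-Latin scripts often cost more per character: UTF-8 encodes
# ASCII letters as 1 byte each, but most Japanese characters as 3 bytes.
english = "fever"    # 5 characters
japanese = "発熱"     # 2 characters, same meaning ("fever")

def bytes_per_char(text):
    return len(text.encode("utf-8")) / len(text)

print(bytes_per_char(english))   # 1.0
print(bytes_per_char(japanese))  # 3.0
```

Actual tokenizers merge frequent byte sequences, so the per-token gap is smaller than 3x, but the asymmetry the abstract describes remains.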
Paging Dr. AI? What ChatGPT and artificial intelligence could mean for the future of medicine
Without cracking a single textbook, without spending a day in medical school, the co-author of a preprint study correctly answered enough practice questions that it would have passed the real US Medical Licensing Examination. But the test-taker wasn't a member of Mensa or a medical savant; it was the artificial intelligence ChatGPT. The tool, which was created to answer user questions in a conversational manner, has generated so much buzz that doctors and scientists are trying to determine what its limitations are – and what it could do for health and medicine. ChatGPT, or Chat Generative Pre-trained Transformer, is a natural language processing tool driven by artificial intelligence. The technology, created by San Francisco-based OpenAI and launched in November 2022, is not like a well-spoken search engine.
- Health & Medicine > Diagnostic Medicine > Imaging (0.31)
- Health & Medicine > Therapeutic Area > Immunology (0.30)
The amazing power of "machine eyes"
Today's report on AI analysis of retinal vessel images to help predict the risk of heart attack and stroke, from over 65,000 UK Biobank participants, reinforces a growing body of evidence that deep neural networks can be trained to "interpret" medical images far beyond what was anticipated. Add that finding to last week's multinational study of deep learning on retinal photos to detect Alzheimer's disease with good accuracy. In this post I am going to briefly review what has already been gleaned from two classic medical images, the retina and the electrocardiogram (ECG), as representative of the exciting capability of machine vision to "see" well beyond human limits. Obviously, machines aren't really seeing or interpreting and don't have eyes in the human sense, but they can be trained on hundreds of thousands (or millions) of images to produce outputs that are extraordinary. I hope when you've read this you'll agree this is a particularly striking advance, one that has not yet been actualized in medical practice but has enormous potential.
Benefits of Artificial Intelligence in Healthcare You Should Know About
The sheer amount of data needed to make consistently accurate medical diagnoses is staggering. That's why many healthcare organizations are adopting artificial intelligence to improve decision-making and make nearly every aspect of running a practice easier and more efficient. According to Gartner, "The AI/smart machine era will be the most disruptive in the history of IT. … Eventually, these advances will redefine what it means to be a physician and a patient." Whether it's using machine learning to help with diagnoses or automation tools to communicate with patients, artificial intelligence can streamline medical processes so you have more time to focus on what's important: helping your patients. If you have never considered using artificial intelligence for your practice, don't trust artificial intelligence, or think your patients might not trust artificial intelligence, it's time to reconsider. A recent Software Advice survey* demonstrates that most patients trust the application of artificial intelligence in healthcare.
The key elements of healthcare will be technological innovations
Healthcare technology has shifted in recent years, and in the healthcare industry the pandemic has been the greatest accelerator of innovative technical implementations. To be sure, current technological advancements have helped the healthcare industry thrive. Beacons for crowd management and machine learning devices for disease detection and treatment are just two examples of high-end technology that are now critical pieces of a healthcare unit. This adoption of modern technology has allowed medical staff to devote more time to value-added tasks, such as thinking about how to improve patient care.
Diagnoss launches AI assistant to reduce medical coding errors
Startup Diagnoss has developed an artificial intelligence-based coding assistant to help automate the painstaking process of medical coding and billing. The Diagnoss AI medical coding engine acts as a "sidebar" to electronic health records (EHRs) and uses machine learning to improve a clinician's accuracy. The tool provides real-time feedback to medical practices during the administrative process and helps to reduce coding errors on claims. Abboud Chaballout, founder and CEO of Berkeley, California-based Diagnoss, compares the AI tool to an assistant whispering in a doctor's ear. The AI tool works similarly to the Grammarly AI grammar-checking tool.
- North America > United States > California > Alameda County > Berkeley (0.25)
- North America > United States > California > San Francisco County > San Francisco (0.05)
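Diagnoss's actual engine is a proprietary machine learning system; as a toy illustration of the "real-time feedback on coding" idea, free-text diagnosis phrases can be fuzzily matched against a code table. The three-entry ICD-10 lookup below is a hypothetical stand-in, and `difflib` fuzzy matching substitutes for the real ML model:

```python
# Toy sketch of real-time medical-coding feedback: match a free-text
# diagnosis phrase to a small, hypothetical ICD-10 lookup table.
# Diagnoss's real engine uses machine learning; this is a stand-in.
import difflib

ICD10 = {
    "type 2 diabetes mellitus": "E11.9",
    "essential hypertension": "I10",
    "acute bronchitis": "J20.9",
}

def suggest_code(phrase, cutoff=0.6):
    """Return the best-matching (description, code) pair, or None."""
    match = difflib.get_close_matches(phrase.lower(), ICD10, n=1, cutoff=cutoff)
    return (match[0], ICD10[match[0]]) if match else None

print(suggest_code("type 2 diabetes"))   # close match: suggests E11.9
print(suggest_code("unrelated note"))    # no match above the cutoff
```

A sidebar assistant of the kind described would run a check like this as the clinician types, flagging claims whose text and assigned code disagree.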