Collaborating Authors

RSNA


Multi-Granularity Cross-modal Alignment for Generalized Medical Visual Representation Learning (Supplementary Material)

Neural Information Processing Systems

We use the open-source MIMIC-CXR repository to extract the impression and findings sections from each report. Following [9], we keep only sequences of alphanumeric characters, drop all other characters and symbols, and remove reports containing fewer than 3 tokens. Following common practice in ViT [5], we split each radiograph into 16×16 patches, which results in 196 visual tokens per image. The instance-level projection layer is a two-layer Multilayer Perceptron (MLP) with Batch Normalization [10] and a ReLU activation function. Additionally, we apply a frozen Batch Normalization layer after the MLP to obtain instance-level embeddings.
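The report cleaning and patch-count arithmetic above can be sketched as follows. This is a minimal illustration, not the authors' code; the 224×224 input size is inferred from the stated 196 tokens at a 16×16 patch size (14² = 196), and the function names are our own.

```python
import re

def clean_report(text, min_tokens=3):
    """Keep only alphanumeric token sequences, dropping all other
    characters and symbols; discard reports with fewer than
    `min_tokens` tokens (returned as None), as described above."""
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    if len(tokens) < min_tokens:
        return None
    return " ".join(tokens)

def num_visual_tokens(image_size=224, patch_size=16):
    """ViT-style patching: a square image split into patch_size x
    patch_size patches yields (image_size // patch_size) ** 2 tokens.
    With the assumed 224x224 input and 16x16 patches, this is 196."""
    return (image_size // patch_size) ** 2
```

For example, `clean_report("Impression: no acute findings.")` strips the punctuation and keeps the four tokens, while a two-token report is dropped.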


RadioRAG: Factual Large Language Models for Enhanced Diagnostics in Radiology Using Dynamic Retrieval Augmented Generation

Arasteh, Soroosh Tayebi, Lotfinia, Mahshad, Bressem, Keno, Siepmann, Robert, Ferber, Dyke, Kuhl, Christiane, Kather, Jakob Nikolas, Nebelung, Sven, Truhn, Daniel

arXiv.org Artificial Intelligence

Large language models (LLMs) have advanced the field of artificial intelligence (AI) in medicine. However, LLMs often generate outdated or inaccurate information based on static training datasets. Retrieval augmented generation (RAG) mitigates this by integrating outside data sources. While previous RAG systems used pre-assembled, fixed databases with limited flexibility, we have developed Radiology RAG (RadioRAG) as an end-to-end framework that retrieves data from authoritative radiologic online sources in real-time. RadioRAG is evaluated using a dedicated radiologic question-and-answer dataset (RadioQA). We evaluate the diagnostic accuracy of various LLMs when answering radiology-specific questions with and without access to additional online information via RAG. Using 80 questions from the RSNA Case Collection across radiologic subspecialties and 24 additional expert-curated questions, for which the correct gold-standard answers were available, LLMs (GPT-3.5-turbo, GPT-4, Mistral-7B, Mixtral-8x7B, and Llama3 [8B and 70B]) were prompted with and without RadioRAG. RadioRAG retrieved context-specific information from www.radiopaedia.org in real-time and incorporated it into its reply. RadioRAG consistently improved diagnostic accuracy across all LLMs, with relative improvements ranging from 2% to 54%. It matched or exceeded question answering without RAG across radiologic subspecialties, particularly in breast imaging and emergency radiology. However, the degree of improvement varied among models; GPT-3.5-turbo and Mixtral-8x7B-instruct-v0.1 saw notable gains, while Mistral-7B-instruct-v0.2 showed no improvement, highlighting variability in RAG's effectiveness. LLMs benefit when provided access to domain-specific data beyond their training data. For radiology, RadioRAG establishes a robust framework that substantially improves diagnostic accuracy and factuality in radiological question answering.
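The retrieve-then-prompt loop described above can be sketched in miniature. This is not RadioRAG's implementation: the real system retrieves from www.radiopaedia.org in real-time and queries an LLM, whereas this toy stand-in ranks an in-memory corpus by word overlap and only builds the augmented prompt. All function names here are hypothetical.

```python
def retrieve_context(question, corpus, top_k=2):
    """Toy retrieval step: rank candidate documents by how many
    question words they share, returning the top_k matches."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: -len(q_words & set(doc.lower().split())),
    )
    return scored[:top_k]

def build_rag_prompt(question, contexts):
    """RAG prompt assembly: prepend the retrieved context to the
    question before it is sent to the LLM, so the model answers
    from current domain-specific evidence rather than only from
    its static training data."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Context:\n{context_block}\n\nQuestion: {question}\nAnswer:"
```

A real deployment would replace `retrieve_context` with a live search over an authoritative source and pass the assembled prompt to the chosen LLM.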


Philips Launches AI-Enabled 'Visualization Workspace' at RSNA

#artificialintelligence

Emphasizing a variety of artificial intelligence (AI) algorithms and workflows across multiple imaging modalities and disciplines on one platform, Philips launched a new edition of the Advanced Visualization Workspace at the Radiological Society of North America (RSNA) annual conference in Chicago. With the goals of streamlined radiology workflows and improved diagnostic confidence, the Advanced Visualization Workspace provides a suite of enhanced imaging options with more than 70 clinical applications for radiology, oncology, neurology, and cardiology, according to the company. "At this year's RSNA, Philips will showcase how our informatics solutions use intelligence to provide patient-centric insights, integrate advanced visualization tools into the workflow and support clinical collaboration to speed up detection of diseases by leveraging intelligence everywhere along the patient care journey," noted Reema Poddar, the president of Diagnostic and Pathway Informatics at Philips.


RSNA Cervical Spine Fracture AI Challenge Results Announced

#artificialintelligence

November 23, 2022 -- The Radiological Society of North America (RSNA) has announced the official results of the RSNA Cervical Spine Fracture AI Challenge. Conducted by RSNA in collaboration with the American Society of Neuroradiology (ASNR) and the American Society of Spine Radiology (ASSR), the aim of the challenge was to explore whether artificial intelligence (AI) could be used to aid in the detection and localization of cervical spine injuries. The top eight teams will be recognized in a presentation on Nov. 28, in the AI Showcase during RSNA's 108th Scientific Assembly and Annual Meeting in Chicago (RSNA 2022). The RSNA Cervical Spine Fracture AI Challenge was conducted on a platform provided by Kaggle, Inc. The top performing competitors will be awarded a total of $30,000.


Receptivity of an AI Cognitive Assistant by the Radiology Community: A Report on Data Collected at RSNA

Kanjaria, Karina, Pillai, Anup, Shivade, Chaitanya, Bendersky, Marina, Jadhav, Ashutosh, Mukherjee, Vandana, Syeda-Mahmood, Tanveer

arXiv.org Artificial Intelligence

Due to advances in machine learning and artificial intelligence (AI), a new role is emerging for machines as intelligent assistants to radiologists in their clinical workflows. But what systematic clinical thought processes are these machines using? Are they similar enough to those of radiologists to be trusted as assistants? A live demonstration of such a technology was conducted at the 2016 Scientific Assembly and Annual Meeting of the Radiological Society of North America (RSNA). The demonstration was presented in the form of a question-answering system that took a radiology multiple-choice question and a medical image as inputs. The AI system then demonstrated a cognitive workflow, involving text analysis, image analysis, and reasoning, to process the question and generate the most probable answer. A post-demonstration survey was made available to the participants who experienced the demo and tested the question-answering system. Of the reported 54,037 meeting registrants, 2,927 visited the demonstration booth, 1,991 experienced the demo, and 1,025 completed a post-demonstration survey. In this paper, the methodology of the survey is shown and a summary of its results is presented. The results of the survey show a very high level of receptiveness to cognitive computing technology and artificial intelligence among radiologists.


Radiology Initiatives Illustrate Uses for Open Data and Open AI research

#artificialintelligence

Andy Oram

Fans of data in health care often speculate about what clinicians and researchers could achieve by reducing friction in data sharing. What if we had easy access to group repositories, expert annotations and labels, robust and consistent metadata, and standards without inconsistencies? Since 2017, the Radiological Society of North America (RSNA) has been demonstrating a model for such data sharing. That year marked RSNA's first AI challenge. RSNA has worked since then to make the AI challenge an increasingly international collaboration.


Radiology: Artificial Intelligence

#artificialintelligence

Radiology: Artificial Intelligence will host its second tweet chat on July 1, 2020, from 8:00 to 9:00 pm Eastern Daylight Time (U.S.), on the topic of interpretability of AI algorithms in radiology. The tweet chat will be moderated by Dr. Despina Kontos, deputy editor of this journal and associate professor of radiology at the University of Pennsylvania, and Dr. Aimilia Gastounioti, a research associate in the department of radiology at the University of Pennsylvania. The accompanying article discusses the methods that allow AI systems to explain their decisions through visualization, counterexamples, and semantics, as well as the many challenges of bringing interpretability methods into clinical practice. Enhancing interpretability is essential for AI systems to be trusted and verified, enabling faster and more reliable adoption into clinical workflows. The authors explore whether patients and radiologists can better trust a model that explains its decisions, and how interpretability may accelerate the translation of deep learning tools into clinical practice.


Lunit Showcases AI Solutions and Software at RSNA

#artificialintelligence

Lunit, a leading medical AI software company devoted to providing AI-powered total cancer care, will be returning to the 105th Radiological Society of North America (RSNA) annual meeting this year with its latest AI solutions for chest and breast radiology. The state-of-the-art software, Lunit INSIGHT CXR 3 and Lunit INSIGHT MMG, will be available for demonstration at the Lunit booth located on the AI Showcase floor, #10732. During RSNA 2019, Lunit will present key clinical study results conducted to validate the specific clinical utility of its products, along with other abstracts that study AI-driven mammography & DBT and AI-based detection of chest abnormalities such as pneumothorax and tuberculosis. Lunit is one of the few companies in the industry that highlights evidence-based studies and publications. Lunit INSIGHT CXR and Lunit INSIGHT MMG, Lunit's most mature products, tested on more than 3 million images from over 80 countries combined, will also be presented for demo.


Artificial intelligence may help reduce gadolinium dose in MRI

#artificialintelligence

CHICAGO - Researchers are using artificial intelligence to reduce the dose of a contrast agent that may be left behind in the body after MRI exams, according to a study being presented today at the annual meeting of the Radiological Society of North America (RSNA). Gadolinium is a heavy metal used in contrast material that enhances images on MRI. Recent studies have found that trace amounts of the metal remain in the bodies of people who have undergone exams with certain types of gadolinium. The effects of this deposition are not known, but radiologists are working proactively to optimize patient safety while preserving the important information that gadolinium-enhanced MRI scans provide. "There is concrete evidence that gadolinium deposits in the brain and body," said study lead author Enhao Gong, Ph.D., researcher at Stanford University in Stanford, Calif.


Can AI Help to Save the Practice of Radiology for the Future?

#artificialintelligence

In what was perhaps one of the most memorable openings in literature in English, Charles Dickens began his immortal A Tale of Two Cities with this: "It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to heaven, we were all going direct the other way--in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only." And yes, that was one long, run-on sentence….! And yes, participating in RSNA 2017, this year's edition of the annual RSNA Conference (sponsored by the Oak Brook, Ill.-based Radiological Society of North America), did bring to mind Dickens' astonishing opening to his great 1859 novel. And though I saw no one at RSNA 2017 who reminded me at all of Sydney Carton, Lucie Manette, Charles Darnay, or Madame Defarge, I did actually think a bit about France in 1775 (on the eve of the French Revolution). Here's the thing: the practice of radiology, as we've all known it, is moving into uncharted territory now, as the financial, operational, and medical practice model on which it's been based is shifting under the feet of today's radiologists. With both Medicare and private-insurer payment under accelerating threat (let's face it, diagnostic imaging procedures are an easy target for reimbursement deficit-hawk types), and with the demands for speed of turnaround for interpretive reports also accelerating, there are literally not enough hours in the day for practicing radiologists to make up growing income shortfalls from ongoing reductions in payment from all sources.