screening tool
Auditing for Human Expertise
High-stakes prediction tasks (e.g., patient diagnosis) are often handled by trained human experts. A common source of concern about automation in these settings is that experts may exercise intuition that is difficult to model and/or have access to information (e.g., conversations with a patient) that is simply unavailable to a would-be algorithm. This raises a natural question: do human experts add value which could not be captured by an algorithmic predictor? We develop a statistical framework under which this question can be posed as a natural hypothesis test. Indeed, as our framework highlights, detecting human expertise is more subtle than simply comparing the accuracy of expert predictions to those made by a particular learning algorithm. Instead, we propose a simple procedure which tests whether expert predictions are statistically independent of the outcomes of interest after conditioning on the available inputs ('features'). A rejection of our test thus suggests that human experts may add value to any algorithm trained on the available data, and has direct implications for whether human-AI 'complementarity' is achievable in a given prediction task. We highlight the utility of our procedure using admissions data collected from the emergency department of a large academic hospital system, where we show that physicians' admit/discharge decisions for patients with acute gastrointestinal bleeding (AGIB) appear to incorporate information that is not available to a standard algorithmic screening tool. This is despite the fact that the screening tool is arguably more accurate than physicians' discretionary decisions, highlighting that - even absent normative concerns about accountability or interpretability - accuracy is insufficient to justify algorithmic automation.
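The core procedure, testing whether expert predictions remain associated with outcomes after conditioning on the available features, can be sketched as a residualize-then-permute check. This is a hypothetical linear simplification for illustration, not the paper's exact test:

```python
import numpy as np

def expertise_test(X, expert_pred, y, n_perm=2000, seed=0):
    """Permutation test: are expert predictions associated with the
    outcome after conditioning (linearly) on the features X?"""
    rng = np.random.default_rng(seed)
    # Residualize the outcome on the features via least squares.
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    resid = y - Xb @ beta
    # Test statistic: |correlation| between expert predictions and residuals.
    stat = abs(np.corrcoef(expert_pred, resid)[0, 1])
    # Null distribution: shuffling the predictions breaks any link to the
    # residual outcome, simulating "no expertise beyond the features".
    null = [abs(np.corrcoef(rng.permutation(expert_pred), resid)[0, 1])
            for _ in range(n_perm)]
    p_value = (1 + sum(s >= stat for s in null)) / (1 + n_perm)
    return stat, p_value
```

A small p-value indicates the expert is tracking something the features do not explain; a linear residualization like this only detects linear structure, whereas the paper's framework is stated more generally.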
Evaluating the Economic Implications of Using Machine Learning in Clinical Psychiatry
Hossain, Soaad, Rasalingam, James, Waheed, Arhum, Awil, Fatah, Kandiah, Rachel, Ahmed, Syed Ishtiaque
With the growing interest in using AI and machine learning (ML) in medicine, there is an increasing body of literature covering the application and ethics of AI and ML in areas of medicine such as clinical psychiatry. The problem is that there is little literature covering the economic aspects of using ML in clinical psychiatry. This study addresses that gap by specifically studying the economic implications of using ML in clinical psychiatry. In this paper, we evaluate these implications using three problem-oriented case studies, literature on economics, socioeconomics, and medical AI, and two types of health economic evaluations. In addition, we provide details on fairness, legal, ethical, and other considerations for ML in clinical psychiatry.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States (0.04)
- North America > Canada > Nunavut (0.04)
- (4 more...)
Mpox Screen Lite: AI-Driven On-Device Offline Mpox Screening for Low-Resource African Mpox Emergency Response
Kularathne, Yudara, Janitha, Prathapa, Ambepitiya, Sithira
Background: The 2024 Mpox outbreak, particularly severe in Africa with clade 1b emergence, has highlighted critical gaps in diagnostic capabilities in resource-limited settings. This study aimed to develop and validate an artificial intelligence (AI)-driven, on-device screening tool for Mpox, designed to function offline in low-resource environments. Methods: We developed a YOLOv8n-based deep learning model trained on 2,700 images (900 each of Mpox, other skin conditions, and normal skin), including synthetic data. The model was validated on 360 images and tested on 540 images. A larger external validation was conducted using 1,500 independent images. Performance metrics included accuracy, precision, recall, F1-score, sensitivity, and specificity. Findings: The model demonstrated high accuracy (96%) in the final test set. For Mpox detection, it achieved 93% precision, 97% recall, and an F1-score of 95%. Sensitivity and specificity for Mpox detection were 97% and 96%, respectively. Performance remained consistent in the larger external validation, confirming the model's robustness and generalizability. Interpretation: This AI-driven screening tool offers a rapid, accurate, and scalable solution for Mpox detection in resource-constrained settings. Its offline functionality and high performance across diverse datasets suggest significant potential for improving Mpox surveillance and management, particularly in areas lacking traditional diagnostic infrastructure.
- Asia > Singapore (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Africa > Democratic Republic of the Congo (0.14)
- Europe > Switzerland > Basel-City > Basel (0.04)
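The reported performance figures are all standard functions of confusion-matrix counts. As a minimal sketch (the counts in the usage example are illustrative values chosen to roughly match the reported rates, not the study's actual tallies):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard binary screening metrics from confusion-matrix counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # identical to sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"precision": precision, "recall": recall,
            "sensitivity": recall, "specificity": specificity, "f1": f1}

# Illustrative counts: 97 of 100 Mpox cases caught, 7 false alarms,
# 168 of 175 non-Mpox images correctly cleared.
metrics = screening_metrics(tp=97, fp=7, fn=3, tn=168)
```

Note that recall and sensitivity are the same quantity under two names, which is why the abstract's 97% recall and 97% sensitivity coincide.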
These AI-powered apps can hear the cause of a cough
The app failed to detect TB in about 30% of people who actually had the disease. But it's simpler and vastly cheaper than collecting phlegm to look for the bacterium that causes the disease, the gold-standard method for diagnosing TB. So it could prove especially useful in low-income countries as a screening tool, helping to catch cases and interrupt transmission. In the new study, a team of researchers from the US and Kenya trained and tested their smartphone-based diagnostic tool on recordings of coughs collected in a Kenyan health-care center--about 33,000 spontaneous coughs and 1,200 forced coughs from 149 people with TB and 46 people with other respiratory conditions. The app's performance wasn't good enough to replace traditional diagnostics. But it could be used as an additional screening tool.
- Africa > Kenya (0.28)
- North America > United States > Florida (0.08)
AI could provide the 'ultimate second opinion' as scientists say it is just as good as doctors at analysing X-rays
Artificial intelligence could provide the 'ultimate second opinion' as it is just as good as doctors at analysing X-rays, scientists have claimed. Tests using AI software on millions of old scans diagnosed conditions at least as accurately as radiologists 94 per cent of the time. The joint study by Warwick University and King's College London suggested it could prove vital in avoiding human error when checking patients' results. The AI software, which can scan X-rays as soon as they are taken, is able to understand the seriousness of each condition and flag the more urgent ones immediately. The study's authors suggested it could be used to screen X-rays, freeing up time for busy doctors to focus on more critical patients and helping deal with chronic NHS staffing shortages.
- Europe > United Kingdom (0.40)
- North America > United States > Montana (0.08)
- Health & Medicine > Nuclear Medicine (0.46)
- Health & Medicine > Diagnostic Medicine > Imaging (0.46)
Brain MRI Screening Tool with Federated Learning
Stoklasa, Roman, Stathopoulos, Ioannis, Karavasilis, Efstratios, Efstathopoulos, Efstathios, Dostál, Marek, Keřkovský, Miloš, Kozubek, Michal, Serio, Luigi
In clinical practice, we often see significant delays between MRI scans and the diagnosis made by radiologists, even for severe cases. In some cases, this may be caused by the lack of additional information and clues, so even the severe cases need to wait in the queue for diagnosis. This can be avoided if there is an automatic software tool, which would supplement additional information, alerting radiologists that the particular patient may be a severe case. We are presenting an automatic brain MRI Screening Tool and we are demonstrating its capabilities. The goal of our work is to develop a Screening Tool, software that would automatically evaluate all brain MRI scans in a given hospital, and which would produce pre-diagnostic reports for radiologists. Based on such reports, radiologists could easily decide which examinations need to be processed sooner and with higher priority, or, they might decide to process the "easy cases" first (i.e., cases that can be completed quickly and easily), to increase diagnostic throughput. The ultimate goal is to help decrease the waiting time between the scan and the diagnosis, especially for severe cases, by assisting radiologists to work more efficiently with better prioritization.
- Health & Medicine > Therapeutic Area (1.00)
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
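The prioritization the authors describe amounts to a priority queue over the screening tool's severity scores: likely-severe scans are read first. A minimal sketch (scan IDs and scores are hypothetical, not from the paper):

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ScanReport:
    priority: float                      # negated severity, so heapq
    scan_id: str = field(compare=False)  # pops the most severe first

def triage(reports):
    """Given (scan_id, severity) pairs from a pre-diagnostic screening
    tool, return scan IDs in the order radiologists should read them."""
    heap = [ScanReport(-severity, sid) for sid, severity in reports]
    heapq.heapify(heap)
    order = []
    while heap:
        order.append(heapq.heappop(heap).scan_id)
    return order

# Hypothetical queue: scan "b" looks most severe, so it is read first.
reading_order = triage([("a", 0.2), ("b", 0.9), ("c", 0.5)])
```

The same structure also supports the authors' alternative policy of clearing "easy cases" first; that would simply flip the sign of the priority.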
Fake reviews: can we trust what we read online as use of AI explodes?
The four-star hotel in Kraków in Poland, the review says, is "excellent", a "short walk from the main square" and boasts a "first-rate" spa and fitness centre. A less positive review describes it as "small, cramped and outdated" with "lumpy" pillows. But then a family who stayed said they were made to feel "instantly welcome". The truth is that none of those reviews are real. They were generated in seconds by the free-to-use artificial intelligence tool ChatGPT.
Transfer Learning for Real-time Deployment of a Screening Tool for Depression Detection Using Actigraphy
Ghate, Rajanikant, Kalnad, Nayan, Walambe, Rahee, Kotecha, Ketan
Automated depression screening and diagnosis is a highly relevant problem today. Traditional depression detection methods have a number of limitations, namely, high dependence on clinicians and biased self-reporting. In recent years, research has suggested strong potential in machine learning (ML) based methods that make use of the user's passive data collected via wearable devices. However, ML is data-hungry, and primary data collection is especially challenging in the healthcare domain. In this work, we present an approach based on transfer learning, from a model trained on a secondary dataset, for the real-time deployment of a depression screening tool based on users' actigraphy data. This approach enables machine learning modelling even with limited primary data samples. A modified version of the leave-one-out cross-validation approach performed on the primary set resulted in a mean accuracy of 0.96, wherein each iteration set aside one subject's data from the primary set for testing.
- Europe > United Kingdom > England > Greater London > London (0.04)
- Asia > India > Maharashtra > Pune (0.04)
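The modified leave-one-out protocol, which holds out all of one subject's data per fold so that no subject appears in both train and test, can be sketched generically as follows (a generic illustration, not the authors' exact pipeline):

```python
import numpy as np

def leave_one_subject_out(subject_ids):
    """Yield (train_idx, test_idx) splits where each fold holds out
    every sample from one subject, preventing the same subject's
    data from leaking across the train/test boundary."""
    subject_ids = np.asarray(subject_ids)
    for subject in np.unique(subject_ids):
        test = np.flatnonzero(subject_ids == subject)
        train = np.flatnonzero(subject_ids != subject)
        yield train, test

# Hypothetical usage: four samples from three subjects -> three folds.
splits = list(leave_one_subject_out(["s1", "s1", "s2", "s3"]))
```

Splitting by subject rather than by sample is what makes the reported mean accuracy a per-subject generalization estimate instead of an optimistic within-subject one.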
Artificial Intelligence Shows Promise in Detection of Anxiety Disorders, Depression
Artificial intelligence (AI) tools show promise in overcoming the limitations of traditional screening tools for anxiety disorders and/or depression, according to the results of a study published by Springer. Investigators established that audio and/or facial video features have been most analyzed, followed by electroencephalography (EEG) signals, to detect anxiety disorders and/or depression. Traditional screening tools include the Columbia Suicide Screen, the Risk of Suicide Questionnaire, the Suicidal Ideation Questionnaire, and more. These screening programs are often used in schools to assess suicide risk, according to investigators. However, these traditional screening tools have limitations, such as a high prevalence of false positives, a lack of resources owing to limited funding for assessment programs in schools, and other demands on educators and school counselors.