second opinion
'We didn't vote for ChatGPT': Swedish PM under fire for using AI in role
The Swedish prime minister, Ulf Kristersson, has come under fire after admitting that he regularly consults AI tools for a second opinion in his role running the country. Kristersson, whose Moderate party leads Sweden's centre-right coalition government, said he used tools including ChatGPT and the French service LeChat, and that his colleagues also used AI in their daily work. Kristersson told the Swedish business newspaper Dagens industri: "I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite?" Tech experts, however, have raised concerns about politicians using AI tools in this way, and the Aftonbladet newspaper accused Kristersson in an editorial of having "fallen for the oligarchs' AI psychosis". "You have to be very careful," Simone Fischer-Hübner, a computer science researcher at Karlstad University, told Aftonbladet, warning against using ChatGPT to work with sensitive information. Kristersson's spokesperson, Tom Samuelsson, later said the prime minister did not take risks in his use of AI: "Naturally it is not security sensitive information that ends up there."
- Europe > Sweden > Värmland County > Karlstad (0.28)
- North America > United States > Virginia (0.08)
- Europe > Sweden > Västerbotten County > Umeå (0.08)
A Layered Multi-Expert Framework for Long-Context Mental Health Assessments
Tang, Jinwen, Guo, Qiming, Sun, Wenbo, Shang, Yi
Long-form mental health assessments pose unique challenges for large language models (LLMs), which often exhibit hallucinations or inconsistent reasoning when handling extended, domain-specific contexts. We introduce Stacked Multi-Model Reasoning (SMMR), a layered framework that leverages multiple LLMs and specialized smaller models as coequal 'experts'. Early layers isolate short, discrete subtasks, while later layers integrate and refine these partial outputs through more advanced long-context models. We evaluate SMMR on the DAIC-WOZ depression-screening dataset and 48 curated case studies with psychiatric diagnoses, demonstrating consistent improvements over single-model baselines in terms of accuracy, F1-score, and PHQ-8 error reduction. By harnessing diverse 'second opinions', SMMR mitigates hallucinations, captures subtle clinical nuances, and enhances reliability in high-stakes mental health assessments. Our findings underscore the value of multi-expert frameworks for more trustworthy AI-driven screening.
- North America > United States > Missouri > Boone County > Columbia (0.14)
- Europe > Netherlands > South Holland > Delft (0.04)
- Asia > Middle East > Iraq (0.04)
- (2 more...)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
- Health & Medicine > Consumer Health (1.00)
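The layered multi-expert idea behind SMMR — early layers of "experts" handling short, discrete subtasks, later layers integrating the partial outputs — can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the keyword-spotting "experts" and the `refine` aggregator stand in for the specialized small models and long-context LLMs the paper describes.

```python
from typing import Callable, List

# Each "expert" is any callable mapping text to a partial assessment.
# In SMMR these would be LLM or specialized-model calls; here they are stand-ins.
Expert = Callable[[str], str]

def layer(experts: List[Expert], context: str) -> List[str]:
    """Run every expert in a layer over the same context."""
    return [expert(context) for expert in experts]

def stacked_assessment(transcript: str,
                       subtask_experts: List[Expert],
                       refiner: Callable[[List[str]], str]) -> str:
    """Early layer: short, discrete subtasks; later layer: integrate and refine."""
    partials = layer(subtask_experts, transcript)
    return refiner(partials)

# Toy stand-ins (hypothetical): keyword spotters as 'small specialized models'.
mood = lambda t: "low mood" if "sad" in t else "mood unremarkable"
sleep = lambda t: "sleep disturbance" if "sleep" in t else "sleep unremarkable"
refine = lambda parts: "; ".join(parts)  # a long-context model in practice

print(stacked_assessment("I feel sad and can't sleep.", [mood, sleep], refine))
# → low mood; sleep disturbance
```

The structural point the abstract makes survives even in this toy: each early-layer expert sees only its own narrow question, and only the final layer reasons over the combined evidence.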
Does More Advice Help? The Effects of Second Opinions in AI-Assisted Decision Making
Lu, Zhuoran, Wang, Dakuo, Yin, Ming
AI assistance in decision-making has become popular, yet people's inappropriate reliance on AI often leads to unsatisfactory human-AI collaboration performance. In this paper, through three pre-registered, randomized human subject experiments, we explore whether and how the provision of second opinions may affect decision-makers' behavior and performance in AI-assisted decision-making. We find that if both the AI model's decision recommendation and a second opinion are always presented together, decision-makers reduce their over-reliance on AI while increasing their under-reliance on it, regardless of whether the second opinion is generated by a peer or another AI model. However, if decision-makers can choose when to solicit a peer's second opinion, we find that their active solicitation of second opinions has the potential to mitigate over-reliance on AI without inducing increased under-reliance in some cases. We conclude by discussing the implications of our findings for promoting effective human-AI collaboration in decision-making.
- North America > United States > Oregon > Multnomah County > Portland (0.04)
- North America > United States > Illinois > Cook County > Chicago (0.04)
- Asia > Malaysia (0.04)
- (2 more...)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Health & Medicine (1.00)
- Banking & Finance > Trading (1.00)
- Media (0.94)
AI may be on its way to your doctor's office, but it's not ready to see patients
What use could healthcare have for someone who makes things up, can't keep a secret, doesn't really know anything, and, when speaking, simply fills in the next word based on what's come before? Lots, if that individual is the newest form of artificial intelligence, according to some of the biggest companies out there. Companies pushing the latest AI technology -- known as "generative AI" -- are piling on: Google and Microsoft want to bring types of so-called large language models to healthcare. Big firms that are familiar to folks in white coats -- but maybe less so to your average Joe and Jane -- are equally enthusiastic: Electronic medical records giants Epic and Oracle Cerner aren't far behind. The space is crowded with startups, too.
- North America > United States > Montana > Yellowstone County > Billings (0.05)
- North America > United States > California > San Diego County > San Diego (0.05)
- Asia > Middle East > Israel (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.56)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.38)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.36)
VIDEO: Overview of radiology AI by Keith Dreyer
Keith J. Dreyer, DO, PhD, FACR, chief science officer of the American College of Radiology (ACR) Data Science Institute, explains the state of artificial intelligence (AI) in radiology in 2022. Although about 200 AI algorithms for medical imaging are now cleared by the U.S. Food and Drug Administration (FDA), a recent ACR survey of its members showed AI has only about a 2% market penetration rate. "So, there is about another 98% that falls into the category of potential addressable market," Dreyer said. "Now why is that, when there is a lot of enthusiasm? We are past the days from six years ago when radiologists were fearful of losing their jobs to AI, because Geoffrey Hinton said we should stop training radiologists since AI would take over within another five years. That was in 2016, and we are now past the five-year mark. It's ridiculous, because today there is an incredible shortage of radiologists."
- Health & Medicine > Nuclear Medicine (1.00)
- Health & Medicine > Diagnostic Medicine > Imaging (1.00)
- Government > Regional Government > North America Government > United States Government > FDA (0.56)
Counterfactual Inference of Second Opinions
Benz, Nina L. Corvelo, Rodriguez, Manuel Gomez
Automated decision support systems that are able to infer second opinions from experts can potentially facilitate a more efficient allocation of resources; they can help decide when, and from whom, to seek a second opinion. In this paper, we look at the design of this type of support system from the perspective of counterfactual inference. We focus on a multiclass classification setting and first show that, if experts make predictions on their own, the underlying causal mechanism generating their predictions needs to satisfy a desirable set invariance property. Further, we show that, for any causal mechanism satisfying this property, there exists an equivalent mechanism in which the predictions of each expert are generated by independent sub-mechanisms governed by a common noise. This motivates the design of a set-invariant Gumbel-Max structural causal model in which the structure of the noise governing the sub-mechanisms underpinning the model depends on an intuitive notion of similarity between experts, which can be estimated from data. Experiments on both synthetic and real data show that our model can be used to infer second opinions more accurately than its non-causal counterpart.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- South America (0.04)
- North America > Central America (0.04)
- Europe > Switzerland > Zürich > Zürich (0.04)
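The Gumbel-Max mechanism the abstract refers to can be illustrated with a small sketch: an expert's prediction is the argmax of class log-probabilities plus Gumbel noise, and sharing that noise draw across experts' sub-mechanisms couples their predictions. The class probabilities below are made up for illustration, and this is only a sketch of the sampling idea, not the paper's full counterfactual-inference procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_max_sample(log_probs: np.ndarray, gumbels: np.ndarray) -> int:
    """Gumbel-Max trick: argmax of log-probabilities plus Gumbel noise
    draws a sample from the corresponding categorical distribution."""
    return int(np.argmax(log_probs + gumbels))

# Two hypothetical experts over three classes (probabilities are invented).
log_p_a = np.log(np.array([0.7, 0.2, 0.1]))
log_p_b = np.log(np.array([0.6, 0.3, 0.1]))

# One shared noise draw couples the experts' sub-mechanisms, so their
# predictions are correlated rather than independent. Identical experts
# under the same noise always agree.
g = rng.gumbel(size=3)
pred_a = gumbel_max_sample(log_p_a, g)
pred_b = gumbel_max_sample(log_p_b, g)  # a "second opinion" under the same noise
```

Conditioning on one expert's observed prediction constrains the shared noise, which is what lets a model of this form infer counterfactually what a similar expert would have said.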
AI Takes Bite Out of Dental Slide Misses by Assisting Doctors
Your next trip to the dentist might offer a taste of AI. Pearl, a West Hollywood startup, provides AI for dental images to assist in diagnosis. It landed FDA clearance last month, the first to get such a go-ahead for dentistry AI. The approval paves the way for its use in clinics across the United States. "It's really a first of its kind for dentistry," said Ophir Tanz, co-founder and CEO of Pearl.
First FDA-Approved AI Software Can Now Read Dental X-rays
The Food and Drug Administration has approved the first artificial intelligence software to be used to interpret dental x-rays, allowing dentists to better screen for oral pathologies. Pearl's Second Opinion is the first and only FDA-cleared AI radiologic detection aid for dentists at the chairside, and it can assist dentists to discover a variety of common dental diseases such as tooth decay, calculus, and root abscesses. Pearl gathered over 100 million dental x-rays from dental practices and academic institutes to create Second Opinion. The AI platform highlights anomalies in x-rays and also acts as a patient communication tool, allowing dentists to exhibit alternative models of a patient's teeth and highlight trouble regions. Pearl's announcement is a significant step forward in the field of technology-assisted dentistry.
- Research Report > Strength High (0.40)
- Research Report > New Finding (0.40)
La veille de la cybersécurité
But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than your professional proposes? A pair of Canadian mental-health researchers believe it can. In a study published in the Journal of Applied Behavior Analysis, Marc Lanovaz of Université de Montréal and Kieva Hranchuk of St. Lawrence College, in Ontario, make a case for using AI in treating behavioral problems. To find a better way, Lanovaz and Hranchuk, a professor of behavioral science and behavioral psychology at St. Lawrence, compiled simulated data from 1,024 individuals receiving treatment for behavioral issues.
- North America > Canada > Quebec > Montreal (0.27)
- North America > Canada > Ontario (0.27)
- Europe > Ukraine > Kyiv Oblast > Kyiv (0.27)
Study: AI can make better clinical decisions than humans
But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than your professional proposes? A pair of Canadian mental-health researchers believe it can. In a study published in the Journal of Applied Behavior Analysis, Marc Lanovaz of Université de Montréal and Kieva Hranchuk of St. Lawrence College, in Ontario, make a case for using AI in treating behavioral problems. "Medical and educational professionals frequently disagree on the effectiveness of behavioral interventions, which may cause people to receive inadequate treatment," said Lanovaz, an associate professor who heads the Applied Behavioral Research Lab at UdeM's School of Psychoeducation.
- North America > Canada > Quebec > Montreal (0.27)
- North America > Canada > Ontario (0.27)
- Europe > Ukraine > Kyiv Oblast > Kyiv (0.27)