

Enhancing the Interpretability of Rule-based Explanations through Information Retrieval

Umbrico, Alessandro, Bologna, Guido, Coraci, Luca, Fracasso, Francesca, Gola, Silvia, Cortellessa, Gabriella

arXiv.org Artificial Intelligence

The lack of transparency of data-driven Artificial Intelligence techniques limits their interpretability and acceptance into healthcare decision-making processes. We propose an attribution-based approach to improve the interpretability of Explainable AI-based predictions in the specific context of assessing the risk of arm lymphedema after lymph-node radiotherapy in breast cancer. The proposed method performs a statistical analysis of the attributes in the rule-based prediction model using standard metrics from Information Retrieval techniques. This analysis computes the relevance of each attribute to the prediction and provides users with interpretable information about the impact of risk factors. The results of a user study that compared the output generated by the proposed approach with the raw output of the Explainable AI model suggested higher levels of interpretability and usefulness in the context of predicting lymphedema risk.
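The abstract does not say which IR metrics the method uses, so the following is only a minimal sketch of the general idea: score each attribute of a rule base with precision, recall, and F1 against the rules that predict high risk. The toy rule base and all attribute names (`bmi_high`, `axillary_dissection`, etc.) are hypothetical, not taken from the paper.

```python
from collections import Counter

# Hypothetical rule base: each rule lists the attributes it tests and
# the risk class it predicts ("high" or "low").
rules = [
    {"attrs": {"axillary_dissection", "bmi_high"}, "predicts": "high"},
    {"attrs": {"radiotherapy_dose", "bmi_high"}, "predicts": "high"},
    {"attrs": {"age", "radiotherapy_dose"}, "predicts": "low"},
]

def attribute_relevance(rules, target="high"):
    """IR-style relevance of each attribute to the `target` class:
    precision = fraction of rules containing the attribute that predict target;
    recall    = fraction of target rules that contain the attribute."""
    in_target, total = Counter(), Counter()
    n_target = sum(r["predicts"] == target for r in rules)
    for r in rules:
        for a in r["attrs"]:
            total[a] += 1
            if r["predicts"] == target:
                in_target[a] += 1
    scores = {}
    for a, n in total.items():
        p = in_target[a] / n
        rec = in_target[a] / n_target if n_target else 0.0
        f1 = 2 * p * rec / (p + rec) if (p + rec) else 0.0
        scores[a] = {"precision": p, "recall": rec, "f1": f1}
    return scores

scores = attribute_relevance(rules)
```

An attribute such as `bmi_high`, which appears only in high-risk rules, gets maximal precision and recall, while attributes shared across risk classes score lower — giving users a ranked, interpretable view of risk factors.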


AI Education in a Mirror: Challenges Faced by Academic and Industry Experts

Akgun, Mahir, Hosseini, Hadi

arXiv.org Artificial Intelligence

As Artificial Intelligence (AI) technologies continue to evolve, the gap between academic AI education and real-world industry challenges remains an important area of investigation. This study provides preliminary insights into challenges AI professionals encounter in both academia and industry, based on semi-structured interviews with 14 AI experts - eight from industry and six from academia. We identify key challenges related to data quality and availability, model scalability, practical constraints, user behavior, and explainability. While both groups experience data and model adaptation difficulties, industry professionals more frequently highlight deployment constraints, resource limitations, and external dependencies, whereas academics emphasize theoretical adaptation and standardization issues. These exploratory findings suggest that AI curricula could better integrate real-world complexities, software engineering principles, and interdisciplinary learning, while recognizing the broader educational goals of building foundational and ethical reasoning skills.


Exclusive: Trump Pushes Out AI Experts Hired By Biden

TIME - Tech

The Trump administration has laid out its own ambitious goals for recruiting more tech talent. On April 3, Russell Vought, Trump's Director of the Office of Management and Budget, released a 25-page memo on how federal leaders were expected to accelerate the government's use of AI. "Agencies should focus recruitment efforts on individuals that have demonstrated operational experience in designing, deploying, and scaling AI systems in high-impact environments," Vought wrote. Putting that into action will be harder than it needs to be, says Deirdre Mulligan, who directed the National Artificial Intelligence Initiative Office in the Biden White House. "The Trump Administration's actions have not only denuded the government of talent now, but I'm sure that for many folks, they will think twice about whether or not they want to work in government," Mulligan says. "It's really important to have stability, to have people's expertise be treated with the level of respect it ought to be and to have people not be wondering from one day to the next whether they're going to be employed."


News at a glance: Trump turmoil, New Zealand's funding overhaul, and an AI expert tripped by AI

Science

Following through on his vows to shake up the U.S. government, President Donald Trump's new administration quickly issued a flurry of executive orders and other decisions, some with big implications for research and global health, sowing worry and confusion among many scientists. The White House this week proposed--and 2 days later rescinded--an unprecedented order to freeze huge chunks of federal spending, including research grants. The 27 January budget memo directed political appointees at every agency to decide whether the funds "conform with administrative priorities" as spelled out in a slew of executive orders Trump has issued since taking office. Despite withdrawing the memo, the White House said agencies must still comply with the executive orders, which ban support for programs that include promoting "Marxist equity, transgenderism, and Green New Deal social engineering policies." A federal judge had already temporarily halted implementation of the memo, which generated a public outcry.


Why do Experts Disagree on Existential Risk and P(doom)? A Survey of AI Experts

Field, Severin

arXiv.org Artificial Intelligence

The development of artificial general intelligence (AGI) is likely to be one of humanity's most consequential technological advancements. Leading AI labs and scientists have called for the global prioritization of AI safety, citing existential risks comparable to nuclear war. However, research on catastrophic risks and AI alignment is often met with skepticism, even by experts. Furthermore, online debate over the existential risk of AI has begun to turn tribal (e.g. name-calling such as "doomer" or "accelerationist"). Until now, no systematic study has explored the patterns of belief and the levels of familiarity with AI safety concepts among experts. I surveyed 111 AI experts on their familiarity with AI safety concepts, key objections to AI safety, and reactions to safety arguments. My findings reveal that AI experts cluster into two viewpoints -- an "AI as controllable tool" and an "AI as uncontrollable agent" perspective -- diverging in their beliefs about the importance of AI safety. While most experts (78%) agreed or strongly agreed that "technical AI researchers should be concerned about catastrophic risks", many were unfamiliar with specific AI safety concepts. For example, only 21% of surveyed experts had heard of "instrumental convergence," a fundamental concept in AI safety predicting that advanced AI systems will tend to pursue common sub-goals (such as self-preservation). The least concerned participants were also the least familiar with concepts like this, suggesting that effective communication of AI safety should begin with establishing clear conceptual foundations in the field.


Misalignments in AI Perception: Quantitative Findings and Visual Mapping of How Experts and the Public Differ in Expectations and Risks, Benefits, and Value Judgments

Brauner, Philipp, Glawe, Felix, Liehner, Gian Luca, Vervier, Luisa, Ziefle, Martina

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) is transforming diverse societal domains, raising critical questions about its risks and benefits and the misalignments between public expectations and academic visions. This study examines how the general public (N=1110) -- people using or being affected by AI -- and academic AI experts (N=119) -- people shaping AI development -- perceive AI's capabilities and impact across 71 scenarios, including sustainability, healthcare, job performance, societal divides, art, and warfare. Participants evaluated each scenario on four dimensions: expected probability, perceived risk and benefit, and overall sentiment (or value). The findings reveal significant quantitative differences: experts anticipate higher probabilities, perceive lower risks, report greater utility, and express more favorable sentiment toward AI than non-experts do. Notably, risk-benefit tradeoffs differ: the public assigns risk half the weight of benefits, while experts assign it only a third. Visual maps of these evaluations highlight areas of convergence and divergence, identifying potential sources of public concern. These insights offer actionable guidance for researchers and policymakers to align AI development with societal values, fostering public trust and informed governance.
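The "risk carries half the weight of benefits" claim can be read as the coefficient ratio of a linear value model, value ≈ w_b·benefit − w_r·risk. The sketch below recovers that ratio from synthetic ratings with least squares; the data, noise level, and the intercept-free model are assumptions for illustration, not the paper's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
benefit = rng.uniform(0, 10, n)
risk = rng.uniform(0, 10, n)
# Synthetic "public-like" ratings: risk has half the weight of benefit.
value = 1.0 * benefit - 0.5 * risk + rng.normal(0, 0.1, n)

# Fit value ~ w_b * benefit + w_r * risk by ordinary least squares.
X = np.column_stack([benefit, risk])
(w_benefit, w_risk), *_ = np.linalg.lstsq(X, value, rcond=None)

# Relative weight of risk vs. benefit (should recover ~0.5 here).
ratio = -w_risk / w_benefit
```

Running the same regression separately on expert and public ratings is one way such group-level weight ratios (1/2 vs. 1/3) could be compared.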


No One Is Ready for Digital Immortality

The Atlantic - Technology

Every few years, Hany Farid and his wife have the grim but necessary conversation about their end-of-life plans. They hope to have many more decades together--Farid is 58, and his wife is 38--but they want to make sure they have their affairs in order when the time comes. In addition to discussing burial requests and financial decisions, Farid has recently broached an eerier topic: If he dies first, would his wife want to digitally resurrect him as an AI clone? Farid, an AI expert at UC Berkeley, knows better than most that physical death and digital death are two different things. "My wife has my voice, my likeness, and a lot of my writings," he told me. "She could very easily train a large language model to be an interactive version of me."


Implications for Governance in Public Perceptions of Societal-scale AI Risks

Gruetzemacher, Ross, Pilditch, Toby D., Liang, Huigang, Manning, Christy, Gates, Vael, Moss, David, Elsey, James W. B., Sleegers, Willem W. A., Kilian, Kyle

arXiv.org Artificial Intelligence

Amid growing concerns over AI's societal risks--ranging from civilizational collapse to misinformation and systemic bias--this study explores the perceptions of AI experts and the general US registered voters on the likelihood and impact of 18 specific AI risks, alongside their policy preferences for managing these risks. While both groups favor international oversight over national or corporate governance, our survey reveals a discrepancy: voters perceive AI risks as both more likely and more impactful than experts, and also advocate for slower AI development. Specifically, our findings indicate that policy interventions may best assuage collective concerns if they attempt to more carefully balance mitigation efforts across all classes of societal-scale risks, effectively nullifying the near-vs-long-term debate over AI risks. More broadly, our results will serve not only to enable more substantive policy discussions for preventing and mitigating AI risks, but also to underscore the challenge of consensus building for effective policy implementation.


AI expert: ChatGPT prompts you'll wish you knew sooner

FOX News

ChatGPT has changed my life -- and yours, even if you don't use it as much as I do. You've probably noticed the new AI search bar in all the Meta apps, including Facebook and Instagram. It won't be long before all your most-used apps and services integrate chatbots. I'm giving one away to someone who tries my free daily tech newsletter.


Contact Complexity in Customer Service

Pi, Shu-Ting, Yang, Michael, Liu, Qun

arXiv.org Artificial Intelligence

Customers who reach out for customer service support may face a range of issues that vary in complexity. Routing high-complexity contacts to junior agents can lead to multiple transfers or repeated contacts, while directing low-complexity contacts to senior agents can strain their capacity to assist customers who need professional help. To tackle this, a machine learning model that accurately predicts the complexity of customer issues is highly desirable. However, defining the complexity of a contact is a difficult task as it is a highly abstract concept. While consensus-based data annotation by experienced agents is a possible solution, it is time-consuming and costly. To overcome these challenges, we have developed a novel machine learning approach to define contact complexity. Instead of relying on human annotation, we trained an AI expert model to mimic the behavior of agents and evaluate each contact's complexity based on how the AI expert responds. If the AI expert is uncertain or lacks the skills to comprehend the contact transcript, it is considered a high-complexity contact. Our method has proven to be reliable, scalable, and cost-effective based on the collected data.
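The abstract says a contact is deemed high-complexity when the "AI expert" model is uncertain about the transcript. One common way to operationalize that uncertainty -- an assumption here, since the paper's exact signal isn't given -- is the predictive entropy of the model's class distribution, thresholded at an assumed cutoff:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy (in nats) of the expert model's output distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def complexity_label(probs, threshold=0.9):
    """Flag a contact as high-complexity when the AI expert is uncertain,
    i.e. its predictive entropy exceeds `threshold` (an assumed cutoff)."""
    return "high" if predictive_entropy(probs) > threshold else "low"

# A confident prediction maps to low complexity; a near-uniform
# distribution (the model "doesn't know") maps to high complexity.
low = complexity_label([0.95, 0.03, 0.02])
high = complexity_label([0.34, 0.33, 0.33])
```

With scores like these, routing becomes a simple rule: send "high" contacts to senior agents and "low" contacts to junior agents, without any human annotation of complexity.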