ethics committee
'Unethical' AI research on Reddit under fire
A study that used artificial intelligence–generated content to "participate" in online discussions, testing whether AI was more successful than humans at changing people's minds, has caused an uproar over the ethics of the work. This week some of the unwitting research participants publicly asked the University of Zürich (UZH), where the researchers behind the experiment hold positions, to investigate and apologize. "I think people have a reasonable expectation to not be in scientific experiments without their consent," says Casey Fiesler, an expert on internet research ethics at the University of Colorado Boulder. A university statement emailed to Science says the researchers, who remain anonymous, have decided not to publish their results. The university will investigate the incident, the statement says.
- Europe > Switzerland > Zürich > Zürich (0.26)
- North America > United States > Colorado > Boulder County > Boulder (0.25)
- North America > United States > Massachusetts (0.05)
- Research Report > Experimental Study (0.32)
- Research Report > New Finding (0.30)
- Law (0.99)
- Media > News (0.70)
- Information Technology > Security & Privacy (0.49)
Reddit users were subjected to AI-powered experiment without consent
Reddit users who were unwittingly subjected to an AI-powered experiment have hit back at the scientists who conducted research on them without permission, sparking a wider debate about such experiments. The social media site Reddit is split into "subreddits", each dedicated to a particular community and run by its own volunteer moderators. Members of one subreddit, called r/ChangeMyView because it invites people to discuss potentially contentious issues, were recently informed by the moderators that researchers at the University of Zurich, Switzerland, had been using the site as an online laboratory. The team seeded more than 1700 comments generated by a variety of large language models (LLMs) into the subreddit, without disclosing that they were machine-generated, to gauge people's reactions. Some of the comments mimicked survivors of rape; others posed as a trauma counsellor specialising in abuse.
- Europe > Switzerland > Zürich > Zürich (0.62)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
An Audit Framework for Adopting AI-Nudging on Children
Ganapini, Marianna, Panai, Enrico
This is an audit framework for AI-nudging. Unlike the static form of nudging usually discussed in the literature, we focus here on a type of nudging that uses large amounts of data to provide personalized, dynamic feedback and interfaces. We call this AI-nudging (Lanzing, 2019, p. 549; Yeung, 2017). The ultimate goal of the audit outlined here is to ensure that an AI system that uses nudges maintains a level of moral inertia and neutrality by complying with the audit's recommendations, requirements, and suggestions (in other words, its criteria). In the case of unintended negative consequences, the audit suggests risk-mitigation mechanisms that can be put in place; in the case of unintended positive consequences, it suggests reinforcement mechanisms. Sponsored by the IBM-Notre Dame Tech Ethics Lab.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > New York (0.04)
- North America > United States > Indiana > St. Joseph County > Notre Dame (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
ICMR releases the country's first ethical guidelines for application of AI in biomedical research and healthcare
The Indian Council of Medical Research (ICMR) has released the country's first Ethical Guidelines for Application of Artificial Intelligence in Biomedical Research and Healthcare, aimed at creating "an ethics framework which can assist in the development, deployment, and adoption of AI-based solutions" in the fields specified. Through this, they hope to make "AI-assisted platforms available for the benefit of largest section of common people with safety and highest precision possible," while also addressing emerging ethical challenges when it comes to AI in biomedical research and healthcare delivery. The document, prepared by the Department of Health Research and ICMR Artificial Intelligence Cell, Delhi, will be updated as and when the need arises, said a senior Health Ministry official. Developed through extensive discussions with experts and ethicists, the guidelines include sections on ethical principles, guiding principles for stakeholders, an ethics review process, governance of AI use, and informed consent. "It [the document] is intended for all stakeholders involved in research on AI in biomedical research and healthcare, including creators, developers, researchers, clinicians, ethics committees, institutions, sponsors, and funding organizations," noted Dr. Rajiv Bahl, director-general, ICMR.
- Health & Medicine (1.00)
- Information Technology > Security & Privacy (0.39)
Why you need an organizational AI ethics committee to do AI right
Artificial intelligence (AI) may still feel a bit futuristic to many, but the average consumer would be surprised at where it can be found. It is no longer a science-fiction concept confined to Hollywood feature films, or top-secret technology found only in the computer science labs of the Googles and Metas of the world; quite the contrary. Today, AI is not only behind many of our online shopping and social media recommendations, customer service inquiries and loan approvals, but is also actively creating music, winning art contests and beating humans at games that have existed for thousands of years. Given this growing awareness gap around AI's expansive capabilities, a critical first step for any organization or business that uses or provides AI should be forming an AI ethics committee.
GDPR Compliant Collection of Therapist-Patient-Dialogues
Mayer, Tobias, Warikoo, Neha, Grimm, Oliver, Reif, Andreas, Gurevych, Iryna
According to the Global Burden of Disease list provided by the World Health Organization (WHO), mental disorders are among the most debilitating disorders. To improve diagnosis and therapy effectiveness, researchers have in recent years tried to identify individual biomarkers. Gathering neurobiological data, however, is costly and time-consuming. Another potential source of information, one that is already part of the clinical routine, is therapist-patient dialogues. While there are some pioneering works investigating the role of language as a predictor for various therapeutic parameters, for example the patient-therapist alliance, there are no large-scale studies. A major obstacle to conducting such studies is the availability of sizeable datasets, which are needed to train machine learning models. While these conversations are part of clinicians' daily routine, gathering them is usually hindered by ethical (purpose of data usage), legal (data privacy) and technical (data formatting) limitations. Some of these limitations are particular to the domain of therapy dialogues, such as the increased difficulty of anonymisation or the transcription of the recordings. In this paper, we elaborate on the challenges we faced in starting our collection of therapist-patient dialogues in a psychiatry clinic under the General Data Protection Regulation of the European Union, with the goal of using the data for Natural Language Processing (NLP) research. We give an overview of each step in our procedure and point out potential pitfalls, to motivate further research in this field.
- Europe > Germany > Hesse > Darmstadt Region > Darmstadt (0.05)
- North America > United States > Louisiana > Orleans Parish > New Orleans (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)
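The anonymisation obstacle the abstract mentions can be made concrete with a minimal sketch. This is not the authors' pipeline; the name lexicon and date pattern are invented for illustration, and a real clinical workflow would need far more than rule-based redaction:

```python
import re

# Minimal rule-based pseudonymisation sketch (hypothetical, not the
# authors' pipeline): replace a small list of known names and obvious
# date strings before a transcript leaves the clinic.

NAMES = {"Anna", "Herr Schmidt"}                        # invented lexicon
DATE_RE = re.compile(r"\b\d{1,2}\.\d{1,2}\.\d{2,4}\b")  # e.g. 12.03.2021

def pseudonymise(utterance: str) -> str:
    for name in NAMES:
        utterance = utterance.replace(name, "[NAME]")
    return DATE_RE.sub("[DATE]", utterance)

print(pseudonymise("Anna saw Herr Schmidt on 12.03.2021."))
# -> [NAME] saw [NAME] on [DATE].
```

Even this toy version shows why therapy dialogues are hard: speakers refer to people, places and events in free text, so a fixed lexicon and a few regexes will always miss identifiers, which is part of the "increased difficulty of anonymisation" the paper points to.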
Why You Need an AI Ethics Committee
Artificial intelligence poses a lot of ethical risks to businesses: It may promote bias, lead to invasions of privacy, and in the case of self-driving cars, even cause deadly accidents. Because AI is built to operate at scale, when a problem occurs, the impact is huge. Consider the AI that many health systems were using to spot high-risk patients in need of follow-up care. Researchers found that only 18% of the patients identified by the AI were Black—even though Black people accounted for 46% of the sickest patients. And the discriminatory AI was applied to at least 100 million patients. The sources of problems in AI are many. For starters, the data used to train it may reflect historical bias. The health systems’ AI was trained with data showing that Black people received fewer health care resources, leading the algorithm to infer that they needed less help. The data may undersample certain subpopulations. Or the wrong goal may be set for the AI. Such issues aren’t easy to address, and they can’t be remedied with a technical fix. You need a committee—comprising ethicists, lawyers, technologists, business strategists, and bias scouts—to review any AI your firm develops or buys to identify the ethical risks it presents and address how to mitigate them. This article describes how to set up such a committee effectively.
- North America > United States > Florida > Broward County (0.04)
- Asia > India (0.04)
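The mechanism the article describes, an algorithm trained on a biased proxy such as past healthcare spending rather than actual illness, can be seen in a toy sketch. The numbers below are invented for illustration and are not the study's data:

```python
# Toy illustration of the proxy-label problem: ranking patients by
# past cost instead of true illness under-selects a group that
# historically received less care. All figures are hypothetical.

patients = [
    # (group, true_illness_score, past_cost)
    ("A", 9, 9000),   # group A: spending tracks illness
    ("A", 5, 5000),
    ("B", 9, 4000),   # group B: equally sick, historically lower spend
    ("B", 5, 2000),
]

def top_k(key_index, k=2):
    """Return the groups of the top-k patients ranked by one field."""
    ranked = sorted(patients, key=lambda p: p[key_index], reverse=True)
    return [p[0] for p in ranked[:k]]

print(top_k(2))  # rank by past cost (the proxy) -> ['A', 'A']
print(top_k(1))  # rank by true illness         -> ['A', 'B']
```

The model's objective is satisfied perfectly, yet the cost-based ranking never flags the equally sick group B patient, which mirrors how the health systems' AI inferred that Black patients needed less help.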
Money and mind control: Big Tech slams ethics brakes on AI
SAN FRANCISCO (REUTERS) - In September last year, Google's cloud unit looked into using artificial intelligence (AI) to help a financial firm decide whom to lend money to. It turned down the client's idea after weeks of internal discussions, deeming the project too ethically dicey because the AI technology could perpetuate biases like those around race and gender. Since early last year, Google has also blocked new AI features analysing emotions, fearing cultural insensitivity, while Microsoft restricted software mimicking voices and IBM rejected a client request for an advanced facial-recognition system. All these technologies were curbed by panels of executives or other leaders, according to interviews with AI ethics chiefs at the three US technology giants. Reported here for the first time, their vetoes and the deliberations that led to them reflect a nascent industry-wide drive to balance the pursuit of lucrative AI systems with a greater consideration of social responsibility.
- Europe (0.30)
- North America > United States > California > San Francisco County > San Francisco (0.25)
- Information Technology > Security & Privacy (0.48)
- Banking & Finance > Loans (0.36)
- Information Technology > Services (0.32)