negative impact
OpenAI Staffer Quits, Alleging Company's Economic Research Is Drifting Into AI Advocacy
Four sources close to the situation claim OpenAI has become hesitant to publish research on the negative impact of AI. The company says it has only expanded the economic research team's scope. OpenAI has allegedly become more guarded about publishing research that highlights the potentially negative impact that AI could have on the economy, four people familiar with the matter tell WIRED. The perceived pullback has contributed to the departure of at least two employees on OpenAI's economic research team in recent months, according to the same four people, who spoke to WIRED on the condition of anonymity. One of these employees, Tom Cunningham, left the company entirely in September after concluding it had become difficult to publish high-quality research, WIRED has learned.
- North America > United States > New York (0.05)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Europe > Slovakia (0.05)
- Europe > Czechia (0.05)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
Irresponsible AI: big tech's influence on AI research and associated impacts
Hernandez-Garcia, Alex, Volokhova, Alexandra, Williams, Ezekiel, Kabakibo, Dounia Shaaban
The accelerated development, deployment and adoption of artificial intelligence systems has been fuelled by the increasing involvement of big tech. This has been accompanied by increasing ethical concerns and intensified societal and environmental impacts. In this article, we review and discuss how these phenomena are deeply entangled. First, we examine the growing and disproportionate influence of big tech in AI research and argue that its drive for scaling and general-purpose systems is fundamentally at odds with the responsible, ethical, and sustainable development of AI. Second, we review key current environmental and societal negative impacts of AI and trace their connections to big tech and its underlying economic incentives. Finally, we argue that while it is important to develop technical and regulatory approaches to these challenges, these alone are insufficient to counter the distortion introduced by big tech's influence. We thus review and propose alternative strategies that build on the responsibility of implicated actors and collective action.
- North America > Canada > Quebec > Montreal (0.05)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.05)
- Asia > Middle East > Israel (0.05)
- (5 more...)
- Overview (1.00)
- Research Report (0.82)
- Law (1.00)
- Energy (0.88)
- Information Technology > Services (0.69)
- Government > Military (0.46)
AI Security Map: Holistic Organization of AI Security Technologies and Impacts on Stakeholders
Kato, Hiroya, Kita, Kentaro, Hasegawa, Kento, Hidano, Seira
As the social implementation of AI has steadily progressed, research and development related to AI security has also increased. However, existing studies have been limited to organizing related techniques, attacks, defenses, and risks in terms of specific domains or AI elements. Thus, it is extremely difficult to understand the relationships among them and how negative impacts on stakeholders are brought about. In this paper, we argue that the knowledge, technologies, and social impacts related to AI security should be holistically organized to help understand the relationships among them. To this end, we first develop an AI security map that holistically organizes the interrelationships among elements related to AI security as well as negative impacts on information systems and stakeholders. This map consists of two aspects: the information system aspect (ISA) and the external influence aspect (EIA). The elements that AI should fulfill within information systems are classified under the ISA. The EIA includes elements that affect stakeholders as a result of AI being attacked or misused. For each element, corresponding negative impacts are identified. By referring to the AI security map, one can understand the potential negative impacts, along with their causes and countermeasures. Additionally, our map helps clarify how negative impacts on AI-based systems relate to those on stakeholders. We present several findings newly obtained by referring to our map, and we provide recommendations and open problems to guide the AI security community going forward.
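As a rough illustration of what this kind of holistic organization can mean in practice, here is a small, hypothetical sketch of a map-like data structure that links ISA elements to EIA impacts and countermeasures; the concrete element names and links are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch of an AI-security-map-style structure: ISA elements
# (properties AI should fulfill inside an information system) link to EIA
# elements (effects on stakeholders), each with negative impacts and
# countermeasures. All concrete entries are illustrative, not the paper's map.
from dataclasses import dataclass, field

@dataclass
class MapElement:
    name: str
    aspect: str                                        # "ISA" or "EIA"
    negative_impacts: list[str] = field(default_factory=list)
    countermeasures: list[str] = field(default_factory=list)
    related: list[str] = field(default_factory=list)   # names of linked elements

security_map = {
    "model integrity": MapElement(
        "model integrity", "ISA",
        negative_impacts=["poisoned model makes unsafe decisions"],
        countermeasures=["training-data provenance checks"],
        related=["user safety"]),
    "user safety": MapElement(
        "user safety", "EIA",
        negative_impacts=["physical or financial harm to end users"],
        countermeasures=["human oversight of high-risk outputs"],
        related=["model integrity"]),
}

def stakeholder_impacts(isa_name: str) -> list[str]:
    """Follow links from an ISA element to the EIA impacts it can lead to."""
    return [impact
            for linked in security_map[isa_name].related
            if security_map[linked].aspect == "EIA"
            for impact in security_map[linked].negative_impacts]

print(stakeholder_impacts("model integrity"))
```

The point of such a structure is traceability: starting from a compromised system-side property, one can enumerate the stakeholder-facing impacts and the countermeasures attached along the way.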
- North America > United States (0.95)
- Asia > Japan > Honshū > Kantō > Saitama Prefecture > Saitama (0.04)
- Research Report (1.00)
- Overview (1.00)
Climate, land use and other drivers' impacts on island ecosystem services: a global review
Moustakas, Aristides, Zemah-Shamir, Shiri, Tase, Mirela, Zotos, Savvas, Demirel, Nazli, Zoumides, Christos, Christoforidi, Irene, Dindaroglu, Turgay, Albayrak, Tamer, Ayhan, Cigdem Kaptan, Fois, Mauro, Manolaki, Paraskevi, Sandor, Attila D., Sieber, Ina, Stamatiadou, Valentini, Tzirkalli, Elli, Vogiatzakis, Ioannis N., Zemah-Shamir, Ziv, Zittis, George
Islands are diversity hotspots and are vulnerable to environmental degradation, climate variations, land use changes, and societal crises. These factors can have interactive impacts on ecosystem services. The study reviewed a large number of papers on the climate change-islands-ecosystem services topic worldwide. The potential inclusion of land use changes and other drivers of impacts on ecosystem services was also recorded sequentially. The study sought to investigate the impacts of climate change, land use change, and other non-climatic driver changes on island ecosystem services. The explanatory variables examined were divided into two categories: environmental variables and methodological ones. Environmental variables include sea zone geographic location, ecosystem, ecosystem services, climate, land use, and other driver variables. Methodological variables include consideration of policy interventions, uncertainty assessment, cumulative effects of climate change, synergistic effects of climate change with land use change and other anthropogenic and environmental drivers, and the diversity of variables used in the analysis. Machine learning and statistical methods were used to analyze their effects on island ecosystem services. Negative climate change impacts on ecosystem services are better quantified by land use change or other non-climatic driver variables than by climate variables. The synergy of land use change with climate change modulates the impact outcome and is critical for better impact assessment. Analyzed together, there is little evidence of more pronounced impacts for a specific sea zone, ecosystem, or ecosystem service. Climate change impacts may be underestimated because most studies deploy only a single climate variable. Policy interventions exhibit low classification accuracy in quantifying impacts, indicating insufficient efficacy or insufficient integration in the studies.
- Asia > Middle East > Republic of Türkiye (0.93)
- North America > Canada (0.46)
- Africa (0.46)
- (14 more...)
From Lived Experience to Insight: Unpacking the Psychological Risks of Using AI Conversational Agents
Chandra, Mohit, Naik, Suchismita, Ford, Denae, Okoli, Ebele, De Choudhury, Munmun, Ershadi, Mahsa, Ramos, Gonzalo, Hernandez, Javier, Bhattacharjee, Ananya, Warreth, Shahed, Suh, Jina
The recent gain in popularity of AI conversational agents has led to their increased use for improving productivity and supporting well-being. While previous research has aimed to understand the risks associated with interactions with AI conversational agents, these studies often fall short of capturing lived experiences. Additionally, psychological risks have often been presented as a sub-category within broader AI-related risks in past taxonomy work, leading to under-representation of the impact of psychological risks of AI use. To address these challenges, our work presents a novel risk taxonomy focusing on the psychological risks of using AI, gathered through the lived experiences of individuals. We employed a mixed-method approach, involving a comprehensive survey of 283 individuals with lived mental health experience and workshops with lived experience experts, to develop a psychological risk taxonomy. Our taxonomy features 19 AI behaviors, 21 negative psychological impacts, and 15 contexts related to individuals. Additionally, we propose a novel multi-path, vignette-based framework for understanding the complex interplay between AI behaviors, psychological impacts, and individual user contexts. Finally, based on feedback obtained from the workshop sessions, we present design recommendations for developing safer and more robust AI agents. Our work offers an in-depth understanding of the psychological risks associated with AI conversational agents and provides actionable recommendations for policymakers, researchers, and developers.
- North America > Canada > Ontario > Toronto (0.14)
- North America > United States > New York > New York County > New York City (0.04)
- Asia > Middle East > Jordan (0.04)
- (7 more...)
- Questionnaire & Opinion Survey (1.00)
- Overview (1.00)
- Research Report > New Finding (0.92)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology > Mental Health (1.00)
- Health & Medicine > Consumer Health (1.00)
- Education (1.00)
- Information Technology > Security & Privacy (0.93)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Personal Assistant Systems (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
Towards Leveraging News Media to Support Impact Assessment of AI Technologies
Allaham, Mowafak, Kieslich, Kimon, Diakopoulos, Nicholas
Expert-driven frameworks for impact assessments (IAs) may inadvertently overlook the effects of AI technologies on the public's social behavior, policy, and the cultural and geographical contexts shaping the perception of AI and the impacts around its use. This research explores the potential of fine-tuning LLMs on negative impacts of AI reported in a diverse sample of articles from 266 news domains spanning 30 countries to incorporate more diversity into IAs. Our findings highlight (1) the potential of fine-tuned open-source LLMs to support IA of AI technologies by generating high-quality negative impacts across four qualitative dimensions: coherence, structure, relevance, and plausibility, and (2) the efficacy of a small open-source LLM (Mistral-7B), fine-tuned on impacts from news media, in capturing a wider range of impact categories that GPT-4 had gaps in covering.
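The fine-tuning setup described above lends itself to a compact illustration. The following is a minimal, hypothetical sketch of LoRA fine-tuning Mistral-7B on (technology, reported negative impact) records; the example records, prompt format, and hyperparameters are assumptions made for illustration, not the authors' actual data or pipeline.

```python
# Hypothetical sketch: LoRA fine-tuning of Mistral-7B on news-derived
# (technology, negative impact) records so the model learns to generate
# plausible negative impacts. Records, prompt format, and hyperparameters
# are illustrative assumptions, not the paper's actual data or settings.
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

model_name = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each record pairs an AI technology with an impact reported in news media.
records = [
    {"text": "Technology: automated resume screening.\n"
             "Reported negative impact: qualified applicants filtered out by "
             "biased ranking models."},
    # ... more news-derived records ...
]

def tokenize(example):
    out = tokenizer(example["text"], truncation=True, max_length=512)
    out["labels"] = out["input_ids"].copy()  # causal LM: predict the text itself
    return out

train_set = Dataset.from_list(records).map(tokenize, remove_columns=["text"])

# A small LoRA adapter keeps the fine-tune tractable on a single GPU.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="impact-lm", num_train_epochs=1,
                           per_device_train_batch_size=1, learning_rate=2e-4),
    train_dataset=train_set,
)
trainer.train()
```

Generated impacts from such a model could then be rated along the four qualitative dimensions the abstract names (coherence, structure, relevance, plausibility).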
- North America > Canada (0.04)
- Oceania > Australia (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- (11 more...)
- Media > News (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- (2 more...)
Stars, Stripes, and Silicon: Unravelling the ChatGPT's All-American, Monochrome, Cis-centric Bias
This paper investigates the challenges associated with bias, toxicity, unreliability, and lack of robustness in large language models (LLMs) such as ChatGPT. It emphasizes that these issues primarily stem from the quality and diversity of data on which LLMs are trained, rather than the model architectures themselves. As LLMs are increasingly integrated into various real-world applications, their potential to negatively impact society by amplifying existing biases and generating harmful content becomes a pressing concern. The paper calls for interdisciplinary efforts to address these challenges. Additionally, it highlights the need for collaboration between researchers, practitioners, and stakeholders to establish governance frameworks, oversight, and accountability mechanisms to mitigate the harmful consequences of biased LLMs.
- North America > United States > California (0.04)
- Europe > Italy > Piedmont > Turin Province > Turin (0.04)
- Research Report (1.00)
- Overview (0.68)
- Health & Medicine (1.00)
- Media > News (0.69)
- Information Technology > Security & Privacy (0.46)
Mitigating the Negative Impact of Over-association for Conversational Query Production
Wang, Ante, Song, Linfeng, Min, Zijun, Xu, Ge, Wang, Xiaoli, Yao, Junfeng, Su, Jinsong
Conversational query generation aims at producing search queries from dialogue histories, which are then used to retrieve relevant knowledge from a search engine to help knowledge-based dialogue systems. Trained to maximize the likelihood of gold queries, previous models suffer from the data hunger issue, and they tend to both drop important concepts from dialogue histories and generate irrelevant concepts at inference time. We attribute these issues to the over-association phenomenon where a large number of gold queries are indirectly related to the dialogue topics, because annotators may unconsciously perform reasoning with their background knowledge when generating these gold queries. We carefully analyze the negative effects of this phenomenon on pretrained Seq2seq query producers and then propose effective instance-level weighting strategies for training to mitigate these issues from multiple perspectives. Experiments on two benchmarks, Wizard-of-Internet and DuSinc, show that our strategies effectively alleviate the negative effects and lead to significant performance gains (2%-5% across automatic metrics and human evaluation). Further analysis shows that our model selects better concepts from dialogue histories and is 10 times more data efficient than the baseline. The code is available at https://github.com/DeepLearnXMU/QG-OverAsso.
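The weighting mechanism is easy to illustrate in isolation. Below is a minimal, hypothetical sketch in which each training instance's loss is scaled by how much its gold query lexically overlaps with the dialogue history, a crude stand-in for detecting over-association; the model, weight function, and data are illustrative, and the authors' actual weighting strategies are in the linked repository.

```python
# Hypothetical sketch of instance-level loss weighting for a Seq2seq query
# producer: gold queries whose tokens barely appear in the dialogue history
# (a rough proxy for over-association) contribute less to the training loss.
# The model, weight function, and data below are illustrative assumptions.
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

def overlap_weight(history: str, query: str) -> float:
    """Fraction of gold-query tokens that also appear in the dialogue history."""
    history_tokens = set(history.lower().split())
    query_tokens = query.lower().split()
    return sum(t in history_tokens for t in query_tokens) / max(len(query_tokens), 1)

histories = ["i love hiking in the alps . have you ever been there ?"]
gold_queries = ["best hiking trails in the alps"]
weights = torch.tensor([overlap_weight(h, q) for h, q in zip(histories, gold_queries)])

inputs = tokenizer(histories, return_tensors="pt", padding=True)
labels = tokenizer(gold_queries, return_tensors="pt", padding=True).input_ids

# Recompute the loss per instance so each example can be scaled by its weight.
logits = model(**inputs, labels=labels).logits
loss_fct = torch.nn.CrossEntropyLoss(reduction="none",
                                     ignore_index=tokenizer.pad_token_id)
per_token = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))
mask = (labels != tokenizer.pad_token_id).float()
per_instance = (per_token.view(labels.size(0), -1) * mask).sum(1) / mask.sum(1)

weighted_loss = (weights * per_instance).mean()
weighted_loss.backward()  # gradients now reflect the per-instance weights
```

The design choice being illustrated is simply that weighting happens at the instance level rather than the token level, so examples judged over-associated are down-weighted as a whole.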
- Asia > China > Fujian Province > Xiamen (0.05)
- North America > United States > Washington > King County > Bellevue (0.04)
- Asia > Indonesia > Bali (0.04)
- Asia > China > Fujian Province > Fuzhou (0.04)
The ethical situation of DALL-E 2
Hogea, Eduard, Rocafortf, Josem
A hot topic in Artificial Intelligence right now is image generation from prompts. DALL-E 2 is one of the biggest names in this domain, as it allows people to create images from simple text inputs to even more complicated ones. The company that made this possible, OpenAI, has assured everyone who visits their website that "Our mission is to ensure that artificial general intelligence benefits all humanity". A noble idea, in our opinion, which also stood as the motive behind our choosing this subject. This paper analyzes the ethical implications of an AI image-generation system, with an emphasis on how society is responding to it, how it probably will respond, and how it should respond if all the right measures are taken.
- Health & Medicine (1.00)
- Education (1.00)
- Information Technology > Security & Privacy (0.94)
Societal Adaptation to Advanced AI
Bernardi, Jamie, Mukobi, Gabriel, Greaves, Hilary, Heim, Lennart, Anderljung, Markus
Existing strategies for managing risks from advanced AI systems often focus on affecting what AI systems are developed and how they diffuse. However, this approach becomes less feasible as the number of developers of advanced AI grows, and impedes beneficial use-cases as well as harmful ones. In response, we urge a complementary approach: increasing societal adaptation to advanced AI, that is, reducing the expected negative impacts from a given level of diffusion of a given AI capability. We introduce a conceptual framework which helps identify adaptive interventions that avoid, defend against and remedy potentially harmful uses of AI systems, illustrated with examples in election manipulation, cyberterrorism, and loss of control to AI decision-makers. We discuss a three-step cycle that society can implement to adapt to AI. Increasing society's ability to implement this cycle builds its resilience to advanced AI. We conclude with concrete recommendations for governments, industry, and third-parties.
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- Government > Voting & Elections (1.00)
- (6 more...)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- (2 more...)