social good
Good Intentions Beyond ACL: Who Does NLP for Social Good, and Where?
LeFevre, Grace, Zeng, Qingcheng, Leif, Adam, Jewell, Jason, Peskoff, Denis, Voigt, Rob
The social impact of Natural Language Processing (NLP) is increasingly important, with a rising community focus on initiatives related to NLP for Social Good (NLP4SG). Indeed, in recent years, almost 20% of all papers in the ACL Anthology address topics related to social good as defined by the UN Sustainable Development Goals (Adauto et al., 2023). In this study, we take an author- and venue-level perspective to map the landscape of NLP4SG, quantifying the proportion of work addressing social good concerns both within and beyond the ACL community, by both core ACL contributors and non-ACL authors. This approach reveals two surprising facts about the landscape of NLP4SG. First, ACL authors are dramatically more likely to do work addressing social good concerns when publishing in venues outside of ACL. Second, the vast majority of publications using NLP techniques to address concerns of social good are produced by non-ACL authors in venues outside of ACL. We discuss the implications of these findings for agenda-setting considerations in the ACL community related to NLP4SG.
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > California > Yolo County > Davis (0.04)
- North America > Canada > Ontario > Toronto (0.04)
- Asia > Singapore (0.04)
- Social Sector (1.00)
- Health & Medicine > Pharmaceuticals & Biotechnology (0.68)
The Hardness of Achieving Impact in AI for Social Impact Research: A Ground-Level View of Challenges & Opportunities
Majumdar, Aditya, Zhang, Wenbo, Prawal, Kashvi, Yadav, Amulya
In an attempt to tackle the UN SDGs, AI for Social Impact (AI4SI) projects focus on harnessing AI to address societal issues in areas such as healthcare and social justice. Unfortunately, despite growing interest in AI4SI, achieving tangible, on-the-ground impact remains a significant challenge. For example, identifying and engaging motivated collaborators who are willing to co-design and deploy AI-based solutions in real-world settings is often difficult. Even when such partnerships are established, many AI4SI projects "fail" to progress beyond the proof-of-concept stage and hence are unable to transition to at-scale, production-level solutions. Furthermore, the unique challenges faced by AI4SI researchers are not always fully recognized within the broader AI community, where such work is sometimes viewed as primarily applied and not aligned with the traditional criteria for novelty emphasized in core AI venues. This paper shines a light on the diverse challenges faced in AI4SI research by diagnosing the multitude of factors that prevent AI4SI partnerships from achieving real-world impact on the ground. Drawing on semi-structured interviews with six leading AI4SI researchers, complemented by the authors' own lived experiences in conducting AI4SI research, we examine the day-to-day difficulties faced in developing and deploying socially impactful AI solutions. Through thematic analysis, we identify structural and organizational, communication, collaboration, and operational challenges as key barriers to deployment. While there are no easy fixes, we synthesize best practices and actionable strategies drawn from these interviews and our own work in this space. In doing so, we hope this paper serves as a practical reference guide for AI4SI researchers and partner organizations seeking to engage more effectively in socially impactful AI collaborations.
- North America > United States > California > Los Angeles County > Los Angeles (0.04)
- Asia > Singapore (0.04)
- Asia > India (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Personal > Interview (1.00)
- Social Sector (1.00)
- Government (1.00)
- Transportation > Infrastructure & Services (0.93)
- (5 more...)
- Information Technology > Artificial Intelligence > Representation & Reasoning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
- Information Technology > Data Science > Data Mining (0.93)
- Information Technology > Artificial Intelligence > Applied AI (0.66)
Zero-shot Persuasive Chatbots with LLM-Generated Strategies and Information Retrieval
Furumai, Kazuaki, Legaspi, Roberto, Vizcarra, Julio, Yamazaki, Yudai, Nishimura, Yasutaka, Semnani, Sina J., Ikeda, Kazushi, Shi, Weiyan, Lam, Monica S.
Persuasion plays a pivotal role in a wide range of applications, from health intervention to the promotion of social good. Persuasive chatbots can accelerate the positive effects of persuasion in such applications. Existing methods rely on fine-tuning persuasive chatbots with task-specific training data that is costly, if not infeasible, to collect. To address this issue, we propose a method that leverages the generalizability and inherent persuasive abilities of large language models (LLMs) to create effective and truthful persuasive chatbots for any given domain in a zero-shot manner. Unlike previous studies, which used pre-defined persuasion strategies, our method first uses an LLM to generate responses, then extracts the strategies used on the fly, and replaces any unsubstantiated claims in the response with retrieved facts supporting the strategies. We applied our chatbot, PersuaBot, to three significantly different domains needing persuasion skills: donation solicitation, recommendations, and health intervention. Our experiments on simulated and human conversations show that our zero-shot approach is more persuasive than prior work while achieving factual accuracy surpassing state-of-the-art knowledge-oriented chatbots. Our study demonstrates that persuasive chatbots, when employed responsibly for social good, can be enablers of positive individual and social change.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > Canada > Ontario > Toronto (0.04)
- Africa > Mali (0.04)
- (15 more...)
- Social Sector (1.00)
- Health & Medicine > Epidemiology (0.68)
- Health & Medicine > Therapeutic Area > Immunology (0.46)
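The generate-then-substantiate loop described in the PersuaBot abstract above can be sketched as follows. This is a hypothetical, simplified illustration: `generate_draft` stands in for the LLM generation step and `retrieve_facts` for the retrieval system, with naive word-overlap matching in place of real dense retrieval; none of these names come from the paper.

```python
def generate_draft(topic):
    # Stand-in for the LLM generation step: returns a list of claims.
    return [
        "Charity X feeds 10 million children a year.",  # unsubstantiated
        "Small donations add up to large impact.",      # supportable
    ]

def retrieve_facts(claim, fact_store):
    # Naive lexical retrieval: return a stored fact sharing enough words
    # with the claim; a real system would use dense retrieval instead.
    claim_words = set(claim.lower().split())
    for fact in fact_store:
        if len(claim_words & set(fact.lower().split())) >= 3:
            return fact
    return None

def substantiate(topic, fact_store):
    # Replace each claim with a retrieved supporting fact when one exists
    # (grounding the claim), and drop claims with no support at all.
    out = []
    for claim in generate_draft(topic):
        fact = retrieve_facts(claim, fact_store)
        out.append(fact if fact else "[claim removed: no supporting evidence]")
    return out
```

In the paper's full pipeline this substitution is guided by the persuasion strategies extracted from the draft; the sketch keeps only the claim-grounding step.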
AIhub coffee corner: Open vs closed science
This month, we consider the debate around open vs closed science. Joining the conversation this time are: Joydeep Biswas (The University of Texas at Austin), Sanmay Das (George Mason University), Tom Dietterich (Oregon State University), Sabine Hauert (University of Bristol) and Sarit Kraus (Bar-Ilan University). Sabine Hauert: There have been many discussions online recently about the topic of open vs closed science. We've seen a lot of people advocating for open AI (not the company, but being open generally, just to clarify!). I was at an event recently in preparation for the AI summit in the UK.
- North America > United States > Texas > Travis County > Austin (0.25)
- North America > United States > Oregon (0.25)
- Europe > United Kingdom (0.25)
AI Safety: Necessary, but insufficient and possibly problematic
This article critically examines the recent hype around AI safety. We first note that the AI safety hype is dominated by governments and corporations, and contrast it with other avenues within AI research on advancing social good. We consider what 'AI safety' actually means and outline the dominant concepts that the digital footprint of AI safety aligns with. We posit that AI safety has a nuanced and uneasy relationship with transparency and other allied notions associated with societal good, indicating that it is an insufficient notion if the goal is societal good in a broad sense. We note that the AI safety debate has already influenced some regulatory efforts in AI, perhaps in not so desirable directions. We also share our concerns about how AI safety may normalize AI that advances structural harm by giving exploitative and harmful AI a veneer of safety.
- North America > United States (0.30)
- Europe > United Kingdom (0.29)
- Europe > France (0.05)
- (2 more...)
- Social Sector (1.00)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Regional Government > Europe Government (0.47)
Adversarial Machine Learning for Social Good: Reframing the Adversary as an Ally
Al-Maliki, Shawqi, Qayyum, Adnan, Ali, Hassan, Abdallah, Mohamed, Qadir, Junaid, Hoang, Dinh Thai, Niyato, Dusit, Al-Fuqaha, Ala
Deep Neural Networks (DNNs) have been the driving force behind many of the recent advances in machine learning. However, research has shown that DNNs are vulnerable to adversarial examples -- input samples that have been perturbed to force DNN-based models to make errors. As a result, Adversarial Machine Learning (AdvML) has gained a lot of attention, and researchers have investigated these vulnerabilities in various settings and modalities. In addition, DNNs have also been found to incorporate embedded bias and often produce unexplainable predictions, which can result in anti-social AI applications. The emergence of new AI technologies that leverage Large Language Models (LLMs), such as ChatGPT and GPT-4, increases the risk of producing anti-social applications at scale. AdvML for Social Good (AdvML4G) is an emerging field that repurposes the AdvML bug to invent pro-social applications. Regulators, practitioners, and researchers should collaborate to encourage the development of pro-social applications and hinder the development of anti-social ones. In this work, we provide the first comprehensive review of the emerging field of AdvML4G. This paper encompasses a taxonomy that highlights the emergence of AdvML4G, a discussion of the differences and similarities between AdvML4G and AdvML, a taxonomy covering social good-related concepts and aspects, an exploration of the motivations behind the emergence of AdvML4G at the intersection of ML4G and AdvML, and an extensive summary of the works that utilize AdvML4G as an auxiliary tool for innovating pro-social applications. Finally, we elaborate upon various challenges and open research issues that require significant attention from the research community.
The Inventor Behind a Rush of AI Copyright Suits Is Trying to Show His Bot Is Sentient
"A Recent Entrance to Paradise" is a pixelated pastoral scene of train tracks running under a moss-flecked bridge. It was, according to its creator's creator, drawn and named in 2012 by an artificial intelligence called DABUS (Device for the Autonomous Bootstrapping of Unified Sentience). Thaler, a Missouri-based inventor and AI researcher, has become something of a serial litigant on behalf of DABUS. Judges have swatted away similar lawsuits in the European Union, the United States, and, eventually, on appeal, in Australia. Thaler is appealing the decision.
- Oceania > Australia (0.26)
- North America > United States > Missouri (0.26)
- Europe > United Kingdom (0.07)
Adversary for Social Good: Leveraging Adversarial Attacks to Protect Personal Attribute Privacy
Li, Xiaoting, Chen, Lingwei, Wu, Dinghao
Social media has drastically reshaped the world, allowing billions of people to engage in interactive environments and conveniently create and share content with the public. Among these, text data (e.g., tweets, blogs) sustains basic yet important social activities and generates a rich source of user-oriented information. While explicitly sensitive user data such as credentials are protected by all available means, disclosure of personal private attributes (e.g., age, gender, location) through inference attacks is challenging to avoid, especially now that powerful natural language processing (NLP) techniques can automate attribute inference from implicit text data. This puts users' attribute privacy at risk. To address this challenge, we leverage the inherent vulnerability of machine learning to adversarial attacks and design a novel text-space Adversarial attack for Social Good, called Adv4SG. In other words, we cast the problem of protecting personal attribute privacy as an adversarial attack formulation over social media text data, defending against NLP-based attribute inference attacks. More specifically, Adv4SG applies a sequence of word perturbations under given constraints so that the probed attribute can no longer be identified correctly. Unlike prior work, we advance Adv4SG by accounting for social media properties and introducing cost-effective mechanisms to expedite attribute obfuscation over text data in the black-box setting. Extensive experiments on real-world social media datasets demonstrate that our method effectively degrades inference accuracy at lower computational cost across different attribute settings, substantially mitigating the impact of inference attacks and achieving strong user attribute privacy protection.
- Social Sector (1.00)
- Information Technology > Security & Privacy (1.00)
- Government > Military (1.00)
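The word-perturbation idea in the Adv4SG abstract above can be illustrated with a minimal greedy sketch, assuming a toy cue-word classifier as the black-box inference attack. The classifier, the substitution table, and all names here are illustrative stand-ins, not components from the paper.

```python
# Hypothetical cue word -> innocuous substitute table.
GENDER_CUES = {"makeup": "cosmetics", "football": "the game"}

def toy_attribute_classifier(text):
    # Stand-in inference attack: predicts an attribute from a cue word.
    return "female" if "makeup" in text else "unknown"

def obfuscate(text, true_attr):
    # Greedy black-box perturbation: substitute cue words one at a time,
    # querying only the classifier's output, and stop as soon as the
    # prediction no longer matches the true attribute.
    words = text.split()
    for i, w in enumerate(words):
        if toy_attribute_classifier(" ".join(words)) != true_attr:
            break  # attribute no longer inferable; stop perturbing
        if w in GENDER_CUES:
            words[i] = GENDER_CUES[w]
    return " ".join(words)
```

The real attack scores candidate perturbations under semantic constraints rather than using a fixed table; the sketch only shows the black-box stop-when-misclassified loop.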
Linking Alternative Fuel Vehicles Adoption with Socioeconomic Status and Air Quality Index
Singh, Anuradha, Yadav, Jyoti, Shrestha, Sarahana, Varde, Aparna S.
This study links the potential widespread adoption of alternative fuel vehicles with the socio-economic status of the respective consumers and the impact on the resulting air quality index. Research in this area aims to leverage machine learning techniques to promote appropriate policies for the proliferation of alternative fuel vehicles, such as electric vehicles, with due justice to different population groups. The Pearson correlation coefficient is used to model the relationships between socio-economic data, the air quality index, and data on alternative fuel vehicles. Linear regression is used for predictive modeling of the air quality index as a function of alternative fuel vehicle adoption, based on socio-economic factors. This work exemplifies artificial intelligence for social good.
- Transportation > Ground > Road (1.00)
- Transportation > Electric Vehicle (1.00)
- Social Sector (1.00)
- (3 more...)
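The two modeling steps the abstract names (a Pearson correlation, then a linear regression predicting the air quality index from adoption) can be sketched with hypothetical numbers; the data below are made up for illustration and are not from the study.

```python
from math import sqrt

ev_share = [0.02, 0.05, 0.08, 0.12, 0.15]  # alternative-fuel vehicle share (made up)
aqi      = [95.0, 88.0, 80.0, 71.0, 66.0]  # air quality index (lower = cleaner, made up)

def pearson(x, y):
    # Pearson correlation coefficient: covariance over product of spreads.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fit_line(x, y):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

r = pearson(ev_share, aqi)             # strongly negative on this toy data
slope, intercept = fit_line(ev_share, aqi)
```

On this toy data the correlation is strongly negative (more adoption, lower AQI) and the fitted slope is negative, the qualitative pattern a policy analysis would look for.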
Exploring the Role of Natural Language Processing in Enhancing ESG Practices and Assessing Mental Illness
Artificial intelligence (AI) is the field of study focused on creating intelligent machines. Within AI, machine learning (ML) is a subfield that focuses on the ability of machines to learn and adapt from data input without requiring explicit programming. In recent years, there has been growing interest in applying AI across industries and sectors, given its ability to process and analyze large amounts of data quickly and accurately. Following up on my article AI for Social Good, Part 1, I decided to write a part 2. This article focuses mainly on ESG and mental illness, exploring the role of natural language processing in enhancing ESG practices and assessing mental illness. One area where AI can be particularly impactful is environmental, social, and governance (ESG) [1] initiatives.
- Social Sector (1.00)
- Health & Medicine > Therapeutic Area > Psychiatry/Psychology (1.00)