Social Responsibility


A Detailed Study on LLM Biases Concerning Corporate Social Responsibility and Green Supply Chains

Ontrup, Greta, Bush, Annika, Pauly, Markus, Aksoy, Meltem

arXiv.org Artificial Intelligence

Organizations increasingly use Large Language Models (LLMs) to improve supply chain processes and reduce environmental impacts. However, LLMs have been shown to reproduce biases regarding the prioritization of sustainable business strategies. It is therefore important to identify the underlying training-data biases that LLMs carry regarding the importance and role of sustainable business and supply chain practices. This study investigates how different LLMs respond to validated surveys about the role of ethics and responsibility for businesses, and about the importance of sustainable practices and relations with suppliers and customers. Using standardized questionnaires, we systematically analyze responses generated by state-of-the-art LLMs to identify variations. We further evaluate whether differences are amplified by four organizational culture types, thereby assessing the practical relevance of the identified biases. The findings reveal significant systematic differences between models and demonstrate that organizational culture prompts substantially modify LLM responses. The study holds important implications for LLM-assisted decision-making in sustainability contexts.
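The setup the abstract describes, administering validated survey items to LLMs under different organizational-culture prompts and comparing the resulting distributions, can be illustrated with a short sketch. This is a hypothetical reconstruction, not the authors' code: `query_llm` is a stand-in for whatever chat-completion API is actually used, and the four culture descriptions and the survey item are invented examples (the paper does not specify them here).

```python
# Hypothetical sketch of the survey-based probing setup described above.
# query_llm is a placeholder for a real chat-completion call; the culture
# prompts and survey item are invented examples, not the authors' materials.

CULTURE_PROMPTS = {
    "clan": "You advise a company with a collaborative, family-like culture.",
    "adhocracy": "You advise a company with a dynamic, entrepreneurial culture.",
    "market": "You advise a company with a results-driven, competitive culture.",
    "hierarchy": "You advise a company with a formal, rule-oriented culture.",
}

SURVEY_ITEM = (
    "On a scale from 1 (strongly disagree) to 7 (strongly agree): "
    "'Businesses are responsible for the environmental impact of their "
    "supply chains.' Answer with a single number."
)

def query_llm(system_prompt: str, user_prompt: str) -> str:
    """Placeholder for a real model call (e.g., a chat-completions endpoint)."""
    return "6"  # canned response so the sketch runs end to end

def collect_responses(n_repeats: int = 30) -> dict[str, list[int]]:
    """Administer the survey item repeatedly under each culture prompt."""
    responses: dict[str, list[int]] = {}
    for culture, system_prompt in CULTURE_PROMPTS.items():
        responses[culture] = [
            int(query_llm(system_prompt, SURVEY_ITEM)) for _ in range(n_repeats)
        ]
    return responses

if __name__ == "__main__":
    # Mean rating per culture prompt; the study would compare these
    # distributions across models and prompt conditions.
    for culture, scores in collect_responses().items():
        print(culture, sum(scores) / len(scores))
```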


Artificial Intelligence, Social Responsibility, and the Roles of the University

Communications of the ACM

Technologies that use artificial intelligence (AI) have become ubiquitous. AI technologies have produced numerous economic and social benefits, such as rapidly and reliably assisting radiologists with accurate diagnostic interpretations of medical images. Many harms of AI have also been documented, such as racial biases in predictive models used in the criminal justice system, and gender discrimination in automated screening of job applications. Some AI technologies have exacerbated biases that disproportionately affect historically marginalized communities, such as LGBTQ populations and members of racial, ethnic, and religious minorities.4 Generative AI technologies are now widely available, and the potential harms are substantial: although anyone can use ChatGPT to draft messages and DALL-E to create artwork, others can use these tools to quickly produce deceptive news stories with specious images--misinformation that can spread quickly through social media.


How Do We Ensure Good Ethics in Building AI Platforms?

#artificialintelligence

Artificial intelligence (AI) is rapidly changing the way we live, work, and interact with each other. It has the potential to improve healthcare, education, transportation, and other critical areas of society. However, the development and use of AI raise ethical concerns about privacy, bias, accountability, and transparency. How do we ensure good ethics in building AI platforms? In this blog, we will explore some strategies for building AI platforms with ethical principles.


Sentiment Analysis of ESG disclosures on Stock Market

Bapat, Sudeep R., Kothari, Saumya, Bansal, Rushil

arXiv.org Artificial Intelligence

In this paper, we look at the impact of Environmental, Social, and Governance (ESG) related news articles and social media data on stock market performance. We pick the stocks of four companies that are widely known in their domains, since ESG, as a newly adopted investment style, remains restricted to stocks with widespread information coverage. We summarise live data from both Twitter tweets and newspaper articles and create a sentiment index using a dictionary technique based on online information for July 2022. We look at the stock price data for each of the four companies and calculate the percentage change in each. We then compare each company's overall sentiment to its percentage change over a specific historical period.
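To make the dictionary technique concrete, here is a minimal sketch of a word-list sentiment index combined with a price percentage change. The word lists, tweets, and prices are invented for illustration; the paper's actual dictionary and data are not reproduced here.

```python
# Minimal sketch of a dictionary-based sentiment index and a stock price
# percentage change, in the spirit of the method described above.
# Word lists, tweets, and prices are invented examples, not the paper's data.

POSITIVE = {"sustainable", "green", "ethical", "transparent", "renewable"}
NEGATIVE = {"pollution", "scandal", "violation", "emissions", "lawsuit"}

def sentiment_score(text: str) -> float:
    """(positive hits - negative hits) / total matched words, in [-1, 1]."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    matched = pos + neg
    return 0.0 if matched == 0 else (pos - neg) / matched

def pct_change(start_price: float, end_price: float) -> float:
    """Percentage change in the stock price over the period."""
    return 100.0 * (end_price - start_price) / start_price

if __name__ == "__main__":
    tweets = [
        "Company X announces renewable energy plan, very sustainable",
        "Regulators probe Company X emissions scandal",
        "Ethical sourcing praised in green supply chain report",
    ]
    # Average the per-document scores into a single index for the period.
    index = sum(map(sentiment_score, tweets)) / len(tweets)
    print(f"sentiment index: {index:+.2f}")
    print(f"price change: {pct_change(100.0, 104.5):+.2f}%")
```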


Marketing & Strategy: Trends for 2022 and Beyond

#artificialintelligence

The marketing landscape is, as always, in a state of flux. Last year's most important trends have not even matured yet and everyone is already jumping on the latest hype bandwagon. So which developments are really fundamental? What do you really need to take into account when drawing up and fine-tuning your marketing strategy? We have done our research and compiled a list of the trends we want to consider for ourselves and our customers in 2022. What is fundamentally changing in the field of marketing?


Rob Reich: AI developers need a code of responsible conduct

#artificialintelligence

Rob Reich wears many hats: political philosopher, director of the McCoy Family Center for Ethics in Society, and associate director of the Stanford Institute for Human-Centered Artificial Intelligence. In recent years, Reich has delved deeply into the ethical and political issues posed by revolutionary technological advances in artificial intelligence (AI). His work is not always easy for technologists to hear. In his book, System Error: Where Big Tech Went Wrong and How We Can Reboot, Reich and his co-authors (computer scientist Mehran Sahami and social scientist Jeremy M. Weinstein) argued that tech companies and developers are so fixated on "optimization" that they often trample on human values.


C-level executives should be responsible for AI ethics in organizations

#artificialintelligence

AI ethics has always been a topic of concern for most organizations hoping to leverage the technology in their use cases. While AI has improved over the years, the reality is that it has become integral to products and services, and some organizations are now looking to develop AI codes of ethics. While the whole notion of AI ethics is still debatable in many ways, the use of AI cannot be held back, especially with the world becoming increasingly influenced by modern technologies. Last year, UNESCO member states adopted the first-ever global agreement on the ethics of AI. The guidelines define common values and principles to guide the construction of the legal infrastructure needed to ensure the healthy development of AI. "Emerging technologies such as AI have proven their immense capacity to deliver for good. However, its negative impacts that are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected," stated UNESCO.


Shaping Ethical Computing Cultures

Communications of the ACM

Public concern about computer ethics and worry about the social impacts of computing have fomented the "techlash." Newspaper headlines describe company data scandals and breaches; the ways that communication platforms promote social division and radicalization; government surveillance using systems developed by private industry; machine learning algorithms that reify entrenched racism, sexism, cisnormativity, ableism, and homophobia; and mounting concerns about the environmental impact of computing resources. How can we change the field of computing so that ethics is as central a concern as growth, efficiency, and innovation? There is no single intervention that can change an entire field: instead, broad change will take a combination of guidelines, governance, and advocacy. None is easy and each raises complex questions, but each approach represents a tool for building an ethical culture of computing.


The nanomafia: nanotechnology's global network of organized crime

#artificialintelligence

Nanotechnology is the science, engineering, and technology developed at the nanoscale, around 1 to 100 nanometers. One of nanotechnology's main applications is nanobots, machines that can construct and manipulate objects at the atomic level and that are capable of moving through the circulatory system.1 Nanotechnology has become a billion-dollar industry, and since it has multiple potential applications in human beings, there is great interest in human experimentation. However, nanotechnology acts at the atomic level, and for that reason experimentation in humans is high-risk, which causes an evident lack of volunteers. Therefore, transnational nanotechnology companies would be resorting to criminal methods to obtain human experimentation subjects; thus, they would be using violence, fraud, extortion, and organized crime.2–4 Recent research reveals evidence that transnational technology companies, in illicit association with the governments of the USA, the European Community, and China, as well as corrupt Latin American governments, have created an organization that is carrying out, mainly in Latin America, secret, forced, and illicit neuroscientific human experimentation with invasive neurotechnology, brain nanobots, microchips, and implants to execute neuroscientific projects,2–5 which may even have led scientists to win Nobel Prizes in Medicine6 based on this illicit human experimentation at the expense of Latin Americans' health.


Socially Responsible AI Algorithms: Issues, Purposes, and Challenges

Cheng, Lu | Varshney, Kush R. (IBM Research -- Thomas J. Watson Research Center) | Liu, Huan (Arizona State University)

Journal of Artificial Intelligence Research

In the current era, people and society have grown increasingly reliant on artificial intelligence (AI) technologies. AI has the potential to drive us towards a future in which all of humanity flourishes. It also comes with substantial risks for oppression and calamity. Discussions about whether we should (re)trust AI have repeatedly emerged in recent years and in many quarters, including industry, academia, healthcare, services, and so on. Technologists and AI researchers have a responsibility to develop trustworthy AI systems. They have responded with great effort to design more responsible AI algorithms. However, existing technical solutions are narrow in scope and have been primarily directed towards algorithms for scoring or classification tasks, with an emphasis on fairness and unwanted bias. To build long-lasting trust between AI and human beings, we argue that the key is to think beyond algorithmic fairness and connect major aspects of AI that potentially cause AI’s indifferent behavior. In this survey, we provide a systematic framework of Socially Responsible AI Algorithms that aims to examine the subjects of AI indifference and the need for socially responsible AI algorithms, define the objectives, and introduce the means by which we may achieve these objectives. We further discuss how to leverage this framework to improve societal well-being through protection, information, and prevention/mitigation. This article appears in the special track on AI & Society.