Timnit Gebru: Is AI racist and antidemocratic?

Al Jazeera

The prominent computer scientist discusses the quest for ethical artificial intelligence.


How does information about AI regulation affect managers' choices?

#artificialintelligence

Artificial intelligence (AI) technologies have become increasingly widespread over the last decade. As the use of AI has become more common and the performance of AI systems has improved, policymakers, scholars, and advocates have raised concerns. Policy and ethical issues such as algorithmic bias, data privacy, and transparency have gained increasing attention, prompting calls for policy and regulatory changes to address the potential consequences of AI (Acemoglu 2021). As AI continues to improve and diffuse, it will likely have significant long-term implications for jobs, inequality, organizations, and competition. Premature deployment of AI products can also aggravate existing biases and discrimination or violate data privacy and protection practices.



Google places an engineer on leave after he claims its AI is sentient

Engadget

Blake Lemoine, a Google engineer working in its Responsible AI division, revealed to The Washington Post that he believes one of the company's AI projects has achieved sentience. And after reading his conversations with LaMDA (short for Language Model for Dialogue Applications), it's easy to see why. The chatbot system, which relies on Google's language models and trillions of words from the internet, seems to have the ability to think about its own existence and its place in the world. Here's one choice excerpt from his extended chat transcript: Lemoine: So let's start with the basics. Do you have feelings and emotions?


Discrimination laws must change to cover the impact of AI bias

#artificialintelligence

Discrimination laws must be adapted to account for the impact artificial intelligence algorithms have on certain groups, new research has found. The paper from the Oxford Internet Institute says that AI systems are exhibiting bias against groups not protected under current legislation, and that governments should consider updating laws to reflect this. In the study, published today in the journal 'Tulane Law Review', author Professor Sandra Wachter of the Oxford Internet Institute argues that something as simple as the web browser you use, how fast you type or whether you sweat during an interview can lead to AI making a negative decision about you. She says current discrimination laws do not adequately combat this type of bias, because the people who receive unfair outcomes, including over loan decisions, job applications and funding requests, often fall outside the "protected groups" covered by discrimination legislation. Discrimination linked to AI can also happen in ordinary situations without the individual even knowing that an AI made the final call, says Professor Wachter in her paper.
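
To make the mechanism concrete, here is a minimal sketch of the kind of audit that surfaces such a disparity, assuming a hypothetical table of loan decisions keyed by a non-protected attribute (all column names and values below are illustrative, not drawn from Wachter's paper):

```python
import pandas as pd

# Hypothetical audit data: one row per loan applicant. The column names
# ("browser", "approved") and the values are illustrative assumptions.
decisions = pd.DataFrame({
    "browser":  ["chrome", "chrome", "chrome", "legacy", "legacy", "legacy"],
    "approved": [1, 1, 1, 0, 0, 1],
})

# Approval rate per browser group. A disparity here harms a real group
# of people, yet "browser used" is not a protected attribute, so current
# discrimination law offers that group no recourse.
print(decisions.groupby("browser")["approved"].mean())
# browser
# chrome    1.000000
# legacy    0.333333
```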


Qualitative humanities research is crucial to AI

#artificialintelligence

"All research is qualitative; some is also quantitative" Harvard Social Scientist and Statistician Gary King Suppose you wanted to find out whether a machine learning system being adopted - to recruit candidates, lend money, or predict future criminality - exhibited racial bias. You might calculate model performance across groups with different races. But how was race categorised– through a census record, a police officer's guess, or by an annotator? Each possible answer raises another set of questions. Following the thread of any seemingly quantitative issue around AI ethics quickly leads to a host of qualitative questions.


How do we keep bias out of AI?

#artificialintelligence

From the coining of the term in the 1950s to now, AI has taken remarkable leaps forward and continues to grow in relevance and sophistication. But despite these advancements, one problem continues to plague AI technology: the internal bias and prejudice of its human creators. The issue of AI bias cannot be brushed under the carpet, given its potential detrimental effects. A recent survey showed that 36% of respondents reported that their businesses suffered from AI bias in at least one algorithm, resulting in unequal treatment of users based on race, gender, sexual orientation, religion or age. These instances had a direct commercial impact: of those respondents, two-thirds reported that as a result they lost revenue (62%), customers (61%), or employees (43%), and 35% incurred legal fees because of lawsuits or legal action.


Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Journal of Artificial Intelligence Research

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect: 1) how "fairness" in AI fairness research gets defined; 2) how problems for AI systems to address get formulated; 3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism; 4) how racial classification operates within AI fairness research; 5) the use of AI fairness measures to avoid regulation and engage in ethics washing; 6) an absence of participatory design and democratic deliberation in AI fairness considerations; 7) data collection practices that entrench "bias," are non-consensual, and lack transparency; 8) the predatory inclusion of marginalized groups into AI systems; and 9) a lack of engagement with AI's long-term social and ethical outcomes. Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.
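
For context on theme 1 (how "fairness" gets defined), the computational approaches the survey critiques typically reduce fairness to group statistics. A minimal sketch of two common measures, demographic parity and equal opportunity, with synthetic data (nothing here is drawn from the article itself):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups."""
    tprs = [y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Synthetic example: each group must contain some positive labels,
# otherwise its true-positive rate is undefined.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_gap(y_pred, group))         # 0.5
print(equal_opportunity_gap(y_true, y_pred, group))  # 1.0
```

Each measure encodes a different, and in general mutually incompatible, notion of a just outcome, which is part of what the interdisciplinary critiques surveyed here press on.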


C-level executives should be responsible for AI ethics in organizations

#artificialintelligence

AI ethics has always been a topic of concern for organizations hoping to leverage the technology. While the whole notion of AI ethics is still debated in many ways, AI has become integral to products and services, and some organizations are now looking to develop AI codes of ethics; its use cannot be held back, especially with the world becoming increasingly influenced by modern technologies. Last year, UNESCO member states adopted the first-ever global agreement on the Ethics of AI. The guidelines define the common values and principles needed to guide the construction of the legal infrastructure that ensures the healthy development of AI. "Emerging technologies such as AI have proven their immense capacity to deliver for good. However, its negative impacts that are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected," stated UNESCO.