

Qualitative humanities research is crucial to AI

#artificialintelligence

"All research is qualitative; some is also quantitative" Harvard Social Scientist and Statistician Gary King Suppose you wanted to find out whether a machine learning system being adopted - to recruit candidates, lend money, or predict future criminality - exhibited racial bias. You might calculate model performance across groups with different races. But how was race categorised– through a census record, a police officer's guess, or by an annotator? Each possible answer raises another set of questions. Following the thread of any seemingly quantitative issue around AI ethics quickly leads to a host of qualitative questions.


How do we keep bias out of AI?

#artificialintelligence

From the coining of the term back in the 1950s to now, AI has taken remarkable leaps forward and only continues to grow in relevance and sophistication. But despite these advancements, there's one problem that continues to plague AI technology – the internal bias and prejudice of its human creators. The issue of AI bias cannot be brushed under the carpet, given the potential detrimental effects it can have. A recent survey showed that 36% of respondents reported that their businesses suffered from AI bias in at least one algorithm, resulting in unequal treatment of users based on race, gender, sexual orientation, religion or age. These instances had a direct commercial impact: of those respondents, 62% reported lost revenue, 61% lost customers, 43% lost employees, and 35% incurred legal fees because of lawsuits or legal action.


Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches

Journal of Artificial Intelligence Research

This survey article assesses and compares existing critiques of current fairness-enhancing technical interventions in machine learning (ML) that draw from a range of non-computing disciplines, including philosophy, feminist studies, critical race and ethnic studies, legal studies, anthropology, and science and technology studies. It bridges epistemic divides in order to offer an interdisciplinary understanding of the possibilities and limits of hegemonic computational approaches to ML fairness for producing just outcomes for society's most marginalized. The article is organized according to nine major themes of critique wherein these different fields intersect:

1) how "fairness" in AI fairness research gets defined;
2) how problems for AI systems to address get formulated;
3) the impacts of abstraction on how AI tools function and its propensity to lead to technological solutionism;
4) how racial classification operates within AI fairness research;
5) the use of AI fairness measures to avoid regulation and engage in ethics washing;
6) an absence of participatory design and democratic deliberation in AI fairness considerations;
7) data collection practices that entrench "bias," are non-consensual, and lack transparency;
8) the predatory inclusion of marginalized groups into AI systems; and
9) a lack of engagement with AI's long-term social and ethical outcomes.

Drawing from these critiques, the article concludes by imagining future ML fairness research directions that actively disrupt entrenched power dynamics and structural injustices in society.


C-level executives should be responsible for AI ethics in organizations

#artificialintelligence

AI ethics has always been a topic of concern for organizations hoping to leverage the technology. While AI has improved over the years, the reality is that it has become integral to products and services, with some organizations now looking to develop AI codes of ethics. While the whole notion of AI ethics is still debatable in many ways, the use of AI cannot be held back, especially with the world becoming increasingly influenced by modern technologies. Last year, UNESCO member states adopted the first-ever global agreement on the Ethics of AI. The guidelines define the common values and principles to guide the construction of the necessary legal infrastructure to ensure the healthy development of AI. "Emerging technologies such as AI have proven their immense capacity to deliver for good. However, its negative impacts that are exacerbating an already divided and unequal world, should be controlled. AI developments should abide by the rule of law, avoiding harm, and ensuring that when harm happens, accountability and redressal mechanisms are at hand for those affected," stated UNESCO.


We used game theory to determine which AI projects should be regulated

#artificialintelligence

Ever since artificial intelligence (AI) made the transition from theory to reality, research and development centers across the world have been rushing to come up with the next big AI breakthrough. This competition is sometimes called the "AI race". In practice, though, there are hundreds of "AI races" heading towards different objectives. Some research centers are racing to produce digital marketing AI, for example, while others are racing to pair AI with military hardware. Some races are between private companies and others are between countries.
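The excerpt stops before describing the authors' model, but a toy two-player race game, with entirely hypothetical payoffs, shows the kind of structure game theory exposes: when cutting safety corners wins the race, both players racing unsafely can be the only equilibrium, which is the usual argument for outside regulation.

```python
# A toy illustration (not the authors' model) of an "AI race" as a
# two-player game: each lab chooses to develop "safe" (slower) or
# "fast" (cutting safety corners). Payoff numbers are hypothetical,
# chosen only to show the structure.
from itertools import product

# payoffs[(row_choice, col_choice)] = (row_payoff, col_payoff)
payoffs = {
    ("safe", "safe"): (3, 3),   # both slow down; shared, safe benefit
    ("safe", "fast"): (0, 4),   # the corner-cutter wins the race
    ("fast", "safe"): (4, 0),
    ("fast", "fast"): (1, 1),   # both race; the risk erodes the prize
}

def is_nash(row: str, col: str) -> bool:
    """True if neither player gains by unilaterally deviating."""
    r, c = payoffs[(row, col)]
    return all(payoffs[(alt, col)][0] <= r for alt in ("safe", "fast")) and \
           all(payoffs[(row, alt)][1] <= c for alt in ("safe", "fast"))

for row, col in product(("safe", "fast"), repeat=2):
    print(row, col, "Nash equilibrium" if is_nash(row, col) else "")
```

With these payoffs the only Nash equilibrium is (fast, fast), even though both players would prefer (safe, safe) - a prisoner's dilemma, and the standard game-theoretic case for regulating a race.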


Regulating AI Through Data Privacy

Stanford HAI

In the absence of a national data privacy law in the U.S., California has been more active than any other state in efforts to fill the gap on a state level. The state enacted one of the nation's first data privacy laws, and the California Privacy Rights Act (Proposition 24), passed in 2020, takes effect in 2023. A new state agency created by the law, the California Privacy Protection Agency, recently issued an invitation for public comment on the many open questions surrounding the law's implementation.

Our team of Stanford researchers, graduate students, and undergraduates examined the proposed law and concluded that data privacy can be a useful tool in regulating AI, but California's new law must be more narrowly tailored to prevent overreach, focus more on AI model transparency, and ensure people's rights to delete their personal information are not usurped by the use of AI. Additionally, we suggest that the regulation's proposed transparency provision, which requires companies to explain to consumers the logic underlying their "automated decision making" processes, could be more powerful if it instead focused on providing greater transparency about the data used to enable such processes. Finally, we argue that the data embedded in machine-learning models must be explicitly included when considering consumers' rights to delete, know, and correct their data.


AI researcher says police tech suppliers are hostile to transparency

#artificialintelligence

Artificial intelligence (AI) researcher Sandra Wachter says that although the House of Lords inquiry into police technology "was a great step in the right direction" and succeeded in highlighting the major concerns around police AI and algorithms, the conflict of interest between criminal justice bodies and their suppliers could still hold back meaningful change. Wachter, who was invited to the inquiry as an expert witness, is an associate professor and senior research fellow at the Oxford Internet Institute who specialises in the law and ethics of AI. Speaking with Computer Weekly, Wachter said she is hopeful that at least some of the recommendations will be taken forward into legislation, but is worried about the impact of AI suppliers' hostility to transparency and openness. "I am worried about it mainly from the perspective of intellectual property and trade secrets," she said. "There is an unwillingness or hesitation in the private sector to be completely open about what is actually going on for various reasons, and I think that might be a barrier to implementing the inquiry's recommendations."


Artificial Intelligence Can Help Combat Systemic Bias

#artificialintelligence

Facial recognition algorithms – which have repeatedly been demonstrated to be less accurate for people with darker skin – are just one example of how racial bias gets replicated within and perpetuated by emerging technologies. There's an urgency as AI is used to make high-stakes decisions, and the stakes are higher because new systems can replicate historical biases at scale. One of the fundamental questions of this work is how to build AI models that deal with systemic inequality more effectively. Inequality is perpetuated by technology in many ways across many sectors.
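The article does not give a method, but one common quantitative screen for this kind of replicated bias is the disparate impact ratio (the "four-fifths rule"). A minimal sketch, with hypothetical groups and predictions:

```python
# Disparate impact ratio: the lowest positive-outcome rate across
# groups divided by the highest. Group labels and predictions below
# are hypothetical illustration data, not from the article.

def disparate_impact(preds_by_group: dict[str, list[int]]) -> float:
    """Ratio of the lowest to highest positive-outcome rate across groups."""
    rates = {g: sum(p) / len(p) for g, p in preds_by_group.items()}
    return min(rates.values()) / max(rates.values())

preds = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% favorable outcomes
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% favorable outcomes
}
ratio = disparate_impact(preds)
print(f"disparate impact ratio: {ratio:.2f}")  # below 0.8 flags potential bias
```

A ratio below 0.8 is a conventional red flag, though, as the article suggests, passing such a screen does not mean a system is free of systemic bias.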


Committee on AI says EU has 'fallen behind' in global tech leadership race

#artificialintelligence

The EU needs to act as a 'global standard-setter' in AI, according to a new report that also warned about the risks of mass surveillance. A new EU report says public debate on the use of artificial intelligence (AI) should focus on the technology's "enormous potential" to complement humans. The European Parliament's special committee on artificial intelligence in a digital age adopted its final recommendations yesterday (22 March) after 18 months of inquiries. The committee's draft text notes that the world is on the verge of "the fourth industrial revolution", driven by an abundance of data combined with powerful algorithms. But it adds that the EU has "fallen behind" in the global race for tech leadership, which poses a risk that tech standards could be developed in the future by "non-democratic actors".


Now that computers connect us all, for better and worse, what's next?

#artificialintelligence

This article was written, edited and designed on laptop computers. Such foldable, transportable devices would have astounded computer scientists just a few decades ago, and seemed like sheer magic before that. The machines contain billions of tiny computing elements, running millions of lines of software instructions, collectively written by countless people across the globe. You click or tap or type or speak, and the result seamlessly appears on the screen.

Computers were once so large they filled rooms. Now they're everywhere and invisible, embedded in watches, car engines, cameras, televisions and toys. They manage electrical grids, analyze scientific data and predict the weather. The modern world would be impossible without them.

Scientists aim to make computers faster and programs more intelligent, while deploying technology in an ethical manner. Their efforts build on more than a century of innovation. In 1833, English mathematician Charles Babbage conceived a programmable machine that presaged today's computing architecture, featuring a "store" for holding numbers, a "mill" for operating on them, an instruction reader and a printer. This Analytical Engine also had logical functions like branching (if X, then Y).