Rethink Model Re-Basin and the Linear Mode Connectivity

Qu, Xingyu, Horvath, Samuel

arXiv.org Artificial Intelligence

Recent studies suggest that with sufficiently wide models, most SGD solutions can, up to permutation, converge into the same basin. This phenomenon, known as the model re-basin regime, has significant implications for model averaging. However, current re-basin strategies are limited in effectiveness due to a lack of comprehensive understanding of underlying mechanisms. Addressing this gap, our work revisits standard practices and uncovers the frequent inadequacies of existing matching algorithms, which we show can be mitigated through proper re-normalization. By introducing a more direct analytical approach, we expose the interaction between matching algorithms and re-normalization processes. This perspective not only clarifies and refines previous findings but also facilitates novel insights. For instance, it connects the linear mode connectivity to pruning, motivating a lightweight yet effective post-pruning plug-in that can be directly merged with any existing pruning techniques. Our implementation is available at https://github.com/XingyuQu/rethink-re-basin.
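The core operation behind re-basin – aligning two networks' hidden units by permutation before averaging – can be sketched for a single layer. The snippet below is our own minimal illustration (the function name and toy weights are hypothetical, not from the paper), using SciPy's linear-sum-assignment solver as the matching algorithm:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_hidden_units(w_a, w_b):
    """Find the permutation of model B's hidden units (rows of its weight
    matrix) that best aligns them with model A's, by maximizing total
    pairwise similarity under a one-to-one assignment."""
    cost = w_a @ w_b.T                      # similarity of every unit pair
    _, perm = linear_sum_assignment(cost, maximize=True)
    return perm

# Toy check: model B is model A with its hidden units shuffled.
w_a = np.array([[1., 0., 0.],
                [0., 1., 0.],
                [0., 0., 1.],
                [1., 1., 1.]])              # 4 hidden units, 3 inputs
shuffle = np.array([2, 0, 3, 1])
w_b = w_a[shuffle]                          # permuted copy of model A

perm = match_hidden_units(w_a, w_b)
print(np.allclose(w_a, w_b[perm]))          # True: permutation recovered
```

In a full network the same permutation must also be applied to the columns of the next layer's weights so the function computed is unchanged; the paper's point is that how this matching interacts with re-normalization determines whether the averaged model lands in a common basin.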


FairGen: Fair Synthetic Data Generation

Chaudhari, Bhushan, Chaudhary, Himanshu, Agarwal, Aakash, Meena, Kamna, Bhowmik, Tanmoy

arXiv.org Artificial Intelligence

With the rising adoption of machine learning across domains such as banking, pharmaceuticals, and ed-tech, it has become critically important to adopt responsible AI methods to ensure models do not unfairly discriminate against any group. Given the lack of clean training data, generative adversarial techniques are preferred for generating synthetic data, with several state-of-the-art architectures readily available across various domains, from unstructured data such as text and images to structured datasets modelling fraud detection and more. These techniques overcome several challenges, such as class imbalance, limited training data, and restricted access to data due to privacy issues. Existing work on generating fair data either works only for a certain GAN architecture or is very difficult to tune across GANs. In this paper, we propose a pipeline to generate fairer synthetic data independent of the GAN architecture. The proposed pipeline utilizes a pre-processing algorithm to identify and remove bias-inducing samples. In particular, we claim that while generating synthetic data most GANs amplify bias present in the training data, but by removing these bias-inducing samples, GANs focus more on real, informative samples. Our experimental evaluation on two open-source datasets demonstrates that the proposed pipeline generates fair data, along with improved performance in some cases.
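The abstract does not specify FairGen's pre-processing algorithm, but the general idea of filtering bias-inducing samples before training a generator can be illustrated with a simple preferential-sampling-style filter (our own construction, not the paper's method): sub-sample each (group, label) cell down to the size expected if the label were independent of the protected attribute.

```python
import numpy as np

def drop_bias_inducing(y, a, seed=0):
    """Toy pre-filter: sub-sample each (group, label) cell down to the
    size expected if label y and protected attribute a were independent,
    weakening the group-label correlation a GAN could amplify.
    Returns sorted indices of the samples to keep."""
    rng = np.random.default_rng(seed)
    n, keep = len(y), []
    for g in np.unique(a):
        for lbl in np.unique(y):
            cell = np.flatnonzero((a == g) & (y == lbl))
            target = int(round((a == g).mean() * (y == lbl).mean() * n))
            keep.extend(rng.choice(cell, size=min(len(cell), target),
                                   replace=False).tolist())
    return np.sort(np.array(keep))

def disparity(y, a):
    """Gap in positive-label rate between groups a=0 and a=1."""
    return abs(y[a == 0].mean() - y[a == 1].mean())

# Biased toy data: group 0 is mostly labeled 1, group 1 mostly labeled 0.
a = np.array([0] * 50 + [1] * 50)
y = np.array([1] * 40 + [0] * 10 + [1] * 10 + [0] * 40)

idx = drop_bias_inducing(y, a)
print(disparity(y, a), "->", disparity(y[idx], a[idx]))  # gap shrinks
```

Because this filter only removes samples, it reduces rather than eliminates the group-label correlation; any real pipeline would tune how aggressively to prune against the loss of informative training data.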


Public Support Requested to Remove Biases Based on Race in AI for Healthcare

#artificialintelligence

The general public is being asked to help eradicate biases based on race and other characteristics of underprivileged communities in artificial intelligence (AI) algorithms for healthcare. Health scientists are seeking support to ensure that 'minoritized' communities – those actively disadvantaged by social constructs – are not excluded from the future benefits of using AI in healthcare. The scientists, led by the University of Birmingham and University Hospitals Birmingham, recently reported in Nature Medicine the launch of a consultation on a set of principles that they anticipate will cut biases said to be present in AI algorithms. There is increasing evidence that certain AI algorithms do not work as well for specific groups of people – mainly those in minoritized racial and ethnic communities. Some of these failures stem from biases in the datasets used to create the AI algorithms.


How to Remove Bias in Machine Learning Training Data

#artificialintelligence

Much has changed in the AI/ML world, but the principle of 'garbage in, garbage out' still holds. Any algorithm is only as good as its training data. And no training data is without bias, not even data generated through automation. In the past, many machine learning algorithms have been unfair to certain religions, races, genders, ethnicities, and economic statuses, among others. IBM's Watson supercomputer, which gave suggestions to doctors using a dataset of medical research papers, was found to favor reputable studies only. Amazon's recruiting algorithm was found to favor men over women.


How To Remove Bias From AI Models - AI Summary

#artificialintelligence

"Unfortunately, there's no way to quantify the size of this problem," said Brandon Purcell, a Forrester vice president, principal analyst, and co-author of the report, adding "… it's true that we are far from artificial general intelligence, but AI is being used to make critical decisions about people at scale today--from credit decisioning, to medical diagnoses, to criminal sentencing." The stakeholders involved could include business leaders, lawyers, security and risk specialists, as well as activists, nonprofits, members of the community, and consumers. The report also stresses accounting for intersectionality, or how different elements of a person's identity combine to compound the impacts of bias or privilege. "The key is in adopting best practices across the AI lifecycle, from the very conception of the use case, through data understanding, modeling, evaluation, and into deployment and monitoring," Purcell said.


How to remove bias from AI models

#artificialintelligence

As AI becomes more pervasive, AI-based discrimination is getting the attention of policymakers and corporate leaders, but keeping it out of AI models in the first place is harder than it sounds. According to a new Forrester report, Put the AI in "Fair" with the Right Approach to Fairness, most organizations adhere to fairness in principle but fail in practice. "Fairness" has multiple meanings: "To determine whether or not a machine learning model is fair, a company must decide how it will quantify and evaluate fairness," the report said. "Mathematically speaking, there are at least 21 different methods for measuring fairness." Sensitive attributes are missing: "The essential paradox of fairness in AI is the fact that companies often don't capture protected attributes like race, sexual orientation, and veteran status in their data because they're not supposed to base decisions on them," the report said.
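Two of those many fairness definitions can be stated in a few lines. The sketch below is our own illustration (not taken from the report) of demographic parity and equal opportunity, and of why the definitions can disagree on the very same predictions:

```python
import numpy as np

def demographic_parity_diff(y_pred, a):
    """Gap in positive-prediction rate between groups a=0 and a=1."""
    return abs(y_pred[a == 0].mean() - y_pred[a == 1].mean())

def equal_opportunity_diff(y_true, y_pred, a):
    """Gap in true-positive rate between groups a=0 and a=1."""
    tpr = lambda g: y_pred[(a == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])
a      = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# The same predictions satisfy one definition and violate another:
print(demographic_parity_diff(y_pred, a))         # 0.0
print(equal_opportunity_diff(y_true, y_pred, a))  # 0.5
```

Here both groups receive positive predictions at the same rate (demographic parity holds), yet qualified members of group 0 are caught only half as often as those of group 1 – exactly the kind of trade-off that forces a company to decide which definition of fairness it will optimize.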


Can We Ever Remove Bias From Artificial Intelligence?

#artificialintelligence

The algorithms meant to make life easy could further divide us if we're not paying attention. The current moment is asking us to evaluate almost every systemic issue in our society. While it might be uncomfortable, the only way through it is to look at the systems that control who wins and who loses and ask, "Are they biased?" It's convenient to think that our work in technology is above these issues. A lot of folks in tech do this work to change the world or help communities in need. But just like the rest of society, we need to take this moment of self-reflection seriously, lest our failures be judged by the next generation as harshly as we're judging our elders now.


5 Ways Organizations can Remove Bias in Machine Learning Models

#artificialintelligence

Machine learning is frequently seen as the silver bullet for various issues across numerous industries. Machine learning advancements have made it possible to read radiology scans more rapidly and precisely, recognize high-risk patterns, and reduce providers' administrative burden. As organizations step up the use of ML-enabled systems in their everyday operations, they become increasingly dependent on those systems to help them make crucial business decisions. Sometimes the ML systems work independently, making it particularly important that the automated decision-making works as intended. Human bias is an unavoidable reality in machine learning.


How AI can remove bias from decision-making

#artificialintelligence

The UK government recently published a review of algorithmic bias – an important and even crucial subject as ever more decision-making moves from wetware to silicon. However, it would have been useful if they'd understood what Gary Becker told us all about discrimination itself – work for which he won the Nobel prize in economics. Almost all the things they are worrying about solve themselves within his logical structure. First, though, a linguistic distinction – let's examine the difference between algorithms and artificial intelligence (AI). An algorithm doesn't have to be in code at all; it's a set of rules by which to make a decision – usually, almost always, derived from the current methods by which we make such decisions, just formalised or even coded.