

Randomization Can Reduce Both Bias and Variance: A Case Study in Random Forests

Liu, Brian, Mazumder, Rahul

arXiv.org Machine Learning

We study the often overlooked phenomenon, first noted in Breiman (2001), that random forests appear to reduce bias compared to bagging. Motivated by an interesting paper by Mentch and Zhou (2020), where the authors argue that random forests reduce effective degrees of freedom and only outperform bagging ensembles in low signal-to-noise ratio (SNR) settings, we explore how random forests can uncover patterns in the data missed by bagging. We empirically demonstrate that in the presence of such patterns, random forests reduce bias along with variance and increasingly outperform bagging ensembles when SNR is high. Our observations offer insights into the real-world success of random forests across a range of SNRs and enhance our understanding of the difference between random forests and bagging ensembles with respect to the randomization injected into each split. Our investigations also yield practical insights into the importance of tuning mtry in random forests.
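The role of mtry can be illustrated with a minimal pure-Python sketch (not the authors' code; the impurity gains below are made up). When every split considers all features, as in bagging, the single strongest feature wins every split; restricting each split to a random subset of mtry candidates lets weaker features be chosen too, which is the mechanism by which randomization can surface patterns that bagging misses.

```python
import random

def choose_split(gains, mtry, rng):
    """Pick the best feature among a random subset of mtry candidates."""
    candidates = rng.sample(range(len(gains)), mtry)
    return max(candidates, key=lambda f: gains[f])

def split_counts(gains, mtry, n_splits=10_000, seed=0):
    """Count how often each feature is chosen over many simulated splits."""
    rng = random.Random(seed)
    counts = [0] * len(gains)
    for _ in range(n_splits):
        counts[choose_split(gains, mtry, rng)] += 1
    return counts

# Toy impurity gains: feature 0 dominates; features 1 and 2 carry weaker signal.
gains = [0.9, 0.5, 0.1]

bagging = split_counts(gains, mtry=3)  # all features considered, as in bagging
forest = split_counts(gains, mtry=1)   # one random candidate per split
print(bagging)  # feature 0 wins every split
print(forest)   # features 1 and 2 now get picked as well
```

With mtry equal to the number of features, feature 0 is selected every time; with mtry=1, all features appear in the ensemble, hinting at why tuning mtry matters in practice.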


How 4 Black Founders Fund Recipients Are Building With AI - Liwaiwai

#artificialintelligence

Startups are key to solving today's biggest challenges and a huge driver of innovation -- and artificial intelligence is one of their sharpest tools. Virtual assistants, customized content, traffic apps, spell check, mobile check deposit and live captioning constitute just a small fraction of the everyday solutions using AI -- and many of these technologies were first developed by startups. AI learns from those who build it, so it is critical to have people of all backgrounds helping shape the technology to ensure its effectiveness, reduce bias and create better solutions for everyone. As Director of Product Inclusion and Equity at Google, I love to see Black founders tap into the power of our Google AI tech to help their communities and transform the way our products work and operate. In honor of Black History Month in the U.S., I asked four Google for Startups Black Founders Fund recipients from around the world and across different industries how they're using Google AI technology to address societal challenges.


Magnit BrandVoice: 3 Key Ways Artificial Intelligence Helps Organizations Drive DE&I

#artificialintelligence

Debates about the efficacy and ethics of using artificial intelligence (AI) --or not using it--when making decisions about hiring, promotions, and diversity, equity and inclusion (DE&I) have been going on for years. And while the AI Bill of Rights has recognized and taken steps to protect vulnerable populations from unfair practices, new emerging AI technologies have led to a turning point of increasing adoption in the business landscape. But how can organizations ensure they are reaping the benefits of using AI to dramatically reduce bias and drive DE&I, while avoiding any potential pitfalls? When developed and used with intention, AI-powered technologies can be transformational tools for increasing diversity within businesses. Let's take a look at how leading organizations can leverage cutting-edge AI technology to augment human expertise and help organizations achieve their DE&I initiatives.


AI-Powered Hiring Tools Have Failed To Reduce Bias, New Study Claims - AI Summary

#artificialintelligence

Vendors of such tools claim that they eliminate gender and ethnic biases in hiring by using algorithms that analyze job applicants' speech patterns, expressions, and other traits. However, in a recent report published in Philosophy and Technology, researchers from Cambridge's Centre for Gender Studies contend that AI recruiting tools are superficial, amounting to "automated pseudoscience". The Cambridge team asserts that because the AI is programmed to look for the employer's ideal applicant, using it to narrow candidate pools may ultimately encourage uniformity rather than diversity in the workforce. "By claiming that racism, sexism, and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world," co-author Dr. Eleanor Drage said in a statement. The researchers noted that many businesses now analyze candidate videos using AI, evaluating applicants for the "big five" personality traits: extroversion, openness, agreeableness, conscientiousness, and neuroticism.


AI-Powered Hiring Tools Have Failed to Reduce Bias, New Study Claims

#artificialintelligence

In recent years, there has been an increase in the use of AI tools advertised as a solution to the lack of diversity in the workforce. These tools range from chatbots to CV scrapers built to aid companies in hiring. Vendors of such tools claim that they eliminate gender and ethnic biases in hiring by using algorithms that analyze job applicants' speech patterns, expressions, and other traits. However, in a recent report published in Philosophy and Technology, researchers from Cambridge's Centre for Gender Studies contend that AI recruiting tools are superficial, amounting to "automated pseudoscience". They call this a risky instance of "technosolutionism": using technology to address complex issues like discrimination without making the necessary investments or changes to organizational culture.


Algorithmic bias in AI

#artificialintelligence

Algorithmic bias in AI, also called machine learning bias, occurs when an algorithm produces systematically skewed outputs. The bias rarely stems from the design of the algorithm alone; more often it is baked in through the data the model is trained on, so a model trained on skewed data reproduces that skew. Real-world examples appear in social media platforms and search engines, where biased algorithms can repeatedly rank, recommend, or classify content in ways that produce unfair outcomes.
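One common way to surface data-driven bias before training anything is to audit outcome rates per group. The sketch below uses entirely synthetic hiring records (the groups, counts, and the 0.7 gap are illustrative assumptions, not real data) to compute a simple demographic parity gap:

```python
# Hypothetical audit: compare selection rates across groups in a
# synthetic dataset; a large gap signals skew a model would inherit.

def selection_rate(records, group):
    """Fraction of positive outcomes for one group."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

# Synthetic records of (group, selected): group "B" was rarely selected.
records = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 10 + [("B", 0)] * 90)

rate_a = selection_rate(records, "A")  # 0.8
rate_b = selection_rate(records, "B")  # 0.1
gap = abs(rate_a - rate_b)             # demographic parity gap
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={gap:.2f}")
```

A model fit to this history would learn the disparity as if it were signal, which is why auditing the data comes before debugging the algorithm.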


3 steps businesses can take to reduce bias in AI systems

#artificialintelligence

How do you develop an ethical, unbiased AI application in an undoubtedly biased and unbalanced society? Can AI be the holy grail, helping build more balanced societies that overcome traditional inequality and exclusion? It is too early to say, and we will likely witness many trial-and-error phases before reaching a consensus on what AI may ethically be used for in our societies, and how. Much like institutional racism, which requires fundamental shifts in the overall ecosystem, the problems in AI development call for a similar change to create better outputs. Behind the development and implementation of algorithms stand developers and specific people in positions of power.


3 steps businesses can take to reduce bias in AI systems

#artificialintelligence

"Okay, Google, what's the weather today?" "Sorry, I don't understand." Does the experience of interacting with smart machines that don't respond to requests sound familiar? Such failures can leave people feeling dumbfounded, as if their intelligence were not on the same wavelength as the machines'. While selective interaction is not the intention of AI development, such incidents are likely more frequent for "minorities" in the tech world. The global artificial intelligence (AI) software market is forecast to boom in the coming years, reaching around 126 billion US dollars by 2025.


Fairness via AI: Bias Reduction in Medical Information

Dori-Hacohen, Shiri, Montenegro, Roberto, Murai, Fabricio, Hale, Scott A., Sung, Keen, Blain, Michela, Edwards-Johnson, Jennifer

arXiv.org Artificial Intelligence

Most Fairness in AI research focuses on exposing biases in AI systems. A broader lens on fairness reveals that AI can serve a greater aspiration: rooting out societal inequities from their source. Specifically, we focus on inequities in health information, and aim to reduce bias in that domain using AI. The AI algorithms under the hood of search engines and social media, many of which are based on recommender systems, have an outsized impact on the quality of medical and health information online. Therefore, embedding bias detection and reduction into these recommender systems serving up medical and health content online could have an outsized positive impact on patient outcomes and wellbeing. In this position paper, we offer the following contributions: (1) we propose a novel framework of Fairness via AI, inspired by insights from medical education, sociology and antiracism; (2) we define a new term, bisinformation, which is related to, but distinct from, misinformation, and encourage researchers to study it; (3) we propose using AI to study, detect and mitigate biased, harmful, and/or false health information that disproportionately hurts minority groups in society; and (4) we suggest several pillars and pose several open problems in order to seed inquiry in this new space. While part (3) of this work specifically focuses on the health domain, the fundamental computer science advances and contributions stemming from research efforts in bias reduction and Fairness via AI have broad implications in all areas of society.
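The idea of embedding bias reduction into a recommender could take many forms; one minimal, hypothetical sketch (the detector, item names, and scores below are all invented for illustration, not part of the paper) is to discount an item's relevance by a bias score before ranking:

```python
# Hypothetical re-ranker: penalize items a stand-in bias detector flags
# before ranking health search results. All values are illustrative.

def rerank(items, bias_score, penalty=0.5):
    """Sort items by relevance discounted by a 0-to-1 bias score."""
    return sorted(
        items,
        key=lambda it: it["relevance"] * (1 - penalty * bias_score(it)),
        reverse=True,
    )

items = [
    {"title": "peer-reviewed guidance", "relevance": 0.7, "bias": 0.0},
    {"title": "high-traffic but biased page", "relevance": 0.9, "bias": 0.8},
]

ranked = rerank(items, bias_score=lambda it: it["bias"])
print([it["title"] for it in ranked])  # the flagged page drops below the guidance
```

In practice the hard part is the detector itself, which is exactly the open research the authors call for; the re-ranking step is comparatively mechanical.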


Accounting For Diversity In Automated Gender Recognition Systems

#artificialintelligence

Developments in AI promise remarkable progress for many fields in the near future. AI increasingly influences people's opinions and behaviour in everyday life; it is no longer confined to industry but can be found in many other fields, such as healthcare, education, and retail. Nevertheless, the introduction of AI into society raises a variety of ethical, legal, and societal concerns, and within this context there remain many areas with substantial room for improvement. One concrete example is that algorithms, including automated gender recognition systems, do not always account for diversity, which may have a detrimental impact on individuals' lives. From a legal perspective, a question often raised in this context is how diversity can best be accounted for in such AI systems.