

How AI Can Help Shed Bias Baggage and Where to Start in HR

#artificialintelligence

It's hard to go to an HR conference these days and not hear about AI, diversity, or bias – or all of the above! Some people are worried that AI will take their jobs. I say AI can help people do their jobs – and avoid introducing bias in the process. Right now, HR professionals are largely overwhelmed. Too many talent acquisition teams are inundated with tedious tasks – sifting through resumes, finding talent in an environment where unemployment is so low, etc. AI can help.


When governments turn to AI: Algorithms, trade-offs, and trust

#artificialintelligence

The notion reflects an interest in bias-free decision making or, when protected classes of individuals are involved, in avoiding disparate impact on legally protected classes.


Is Facial Recognition Technology Racist? – The Tech Connoisseur

#artificialintelligence

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups. Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%).
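The disaggregated evaluation described here boils down to computing misclassification rates per (skin type, gender) subgroup rather than a single aggregate accuracy. Below is a minimal sketch of that computation, assuming a pandas DataFrame with hypothetical column names and the commercial classifier's predictions already recorded; it is illustrative, not the authors' code.

```python
# Minimal sketch: disaggregating a gender classifier's error rate by
# skin type and gender, in the spirit of the benchmark described above.
# Column names and the toy data are assumptions for illustration.
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame) -> pd.DataFrame:
    """Return the misclassification rate for each (skin_type, gender) subgroup.

    Expects columns:
      - 'gender'    : ground-truth label ('female' / 'male')
      - 'skin_type' : Fitzpatrick type bucketed as 'lighter' / 'darker'
      - 'predicted' : the classifier's output
    """
    df = df.assign(error=(df["gender"] != df["predicted"]).astype(float))
    return (
        df.groupby(["skin_type", "gender"])["error"]
          .mean()
          .rename("error_rate")
          .reset_index()
    )

# Toy usage; a real evaluation would load the balanced benchmark images
# and run each commercial API on them first.
toy = pd.DataFrame({
    "gender":    ["female", "female", "male",    "female"],
    "skin_type": ["darker", "darker", "lighter", "lighter"],
    "predicted": ["male",   "female", "male",    "female"],
})
print(subgroup_error_rates(toy))
```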


Computational algorithms that effectively reduce report defects in surgical pathology – Ye JJ, Tan MR, J Pathol Inform

#artificialintelligence

Background: Pathology report defects refer to errors in the pathology reports, such as transcription/voice recognition errors and incorrect nondiagnostic information. Examples of the latter include incorrect gender, incorrect submitting physician, incorrect description of tissue blocks submitted, report formatting issues, and so on. Over the past 5 years, we have implemented computational algorithms to identify and correct these report defects. Materials and Methods: Report texts, tissue blocks submitted, and other relevant information are retrieved from the pathology information system database. Two complementary algorithms are used to identify the voice recognition errors by parsing the gross description texts to either (i) identify previously encountered error patterns or (ii) flag sentences containing previously unused two-word sequences (bigrams).
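The second algorithm can be sketched concisely: collect every two-word sequence seen in previously signed-out reports and flag any new sentence containing a bigram outside that set. The tokenizer and function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the bigram-flagging idea: a sentence in a new gross description is
# flagged if it contains a two-word sequence never seen in prior reports.
import re

def extract_bigrams(text: str) -> set[tuple[str, str]]:
    words = re.findall(r"[a-z']+", text.lower())
    return set(zip(words, words[1:]))

def build_known_bigrams(previous_reports: list[str]) -> set[tuple[str, str]]:
    known: set[tuple[str, str]] = set()
    for report in previous_reports:
        known |= extract_bigrams(report)
    return known

def flag_sentences(report_text: str, known_bigrams: set[tuple[str, str]]) -> list[str]:
    """Return sentences containing at least one previously unused bigram."""
    sentences = re.split(r"(?<=[.!?])\s+", report_text)
    return [s for s in sentences if extract_bigrams(s) - known_bigrams]

# Usage: sentences with unfamiliar word pairs (often voice-recognition errors)
# are surfaced for human review.
known = build_known_bigrams(["The specimen is received in formalin."])
print(flag_sentences(
    "The specimen is received in formalin. The specimen is received in four malin.",
    known,
))
```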


This online game wants to teach the public about AI bias

#artificialintelligence

Artificial intelligence might be coming for your next job, just not in the way you feared. The past few years have seen any number of articles that warn about a future where AI and automation drive humans into mass unemployment. To a considerable extent, those threats are overblown and distant. But a more imminent threat to jobs is that of algorithmic bias, the effect of machine learning models making decisions based on the wrong patterns in their training examples. An online game developed by computer science students at New York University aims to educate the public about the effects of AI bias in hiring.


Women Leaders in Technology Event - June 25

#artificialintelligence

You'll walk away with more knowledge and new connections gained over a good meal! Artificial Intelligence (AI) is applied to many areas of human life – across major verticals such as education, health care, and government – influencing significant decisions that will impact us. At its best, AI can be used to augment human judgment and reduce both conscious and unconscious biases. However, training data, algorithms, and other design choices that shape AI systems may reflect and amplify existing cultural prejudices and inequalities. Sarah Myers West is a postdoctoral researcher at the AI Now Institute and an affiliate researcher at the Berkman-Klein Center for Internet and Society.


Adversarial Task-Specific Privacy Preservation under Attribute Attack

arXiv.org Machine Learning

With the prevalence of machine learning services, crowdsourced data containing sensitive information poses substantial privacy challenges. Existing works focusing on protecting against membership inference attacks under the rigorous notion of differential privacy are susceptible to attribute inference attacks. In this paper, we develop a theoretical framework for task-specific privacy under the attack of attribute inference. Under our framework, we propose a minimax optimization formulation with a practical algorithm to protect a given attribute and preserve utility. We also extend our formulation so that multiple attributes could be simultaneously protected. Theoretically, we prove an information-theoretic lower bound to characterize the inherent tradeoff between utility and privacy when they are correlated. Empirically, we conduct experiments with real-world tasks that demonstrate the effectiveness of our method compared with state-of-the-art baseline approaches.
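The minimax formulation can be illustrated with a generic adversarial training sketch: an encoder is optimized so that a task head keeps its utility while an attribute adversary fails to recover the protected attribute, here implemented via gradient reversal. This is a sketch of that style of objective, not the paper's exact algorithm; the layer sizes and names are assumptions.

```python
# Generic minimax sketch of adversarial attribute protection in PyTorch.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing back into the encoder.
        return -ctx.lambd * grad_output, None

encoder   = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
task_head = nn.Linear(16, 2)   # utility task
adversary = nn.Linear(16, 2)   # attribute inference attacker (e.g., gender)

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()) + list(adversary.parameters()),
    lr=1e-3,
)
ce = nn.CrossEntropyLoss()

def train_step(x, y_task, y_attr, lambd=1.0):
    z = encoder(x)
    task_loss = ce(task_head(z), y_task)
    # Adversary minimizes the attribute loss; the encoder, seeing the reversed
    # gradient, maximizes it while still minimizing the task loss.
    adv_loss = ce(adversary(GradReverse.apply(z, lambd)), y_attr)
    loss = task_loss + adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return task_loss.item(), adv_loss.item()

# Toy usage with random data standing in for crowdsourced features.
x = torch.randn(64, 32)
print(train_step(x, torch.randint(0, 2, (64,)), torch.randint(0, 2, (64,))))
```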


Detecting Bias with Generative Counterfactual Face Attribute Augmentation

arXiv.org Machine Learning

We introduce a simple framework for identifying biases of a smiling attribute classifier. Our method poses counterfactual questions of the form: how would the prediction change if this face characteristic had been different? We leverage recent advances in generative adversarial networks to build a realistic generative model of face images that affords controlled manipulation of specific image characteristics. We introduce a set of metrics that measure the effect of manipulating a specific property of an image on the output of a trained classifier. Empirically, we identify several different factors of variation that affect the predictions of a smiling classifier trained on CelebA.
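The counterfactual metrics amount to asking how much the classifier's score moves when a single latent characteristic is nudged and the face is regenerated. The sketch below uses placeholder generator and classifier modules as stand-ins, not the paper's GAN or metrics code.

```python
# Toy sketch: measure the shift in a smiling classifier's score under a
# counterfactual manipulation of one latent attribute direction.
import torch

def counterfactual_influence(generator, classifier, z, attr_direction, alpha=1.0):
    """Mean shift in classifier score when one latent characteristic is manipulated.

    generator:      maps latent codes to (flattened) images
    classifier:     maps images to P(smiling)
    z:              batch of latent codes, shape (N, latent_dim)
    attr_direction: latent direction controlling a single characteristic
    """
    with torch.no_grad():
        base_score = classifier(generator(z))
        cf_score   = classifier(generator(z + alpha * attr_direction))
    return (cf_score - base_score).mean().item()

# Placeholder modules so the sketch runs end to end.
latent_dim = 8
generator  = torch.nn.Linear(latent_dim, 3 * 64 * 64)  # pretend decoder
classifier = torch.nn.Sequential(torch.nn.Linear(3 * 64 * 64, 1), torch.nn.Sigmoid())
z = torch.randn(16, latent_dim)
direction = torch.randn(latent_dim)
print(counterfactual_influence(generator, classifier, z, direction))
```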


Compositional Fairness Constraints for Graph Embeddings

arXiv.org Artificial Intelligence

Learning high-quality node embeddings is a key building block for machine learning models that operate on graph data, such as social networks and recommender systems. However, existing graph embedding techniques are unable to cope with fairness constraints, e.g., ensuring that the learned representations do not correlate with certain attributes, such as age or gender. Here, we introduce an adversarial framework to enforce fairness constraints on graph embeddings. Our approach is compositional – meaning that it can flexibly accommodate different combinations of fairness constraints during inference. For instance, in the context of social recommendations, our framework would allow one user to request that their recommendations are invariant to both their age and gender, while also allowing another user to request invariance to just their age. Experiments on standard knowledge graph and recommender system benchmarks highlight the utility of our proposed framework.
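The compositional property can be pictured as one learned filter per sensitive attribute, composed on demand at inference time so different users can request invariance to different attribute subsets. The sketch below is a rough illustration; the module shapes, names, and residual filtering choice are assumptions, not the paper's architecture.

```python
# Sketch: per-attribute filters composed on demand over a node embedding.
import torch
import torch.nn as nn

class CompositionalFilter(nn.Module):
    def __init__(self, dim: int, attributes: list[str]):
        super().__init__()
        self.filters = nn.ModuleDict({attr: nn.Linear(dim, dim) for attr in attributes})

    def forward(self, embedding: torch.Tensor, requested: list[str]) -> torch.Tensor:
        # Apply only the filters for the attributes this user wants removed;
        # during training, an adversary per attribute would try to recover
        # that attribute from the filtered embedding.
        out = embedding
        for attr in requested:
            out = out + self.filters[attr](out)  # residual filtering
        return out

dim = 64
node_embedding = torch.randn(1, dim)
comp = CompositionalFilter(dim, ["age", "gender"])

# One user asks for invariance to both age and gender, another to age only.
both = comp(node_embedding, ["age", "gender"])
age_only = comp(node_embedding, ["age"])
print(both.shape, age_only.shape)
```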


Everyone's talking about ethics in AI. Here's what they're missing

#artificialintelligence

Most of us do not have an equal voice or representation in this new world order. Leading the way instead are scientists and engineers who don't seem to understand how to represent the ways we live as individuals or in groups – the ways we work, cooperate, and exist together – nor how to incorporate into their models our ethnic, cultural, gender, age, geographic, or economic diversity. The result is that AI will benefit some of us far more than others, depending upon who we are, our gender and ethnic identities, how much income or power we have, where we are in the world, and what we want to do. The power structures that built the world's complex civic and corporate systems were not initially concerned with diversity or equality, and as those systems become automated, untangling what that means for the rest of us becomes much more complicated. In the process, there is a risk that we will become further dependent on systems that don't represent us.