How to Remove Gender Bias in Machine Learning Models: NLP and Word Embeddings
Most widely used word embeddings are glaringly sexist; let us look at some ways to de-bias them.

Note: this article reviews the arguments made by Bolukbasi et al. in the paper "Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings". All diagrams are made using draw.io.

Word embeddings are at the core of NLP applications, and they often end up biased towards one gender because of the stereotypes inherent in the large text corpora they are trained on. When such models are deployed to production, they can further widen gender inequality and have far-reaching consequences for our society as a whole. To get a gist of what I'm talking about, consider the very title of Bolukbasi et al., 2016: "Man is to Computer Programmer as Woman is to Homemaker?" — a trained embedding completes that analogy with a blatant stereotype.
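To make this concrete, here is a minimal sketch of how gender bias in an embedding can be measured and removed along a single direction. The vectors below are toy, hand-picked values (not real trained embeddings), and the he−she difference is only the simplest possible stand-in for the gender subspace the paper actually computes:

```python
import numpy as np

# Toy 4-dimensional "embeddings" -- illustrative values, not trained vectors.
emb = {
    "he":         np.array([ 1.0, 0.2, 0.1, 0.0]),
    "she":        np.array([-1.0, 0.2, 0.1, 0.0]),
    "programmer": np.array([ 0.4, 0.9, 0.3, 0.1]),
    "homemaker":  np.array([-0.5, 0.8, 0.2, 0.1]),
}

def gender_bias(word):
    """Cosine similarity between a word vector and the he-she direction.

    Positive values lean "male", negative lean "female".
    """
    g = emb["he"] - emb["she"]  # crude one-vector gender direction
    v = emb[word]
    return float(v @ g / (np.linalg.norm(v) * np.linalg.norm(g)))

def debias(word):
    """Hard-neutralize a word: subtract its component along the gender direction."""
    g = emb["he"] - emb["she"]
    g = g / np.linalg.norm(g)
    v = emb[word]
    return v - (v @ g) * g
```

With these toy values, `gender_bias("programmer")` comes out positive and `gender_bias("homemaker")` negative, mirroring the stereotype in the paper's title; after `debias`, the projection onto the gender direction is zero.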
Sep-12-2020, 08:51:01 GMT