We can reduce gender bias in natural-language AI, but it will take a lot more work
Thanks to breakthroughs in natural language processing (NLP), machines can generate increasingly sophisticated representations of words. Every year, research groups release more and more powerful language models -- like the recently announced GPT-3, M2M-100, and mT5 -- that can write complex essays or translate text between multiple languages more accurately than previous iterations. However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably pick up the human biases that exist in language data itself. This summer, GPT-3's researchers reported inherent biases in the model's output related to gender, race, and religion. The gender biases included associations between gender and occupation, as well as gendered descriptive words.
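To make the gender-occupation association concrete, here is a minimal sketch of the kind of probe researchers run against word embeddings: compare how close an occupation word sits to "he" versus "she" in vector space. The vectors below are hypothetical toy numbers chosen only to illustrate the idea, not output from any real model.

```python
import math

# Hypothetical toy vectors standing in for real word embeddings
# (illustrative numbers only, not from an actual trained model).
EMB = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.1],
    "nurse":    [0.2, 0.8, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def gender_association(word):
    """Positive score => closer to 'he'; negative => closer to 'she'."""
    return cosine(EMB[word], EMB["he"]) - cosine(EMB[word], EMB["she"])

for occupation in ("engineer", "nurse"):
    print(f"{occupation}: {gender_association(occupation):+.3f}")
```

In a biased embedding space, occupation words skew toward one gendered anchor, which is exactly the pattern the GPT-3 analysis surfaced at the level of generated text.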
Dec-7-2020, 05:00:10 GMT