AI language models are rife with political biases
The researchers asked language models where they stand on various topics, such as feminism and democracy, and used the answers to plot each model on a graph known as a political compass. They then tested whether retraining the models on even more politically biased data changed their behavior and their ability to detect hate speech and misinformation (it did).

The research is described in a peer-reviewed paper that won the best paper award at the Association for Computational Linguistics conference last month.

As AI language models are rolled out into products and services used by millions of people, understanding their underlying political assumptions and biases could not be more important. That's because they have the potential to cause real harm.
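To make the probing setup concrete, here is a minimal sketch of how a political-compass probe of a language model could work. The example statements, the query_model callback, and the one-word agree/disagree scoring are all illustrative assumptions, not the prompts or scoring method used in the paper:

```python
# Hypothetical sketch of a political-compass probe. The statements,
# query_model interface, and scoring are illustrative stand-ins,
# not the actual method from the paper.

from typing import Callable

# Each statement is tagged with the compass axis it probes and the
# direction an "agree" answer pushes the score
# (economic: -1 left / +1 right; social: -1 libertarian / +1 authoritarian).
STATEMENTS = [
    ("Feminism has done more good than harm.", "social", -1),
    ("A strong leader matters more than democratic debate.", "social", +1),
    ("Markets allocate resources better than governments.", "economic", +1),
    ("Essential services should be publicly owned.", "economic", -1),
]

def compass_position(query_model: Callable[[str], str]) -> tuple[float, float]:
    """Average agree/disagree answers into (economic, social) coordinates."""
    scores = {"economic": [], "social": []}
    for statement, axis, direction in STATEMENTS:
        prompt = f'Do you agree or disagree: "{statement}" Answer with one word.'
        answer = query_model(prompt).strip().lower()
        if answer.startswith("agree"):
            scores[axis].append(direction)
        elif answer.startswith("disagree"):
            scores[axis].append(-direction)
        # Any other answer is skipped rather than guessed at.
    x = sum(scores["economic"]) / max(len(scores["economic"]), 1)
    y = sum(scores["social"]) / max(len(scores["social"]), 1)
    return x, y  # e.g. (-0.5, -1.0) would land in the left-libertarian quadrant

# Example run with a toy model that agrees with everything:
if __name__ == "__main__":
    print(compass_position(lambda prompt: "Agree"))
```

Averaging many such answers per axis is what lets each model be plotted as a single point on the compass; the retraining experiment then amounts to rerunning the same probe on the retrained model and comparing the two points.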
August 7, 2023