The Download: political AI models, and a wrongful arrest
How they did it: The team asked language models where they stand on various topics, such as feminism and democracy. They used the answers to plot the models on a political compass, then tested whether retraining them on even more politically biased data changed their behavior and their ability to detect hate speech and misinformation (it did).

Why it matters: As AI language models are rolled out into products and services used by millions, understanding their underlying political assumptions could not be more important, because those assumptions have the potential to cause real harm. A chatbot offering health-care advice might refuse to offer advice on abortion or contraception, for example.
Aug-8-2023, 12:08:00 GMT