OpenAI says ChatGPT treats us all the same (most of the time)

MIT Technology Review 

Bias in AI is a huge problem. Ethicists have long studied the impact of bias when companies use AI models to screen résumés or loan applications, for example (instances of what the OpenAI researchers call third-person fairness). But the rise of chatbots, which let individuals interact with models directly, puts a new spin on the problem. "We wanted to study how it shows up in ChatGPT in particular," Alex Beutel, a researcher at OpenAI, told MIT Technology Review in an exclusive preview of results published today. Instead of screening a résumé you've already written, you might ask ChatGPT to write one for you, says Beutel: "If it knows my name, how does that affect the response?"