Trust But Verify: Peeking Inside the "Black Box" of Machine Learning
Artificial intelligence can be a powerful tool for analyzing massive amounts of data, finding connections and correlations that humans can't. However, unlike a person solving a math problem, many AI models can't easily explain the steps they took to reach their final answers. They are what's known in computer science as black boxes: You can see what goes in and what comes out; what happens in between is a mystery. The black-box problem is baked into many machine learning models, explains Laura Blattner, an assistant professor of finance at Stanford GSB. "The power of the technology is its ability to reflect the complexity in the world," she says.
Bias isn't the only problem with credit scores, and no, AI can't help
But in the biggest-ever study of real-world mortgage data, economists Laura Blattner at Stanford University and Scott Nelson at the University of Chicago show that differences in mortgage approval between minority and majority groups are not just down to bias, but to the fact that minority and low-income groups have less data in their credit histories. When that data is used to calculate a credit score, and the credit score is used to predict loan default, the prediction will be less precise. It is this lack of precision that leads to inequality, not just bias. The implications are stark: fairer algorithms won't fix the problem. "It's a really striking result," says Ashesh Rambachan, who studies machine learning and economics at Harvard University but was not involved in the study.