The case for placing AI at the heart of digitally robust financial regulation

#artificialintelligence

"Data is the new oil." Originally coined in 2006 by the British mathematician Clive Humby, this phrase is arguably more apt today than it was then, as smartphones rival automobiles for relevance and the technology giants know more about us than we would like to admit. Just as it does for the financial services industry, the hyper-digitization of the economy presents both opportunity and potential peril for financial regulators. On the upside, reams of information are newly within their reach, filled with signals about financial system risks that regulators spend their days trying to understand. The explosion of data sheds light on global money movement, economic trends, customer onboarding decisions, quality of loan underwriting, noncompliance with regulations, financial institutions' efforts to reach the underserved, and much more. Importantly, it also contains the answers to regulators' questions about the risks of new technology itself. Digitization of finance generates novel kinds of hazards and accelerates their development. Problems can flare up between scheduled regulatory examinations and can accumulate imperceptibly beneath the surface of information reflected in traditional reports. Thanks to digitization, regulators today have a chance to gather and analyze much more data and to see much of it in something close to real time. The potential for peril arises from the concern that the regulators' current technology framework lacks the capacity to synthesize the data. The irony is that this flood of information is too much for them to handle.


Unpacking the Black Box: Regulating Algorithmic Decisions

arXiv.org Machine Learning

We characterize optimal oversight of algorithms in a world where an agent designs a complex prediction function but a principal is limited in how much information she can learn about it. We show that restricting agents to prediction functions simple enough to be fully transparent is inefficient as long as the bias induced by misalignment between the principal's and the agent's preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that minimize overall information loss, the aim of many post-hoc explainer tools, will generally be inefficient, since they explain the average behavior of the prediction function rather than the sources of mis-prediction that matter for welfare-relevant outcomes. Targeted tools that focus on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending.
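
To make the abstract's distinction concrete, here is a minimal sketch in Python with synthetic data; it is not the paper's code, and every variable in it is invented for illustration. It contrasts an "average behavior" audit (how well a trivial surrogate reproduces the black box overall) with a targeted audit of group-level false positives:

    # Illustrative only: synthetic lending-style data, not the paper's setup.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)    # 0/1 protected-class label (synthetic)
    y_true = rng.integers(0, 2, n)   # true repayment outcome (synthetic)
    # A hypothetical black-box score, slightly inflated for group 1.
    score = 0.5 * y_true + 0.1 * group + rng.normal(0, 0.25, n)
    y_pred = (score > 0.5).astype(int)

    # "Average behavior" audit: agreement between the black box and a trivial
    # majority-class surrogate. High agreement says nothing about who is harmed.
    surrogate = np.full(n, int(round(y_pred.mean())))
    print("surrogate agreement:", (surrogate == y_pred).mean())

    # Targeted audit: false positive rates by group, the kind of
    # incentive-misalignment measure the abstract calls welfare-relevant.
    for g in (0, 1):
        mask = (group == g) & (y_true == 0)
        print(f"group {g} false positive rate:", y_pred[mask].mean())

On data like this, the surrogate agrees with the black box roughly 70% of the time while group 1's false positive rate is more than double group 0's, which is exactly the gap between explaining average behavior and locating welfare-relevant mis-prediction.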


AI Weekly: Algorithmic discrimination highlights the need for regulation

#artificialintelligence

This week, a piece from The Markup uncovered biases in U.S. mortgage-approval algorithms that lead lenders to turn down people of color more often than white applicants. A decisioning model called Classic FICO didn't consider everyday payments -- like on-time rent and utility payments, among others -- and instead rewarded traditional credit, to which Black, Native American, Asian, and Latino Americans have less access than white Americans. The findings aren't revelatory: back in 2018, researchers at the University of California, Berkeley found that mortgage lenders charge higher interest rates to these borrowers than to white borrowers with comparable credit scores. But they do point to the challenges of regulating companies that riskily embrace AI for decision-making, particularly in industries with the potential to inflict real-world harms.
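
For readers unfamiliar with how such disparities are quantified, the core arithmetic is simple. The sketch below uses invented approval counts and the common "four-fifths rule" heuristic; it is not The Markup's data or methodology:

    # Hypothetical counts, for illustration only.
    approvals = {"white": 8_000, "black": 5_500}
    applications = {"white": 10_000, "black": 10_000}

    rates = {g: approvals[g] / applications[g] for g in approvals}
    ratio = rates["black"] / rates["white"]  # adverse-impact ratio
    print(rates)
    print(f"adverse-impact ratio: {ratio:.2f}")  # below 0.80 is the usual red flag

A ratio of 0.69, as in these made-up counts, would fall well below the four-fifths threshold that auditors often use as a first screen for disparate impact.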


FICO scores leave out 'people on the margins,' Upstart's CEO says. Can AI make lending more inclusive -- without creating bias of its own?

#artificialintelligence

Dave Girouard, the chief executive of the Silicon Valley AI lending platform Upstart Holdings Inc., understood the worry. "The concern that the use of AI in credit decisioning could replicate or even amplify human bias is well-founded," he said in his testimony at the hearing. But Girouard, who co-founded Upstart in 2012, also said he had created the San Mateo, Calif.-based company to broaden access to affordable credit through "modern technology and data science." And he took aim at the shortcomings he sees in traditional credit scoring. The FICO score, introduced in 1989, has become "the default way banks judge a loan applicant," he said.


Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination

arXiv.org Machine Learning

The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness and, in particular, the disparate impacts that ostensibly color-blind algorithms can have on different groups. Examples include credit decisioning, hiring, advertising, criminal justice, personalized medicine, and targeted policymaking, where in some cases legislative or regulatory frameworks for fairness exist and define specific protected classes. In this paper we study a fundamental challenge to assessing disparate impacts in practice: protected class membership is often not observed in the data. This is particularly a problem in lending and healthcare. We consider the use of an auxiliary dataset, such as the US census, that includes class labels but not decisions or outcomes. We show that a variety of common disparity measures are generally unidentifiable aside from some unrealistic cases, providing a new perspective on the documented biases of popular proxy-based methods. We provide exact characterizations of the sharpest-possible partial identification set of disparities, either under no assumptions or when we incorporate mild smoothness constraints. We further provide optimization-based algorithms for computing and visualizing these sets, which enable reliable and robust assessments -- an important tool when disparity assessment can have far-reaching policy implications. We demonstrate this in two case studies with real data: mortgage lending and personalized medicine dosing.
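
The proxy-based methods whose documented biases the abstract mentions typically impute class-membership probabilities from an auxiliary source (for example, BISG-style estimators built on surname and geography) and weight decisions by those probabilities. A toy simulation, ours rather than the paper's, shows how a noisy proxy can attenuate the estimated disparity:

    # Simulated data; illustrates the proxy problem, not the paper's method.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000
    z = rng.integers(0, 2, n)  # true protected class, unobserved in practice
    # Noisy estimate of P(Z=1) from a proxy such as geography.
    proxy_p = np.clip(0.7 * z + 0.15 + rng.normal(0, 0.1, n), 0, 1)
    decision = rng.binomial(1, 0.6 - 0.2 * z)  # true approval gap: 0.6 vs 0.4

    # Oracle disparity (computable only because this is a simulation):
    true_disp = decision[z == 1].mean() - decision[z == 0].mean()

    # Proxy-weighted estimate: weight each decision by class probabilities.
    est_1 = (decision * proxy_p).sum() / proxy_p.sum()
    est_0 = (decision * (1 - proxy_p)).sum() / (1 - proxy_p).sum()
    print(f"true disparity: {true_disp:.3f}")
    print(f"proxy estimate: {est_1 - est_0:.3f}")

The proxy estimate lands noticeably closer to zero than the true gap, the sort of systematic bias that motivates the paper's partial identification sets, which bound the disparity honestly rather than reporting a single attenuated point estimate.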


Fintech: A Change in the Mortgage Ecosystem

#artificialintelligence

Almost daily, a new study or survey suggests that artificial intelligence (AI) and machine learning (ML) will revolutionize our lives. This past summer, the Treasury Department released a report recommending that the development of AI be facilitated, given the potential it holds for financial services companies and the overall economy. The agency also found that AI was one of the three biggest areas of investment for financial services companies last year. But the Treasury Department is not alone in backing AI and machine learning. The Federal Reserve has acknowledged both technologies, as has the Financial Industry Regulatory Authority (FINRA), which noted that AI could help firms prevent money laundering and improve data management and customer service.


Big data means small margins in the mortgage industry of the future

#artificialintelligence

The big story in mortgages today is the rise in rates. For the first time in years, we're seeing 30-year fixed mortgage rates consistently above 4%, and a 5% rate is in sight. Higher rates make sense if you look at it one way: the economy is strong, inflation is climbing, and it's safe to expect Federal Reserve hikes in 2018 and 2019. Industry veterans might be sighing with relief.
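
To put the rate move in dollar terms, the standard fixed-rate amortization formula is all that is needed; the $300,000 principal below is an assumption chosen purely for illustration:

    # Monthly payment on a fixed-rate mortgage: P * r(1+r)^n / ((1+r)^n - 1),
    # where r is the monthly rate and n the number of monthly payments.
    def monthly_payment(principal: float, annual_rate: float, years: int = 30) -> float:
        r = annual_rate / 12
        n = years * 12
        return principal * r * (1 + r) ** n / ((1 + r) ** n - 1)

    for rate in (0.04, 0.05):
        print(f"{rate:.0%}: ${monthly_payment(300_000, rate):,.2f}/month")
    # prints roughly $1,432 at 4% and $1,610 at 5%

Moving from 4% to 5% raises the payment on that assumed loan by roughly 12%, which is why even a one-point climb is the big story for borrowers and lenders alike.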