The case for placing AI at the heart of digitally robust financial regulation

#artificialintelligence

"Data is the new oil." Originally coined in 2006 by the British mathematician Clive Humby, the phrase is arguably more apt today than it was then, as smartphones rival automobiles for relevance and the technology giants know more about us than we would like to admit. Just as it does for the financial services industry, the hyper-digitization of the economy presents both opportunity and potential peril for financial regulators. On the upside, reams of information are newly within their reach, filled with signals about the financial system risks that regulators spend their days trying to understand. The explosion of data sheds light on global money movement, economic trends, customer onboarding decisions, quality of loan underwriting, noncompliance with regulations, financial institutions' efforts to reach the underserved, and much more. Importantly, it also contains the answers to regulators' questions about the risks of new technology itself. Digitization of finance generates novel kinds of hazards and accelerates their development. Problems can flare up between scheduled regulatory examinations and can accumulate imperceptibly beneath the surface of information reflected in traditional reports. Thanks to digitization, regulators today have a chance to gather and analyze much more data and to see much of it in something close to real time. The peril lies in the concern that regulators' current technology framework lacks the capacity to synthesize all this data; ironically, the flood of information is too much for them to handle.


A.I. Bias Caused 80% Of Black Mortgage Applicants To Be Denied

#artificialintelligence

Artificial intelligence and its inherent bias seem to be an ongoing contributing factor in slowing minorities' home-loan approvals. An investigation by The Markup found lenders were more likely to deny home loans to people of color than to white people with similar financial characteristics. Specifically, Black applicants were 80% more likely to be rejected than comparable white applicants, Latino applicants were 40% more likely, and Native American applicants were 70% more likely. How detrimental is the secret bias hidden in mortgage algorithms? It's important to note that 45% of the country's largest mortgage lenders now offer online or app-based loan origination, as FinTech looks to play a major role in reducing bias in the home lending market, CultureBanx reported.
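Figures like "80% more likely to be denied" are relative comparisons, and are easy to misread as absolute denial rates. A minimal sketch of how such relative figures are computed, using illustrative denial rates rather than The Markup's actual data:

```python
def relative_denial_increase(group_rate: float, baseline_rate: float) -> float:
    """How much more likely denial is for a group relative to a baseline,
    as a fraction (0.8 means '80% more likely')."""
    return group_rate / baseline_rate - 1.0

# Hypothetical denial rates, chosen only to illustrate the arithmetic:
white_denial = 0.10   # 10% of white applicants denied
black_denial = 0.18   # 18% of Black applicants denied

# 0.18 / 0.10 - 1.0 = 0.8, i.e. "80% more likely to be denied" --
# even though the absolute denial rate is 18%, nowhere near 80%.
print(f"{relative_denial_increase(black_denial, white_denial):.0%}")
```

The point of the sketch is that a large relative disparity can coexist with modest absolute rates, which is why the headline's phrasing ("80% Of Black Mortgage Applicants To Be Denied") misstates the finding.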


AI Can Take on Bias in Lending

#artificialintelligence

Humans invented artificial intelligence, so it is an unfortunate reality that human biases can be baked into AI. Businesses that use AI, however, do not need to replicate these historical mistakes. Today, we can deploy and scale carefully designed AI across organizations to root out bias rather than reinforce it. This shift is happening now in consumer lending, an industry with a history of using biased systems and processes to write loans. For years, creditors have used models that misrepresent the creditworthiness of women and minorities with discriminatory credit-scoring systems and other practices. Until recently, for example, consistently paying rent did not help on mortgage applications, an exclusion that especially disadvantaged people of color.


Unpacking the Black Box: Regulating Algorithmic Decisions

arXiv.org Machine Learning

We characterize optimal oversight of algorithms in a world where an agent designs a complex prediction function but a principal is limited in the amount of information she can learn about the prediction function. We show that limiting agents to prediction functions that are simple enough to be fully transparent is inefficient as long as the bias induced by misalignment between principal's and agent's preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that focus on minimizing overall information loss, the focus of many post-hoc explainer tools, will generally be inefficient since they focus on explaining the average behavior of the prediction function rather than sources of mis-prediction, which matter for welfare-relevant outcomes. Targeted tools that focus on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending.
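The paper's contrast between audit tools that minimize overall information loss and targeted tools that measure a specific source of misalignment can be illustrated with a toy simulation. Everything below (the data-generating process, the 0.4 threshold, the group effect) is hypothetical and chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)             # 0/1 group membership
y_true = rng.integers(0, 2, n)            # true repayment outcome
# A hypothetical black-box score that leaks group membership,
# inflating positives for group 1:
score = 0.5 * y_true + 0.15 * group + rng.normal(0, 0.3, n)
y_pred = (score > 0.4).astype(int)

# "Overall" view: average agreement with outcomes, the kind of
# aggregate fit an information-loss-minimizing explainer targets.
overall_agreement = (y_pred == y_true).mean()

def fpr(pred: np.ndarray, truth: np.ndarray) -> float:
    """False-positive rate: share of true negatives predicted positive."""
    neg = truth == 0
    return pred[neg].mean()

# Targeted view: excess false positives for group 1 vs group 0,
# the welfare-relevant disparity the aggregate number hides.
fpr_gap = fpr(y_pred[group == 1], y_true[group == 1]) \
        - fpr(y_pred[group == 0], y_true[group == 0])

print(f"overall agreement: {overall_agreement:.2f}, FPR gap: {fpr_gap:.2f}")
```

In this toy setup the aggregate agreement looks unremarkable while the group-conditional false-positive gap is clearly positive, mirroring the paper's claim that targeted audit metrics surface mis-prediction sources that average-behavior explainers miss.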


AI Weekly: Algorithmic discrimination highlights the need for regulation

#artificialintelligence

This week, a piece from The Markup uncovered biases in U.S. mortgage-approval algorithms that lead lenders to turn down people of color more often than white applicants. A decisioning model called Classic FICO didn't consider everyday payments -- like on-time rent and utility checks, among others -- and instead rewarded traditional credit, to which Black, Native American, Asian, and Latino Americans have less access than white Americans. The findings aren't revelatory: back in 2018, researchers at the University of California, Berkeley found that mortgage lenders charge higher interest rates to these borrowers compared to white borrowers with comparable credit scores. But they do point to the challenges in regulating companies that riskily embrace AI for decision-making, particularly in industries with the potential to inflict real-world harms.


SEC Eyes Rules for Financial Firms' Digital Engagement Practices: Reuters

#artificialintelligence

The U.S. Securities and Exchange Commission (SEC) will seek input on whether digital customer engagement innovations used by financial firms should be governed by existing rules or may need new ones, commission chair Gary Gensler told Reuters. While the SEC's thinking on the subject is at an "early stage," its rules may need updating to account for an artificial intelligence-led revolution in predictive analytics, differential marketing and behavioral prompts designed to optimize customer engagement, he said. The SEC plans to launch a sweeping consultation in the coming days that could have major ramifications for retail brokers, wealth managers and robo-advisers, which increasingly use such tools to drive customers to higher-revenue products. "I really believe data analytics and AI can bring a lot of positives, but it means we should look back and think about what does this mean for user interface, user engagement, fairness and bias," said Gensler. "What does it mean about rules written ...


FICO scores leave out 'people on the margins,' Upstart's CEO says. Can AI make lending more inclusive -- without creating bias of its own?

#artificialintelligence

Dave Girouard, the chief executive of the AI lending platform Upstart Holdings Inc. (UPST) in Silicon Valley, understood the worry. "The concern that the use of AI in credit decisioning could replicate or even amplify human bias is well-founded," he said in his testimony at the hearing. But Girouard, who co-founded Upstart in 2012, also said he had created the San Mateo, Calif.-based company to broaden access to affordable credit through "modern technology and data science." And he took aim at the shortcomings he sees in traditional credit scoring. The FICO score, introduced in 1989, has become "the default way banks judge a loan applicant," Girouard said in his testimony.


Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts

arXiv.org Artificial Intelligence

Governments are increasingly turning to algorithmic risk assessments when making important decisions, believing that these algorithms will improve public servants' ability to make policy-relevant predictions and thereby lead to more informed decisions. Yet because many policy decisions require balancing risk-minimization with competing social goals, evaluating the impacts of risk assessments requires considering how public servants are influenced by risk assessments when making policy decisions rather than just how accurately these algorithms make predictions. Through an online experiment with 2,140 lay participants simulating two high-stakes government contexts, we provide the first large-scale evidence that risk assessments can systematically alter decision-making processes by increasing the salience of risk as a factor in decisions and that these shifts could exacerbate racial disparities. These results demonstrate that improving human prediction accuracy with algorithms does not necessarily improve human decisions and highlight the need to experimentally test how government algorithms are used by human decision-makers.


A.I. Could Be The New Play To Increase Minority Homeownership

#artificialintelligence

Artificial intelligence and its inherent bias may not be as judgmental as previously thought, at least in the case of home loans. It appears the use of algorithms for online mortgage lending can reduce discrimination against certain groups, including minorities, according to a recent study from the National Bureau of Economic Research. This could end up becoming the main tool in closing the racial wealth gap, especially as banks start using AI for lending decisions. The Breakdown You Need to Know: The study found that in-person mortgage lenders typically reject minority applicants at a rate 6% higher than applicants with comparable economic backgrounds. However, when the application was online and an algorithm made the decision, the acceptance and rejection rates were the same.