AI Can Take on Bias in Lending

#artificialintelligence

Humans invented artificial intelligence, so it is an unfortunate reality that human biases can be baked into AI. Businesses that use AI, however, do not need to replicate these historical mistakes. Today, we can deploy and scale carefully designed AI across organizations to root out bias rather than reinforce it. This shift is happening now in consumer lending, an industry with a history of using biased systems and processes to write loans. For years, creditors have relied on credit-scoring systems and other practices that misrepresent the creditworthiness of women and minorities. Until recently, for example, a consistent record of paying rent did not help on mortgage applications, an exclusion that especially disadvantaged people of color.


Unpacking the Black Box: Regulating Algorithmic Decisions

arXiv.org Machine Learning

We characterize optimal oversight of algorithms in a world where an agent designs a complex prediction function but a principal is limited in the amount of information she can learn about the prediction function. We show that limiting agents to prediction functions that are simple enough to be fully transparent is inefficient as long as the bias induced by misalignment between the principal's and the agent's preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that minimize overall information loss, the focus of many post-hoc explainer tools, will generally be inefficient, since they explain the average behavior of the prediction function rather than the sources of mis-prediction that matter for welfare-relevant outcomes. Targeted tools that focus on the source of incentive misalignment (e.g., excess false positives or racial disparities) can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending.
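To make the contrast concrete, here is a minimal sketch in Python of a targeted audit statistic of the kind the paper favors: instead of scoring how faithfully a surrogate reproduces the model's average behavior, it measures a welfare-relevant quantity directly, here excess false positives by group. The function names and toy data are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    # FPR = P(predicted repay | actually defaulted)
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean() if negatives.any() else float("nan")

def targeted_audit(y_true, y_pred, group):
    # Per-group FPR plus the worst-case gap: a misalignment-specific
    # statistic, in contrast to an explainer's average-fidelity score.
    rates = {g: false_positive_rate(y_true[group == g], y_pred[group == g])
             for g in np.unique(group)}
    return rates, max(rates.values()) - min(rates.values())

# Toy data (assumed): 1 = repaid / predicted to repay.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)
y_pred = np.where(group == "B", y_true | rng.integers(0, 2, 1000), y_true)

rates, gap = targeted_audit(y_true, y_pred, group)
print(rates, gap)  # group B shows excess false positives by construction
```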


A.I. Bias Caused 80% Of Black Mortgage Applicants To Be Denied

#artificialintelligence

Artificial intelligence and its inherent bias appear to be an ongoing factor slowing minority home-loan approvals. An investigation by The Markup found lenders were more likely to deny home loans to people of color than to white people with similar financial characteristics: Black applicants were 80% more likely to be rejected, Latino applicants 40% more likely, and Native American applicants 70% more likely. How detrimental is the secret bias hidden in mortgage algorithms? It's important to note that 45% of the country's largest mortgage lenders now offer online or app-based loan origination, as FinTech looks to play a major role in reducing bias in the home lending market, CultureBanx reported.


FICO scores leave out 'people on the margins,' Upstart's CEO says. Can AI make lending more inclusive -- without creating bias of its own?

#artificialintelligence

Dave Girouard, the chief executive of the Silicon Valley AI lending platform Upstart Holdings Inc. (UPST), understood the worry. "The concern that the use of AI in credit decisioning could replicate or even amplify human bias is well-founded," he said in his testimony at the hearing. But Girouard, who co-founded Upstart in 2012, also said he had created the San Mateo, Calif.-based company to broaden access to affordable credit through "modern technology and data science." And he took aim at the shortcomings he sees in traditional credit scoring. The FICO score, introduced in 1989, has become "the default way banks judge a loan applicant," Girouard said in his testimony.


Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts

arXiv.org Artificial Intelligence

Governments are increasingly turning to algorithmic risk assessments when making important decisions, believing that these algorithms will improve public servants' ability to make policy-relevant predictions and thereby lead to more informed decisions. Yet because many policy decisions require balancing risk-minimization with competing social goals, evaluating the impacts of risk assessments requires considering how public servants are influenced by risk assessments when making policy decisions rather than just how accurately these algorithms make predictions. Through an online experiment with 2,140 lay participants simulating two high-stakes government contexts, we provide the first large-scale evidence that risk assessments can systematically alter decision-making processes by increasing the salience of risk as a factor in decisions and that these shifts could exacerbate racial disparities. These results demonstrate that improving human prediction accuracy with algorithms does not necessarily improve human decisions and highlight the need to experimentally test how government algorithms are used by human decision-makers.


Reducing bias in AI-based financial services

#artificialintelligence

Artificial intelligence (AI) presents an opportunity to transform how we allocate credit and risk, and to create fairer, more inclusive systems. AI's ability to sidestep the traditional credit reporting and scoring system that helps perpetuate existing bias makes it a rare, if not unique, opportunity to alter the status quo. However, AI can just as easily go in the other direction, exacerbating existing bias and creating cycles that reinforce biased credit allocation while making discrimination in lending even harder to detect. Will we unlock the positive, worsen the negative, or maintain the status quo by embracing new technology? This paper proposes a framework to evaluate the impact of AI in consumer lending. The goal is to incorporate new data and harness AI to expand credit to consumers who need it, on better terms than are currently provided. It builds on our existing system's dual goals of pricing financial services based on the true risk the individual consumer poses while aiming to prevent discrimination on prohibited bases (e.g., race, gender, DNA, or marital status).
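As a rough illustration of those dual goals, the hypothetical helper below checks both at once: whether predicted default risk is calibrated within each group (pricing true risk) and whether approval rates diverge across groups (a coarse discrimination screen). This is a sketch over assumed toy inputs, not the paper's framework.

```python
import numpy as np

def dual_goal_report(p_default, defaulted, approved, group):
    """Per-group calibration gap (risk-pricing goal) and approval
    rate (anti-discrimination screen), from per-applicant arrays."""
    report = {}
    for g in np.unique(group):
        m = group == g
        report[g] = {
            "calibration_gap": abs(p_default[m].mean() - defaulted[m].mean()),
            "approval_rate": approved[m].mean(),
        }
    return report

# Toy inputs (assumed): 500 applicants in two groups.
rng = np.random.default_rng(1)
group = rng.choice(["A", "B"], 500)
p_default = rng.uniform(0, 0.5, 500)
defaulted = (rng.random(500) < p_default).astype(float)
approved = (p_default < 0.25).astype(float)

for g, stats in dual_goal_report(p_default, defaulted, approved, group).items():
    print(g, stats)
```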


Housing Market Prediction Problem using Different Machine Learning Algorithms: A Case Study

arXiv.org Machine Learning

Developing an accurate prediction model for housing prices is essential for socioeconomic development and the wellbeing of citizens. In this paper, a diverse set of machine learning algorithms, including XGBoost, CatBoost, Random Forest, Lasso, and a Voting Regressor, is employed to predict housing prices using publicly available datasets. A housing dataset of 62,723 records from January 2015 to November 2019 was obtained from the Volusia County, Florida, Property Appraiser website. The records are publicly available and include the real estate/economic database, maps, and other associated information. The database is usually updated weekly, in accordance with State of Florida regulations. Housing price prediction models are then developed using these machine learning techniques, and their regression performance is compared. Finally, an improved housing price prediction model for assisting the housing market is proposed; in particular, a house seller, buyer, or real estate broker can use the predictions to make better-informed decisions. Keywords: Housing Price Prediction, Machine Learning Algorithms, XGBoost Method, Target Binning.

1) Introduction

Starting in 2005, rising interest rates slowed the U.S. housing market considerably. The investment bank Lehman Brothers Holdings was hit especially hard and was forced into bankruptcy in 2008. Housing prices declined sharply and, combined with the subprime mortgage crisis, this slowed the economy and weakened asset values, ultimately depressing the global housing market and causing a global crisis (Park & Kwon Bae, 2015). Consequently, economists turned their attention to predicting threats of this kind to economic stability.
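A minimal sketch of the model-comparison loop the abstract describes, using scikit-learn and xgboost. The Volusia County records are not bundled with any standard library, so scikit-learn's California housing data stands in as an assumed placeholder; the hyperparameters are illustrative, not the paper's.

```python
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import Lasso
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor  # assumes the xgboost package is installed

# Placeholder data; swap in the Volusia County appraiser records
# to reproduce the paper's setting.
X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "XGBoost": XGBRegressor(n_estimators=200, random_state=42),
    "Random Forest": RandomForestRegressor(n_estimators=200, random_state=42),
    "Lasso": Lasso(alpha=0.001),
}
# The voting regressor averages the base models' predictions.
models["Voting"] = VotingRegressor([(k, v) for k, v in models.items()])

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: held-out R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```

CatBoost is omitted here only to keep the sketch short; it plugs into the same comparison loop.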


Google rolls out new AI-powered product for banks processing PPP loans

ZDNet

Google on Friday announced the launch of the PPP Lending AI Solution, a new product designed to help financial institutions quickly process loan applications from small businesses seeking assistance during the COVID-19 pandemic. The new tool comprises three different components to help banks track loan applications, extract data from applications for quick processing, and analyze historical loan data. From cancelled conferences to disrupted supply chains, not a corner of the global economy is immune to the spread of COVID-19. The US Small Business Administration's Paycheck Protection Program (PPP) gives loans to small businesses so that they can keep workers on payroll through the pandemic. The number of applications coming in has overwhelmed the lending institutions approved to make PPP loans.


A.I. Could Be The New Play To Increase Minority Homeownership

#artificialintelligence

Artificial intelligence may be less judgmental than previously thought, at least in the case of home loans. The use of algorithms for online mortgage lending appears to reduce discrimination against certain groups, including minorities, according to a recent study from the National Bureau of Economic Research. This could become a key tool in closing the racial wealth gap, especially as banks start using AI for lending decisions. The Breakdown You Need to Know: The study found that in-person mortgage lenders typically reject minority applicants at a rate 6% higher than applicants with comparable economic backgrounds. However, when the application was online and an algorithm made the decision, acceptance and rejection rates were the same.


Dynamic Modeling and Equilibria in Fair Decision Making

arXiv.org Machine Learning

Recent studies on fairness in automated decision-making systems have both investigated the potential future impact of these decisions on the population at large and emphasized that imposing "typical" fairness constraints such as demographic parity or equality of opportunity does not guarantee a benefit to disadvantaged groups. However, these previous studies have focused on either simple one-step cost/benefit criteria or discrete underlying state spaces. In this work, we first propose a natural continuous representation of population state, governed by the Beta distribution, using a loan-granting setting as a running example. Next, we apply a model of population dynamics under lending decisions and show that when conditional payback probabilities are estimated correctly, 1) "optimal" behavior by lenders can lead to "Matthew Effect" bifurcations (i.e., "the rich get richer and the poor get poorer"), but 2) many common fairness constraints on the allowable policies cause groups to converge to the same equilibrium point. Lastly, we contrast our results in the case of misspecified conditional probability estimates with prior work, and show that for this model, different levels of group misestimation guarantee that even fair policies lead to bifurcations. We illustrate some of the modeling conclusions on real data from credit scoring.
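A minimal simulation sketch of the setup the abstract describes: each group's repayment-score distribution is summarized as a Beta distribution, a lender approves applicants above a fixed threshold, and repayments and defaults shift the group's state. The update rule and parameters are illustrative assumptions, not the paper's exact dynamics; the point is only that, under a shared threshold, approval volumes and hence state updates differ by group, which is the mechanism behind the bifurcations discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)

def lending_round(alpha, beta, threshold, n=10_000):
    """One round for a group whose scores follow Beta(alpha, beta)."""
    scores = rng.beta(alpha, beta, n)
    approved = scores >= threshold
    repaid = rng.random(n) < scores  # treat the score as the true payback prob.
    # Repaid loans nudge the population state up, defaults nudge it down.
    alpha += (approved & repaid).sum() / n
    beta += (approved & ~repaid).sum() / n
    return alpha, beta

# Two groups, same threshold policy, different starting states.
state = {"A": (4.0, 2.0), "B": (2.0, 4.0)}
for _ in range(50):
    state = {g: lending_round(a, b, threshold=0.6) for g, (a, b) in state.items()}

for g, (a, b) in state.items():
    print(f"group {g}: mean repayment score = {a / (a + b):.3f}")
```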