A.I. Bias Caused 80% Of Black Mortgage Applicants To Be Denied

#artificialintelligence

Artificial intelligence and its inherent bias appear to be an ongoing contributing factor in slowing minorities' home loan approvals. An investigation by The Markup found lenders were more likely to deny home loans to people of color than to white people with similar financial characteristics. Specifically, lenders were 80% more likely to reject Black applicants, 40% more likely to reject Latino applicants, and 70% more likely to reject Native American applicants than comparable white applicants. How detrimental is the secret bias hidden in mortgage algorithms? It's important to note that 45% of the country's largest mortgage lenders now offer online or app-based loan origination, as fintech looks to play a major role in reducing bias in the home lending market, CultureBanx reported.
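
These "more likely" figures are relative rates, not absolute denial shares, which is easy to misread. A minimal sketch of the arithmetic, using a purely hypothetical baseline denial rate (the 10% figure below is an assumption, not a number from The Markup's data):

```python
# Illustrative arithmetic only: what "80% more likely to be denied"
# implies in absolute terms, given a hypothetical baseline rate.
baseline_denial_rate = 0.10  # assumed white-applicant denial rate (hypothetical)

relative_increase = {"Black": 0.80, "Latino": 0.40, "Native American": 0.70}

for group, increase in relative_increase.items():
    implied_rate = baseline_denial_rate * (1 + increase)
    print(f"{group}: implied denial rate {implied_rate:.0%} "
          f"vs. {baseline_denial_rate:.0%} baseline")
```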


AI Can Take on Bias in Lending

#artificialintelligence

Humans invented artificial intelligence, so it is an unfortunate reality that human biases can be baked into AI. Businesses that use AI, however, do not need to replicate these historical mistakes. Today, we can deploy and scale carefully designed AI across organizations to root out bias rather than reinforce it. This shift is happening now in consumer lending, an industry with a history of using biased systems and processes to write loans. For years, creditors have used models that misrepresent the creditworthiness of women and minorities with discriminatory credit-scoring systems and other practices. Until recently, for example, consistently paying rent did not help on mortgage applications, an exclusion that especially disadvantaged people of color.


Unpacking the Black Box: Regulating Algorithmic Decisions

arXiv.org Machine Learning

We characterize optimal oversight of algorithms in a world where an agent designs a complex prediction function but a principal is limited in the amount of information she can learn about the prediction function. We show that limiting agents to prediction functions that are simple enough to be fully transparent is inefficient as long as the bias induced by misalignment between the principal's and the agent's preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that minimize overall information loss, the goal of many post-hoc explainer tools, will generally be inefficient, since they explain the average behavior of the prediction function rather than the sources of mis-prediction that matter for welfare-relevant outcomes. Targeted tools that focus on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending.
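
To make the distinction concrete, here is a minimal sketch of a targeted audit in the paper's sense: rather than explaining a model's average behavior, it compares a welfare-relevant error rate (false positives) across groups. The function name and toy data below are hypothetical, not taken from the paper:

```python
import numpy as np

def targeted_audit(y_true, y_pred, group):
    """Hypothetical targeted audit: compare false-positive rates across
    groups, reflecting the paper's point that audits aimed at the source
    of misalignment (e.g., excess false positives) can outperform
    generic post-hoc explainers."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    rates = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)   # true negatives in group g
        rates[g] = y_pred[negatives].mean()        # share wrongly flagged
    return rates

# Toy data: two groups, binary outcomes and binary predictions.
fpr = targeted_audit(
    y_true=[0, 0, 1, 0, 0, 1, 0, 0],
    y_pred=[1, 0, 1, 0, 1, 1, 1, 0],
    group=["A", "A", "A", "A", "B", "B", "B", "B"],
)
print(fpr)  # e.g., {'A': 0.33, 'B': 0.67} -- a disparity a generic explainer could miss
```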


FICO scores leave out 'people on the margins,' Upstart's CEO says. Can AI make lending more inclusive -- without creating bias of its own?

#artificialintelligence

Dave Girouard, the chief executive of the Silicon Valley AI lending platform Upstart Holdings Inc. (UPST), understood the worry. "The concern that the use of AI in credit decisioning could replicate or even amplify human bias is well-founded," he said in his testimony at the hearing. But Girouard, who co-founded Upstart in 2012, also said he had created the San Mateo, Calif.-based company to broaden access to affordable credit through "modern technology and data science." And he took aim at the shortcomings he sees in traditional credit scoring. The FICO score, introduced in 1989, has become "the default way banks judge a loan applicant," Girouard said in his testimony.


Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts

arXiv.org Artificial Intelligence

Governments are increasingly turning to algorithmic risk assessments when making important decisions, believing that these algorithms will improve public servants' ability to make policy-relevant predictions and thereby lead to more informed decisions. Yet because many policy decisions require balancing risk-minimization with competing social goals, evaluating the impacts of risk assessments requires considering how public servants are influenced by risk assessments when making policy decisions rather than just how accurately these algorithms make predictions. Through an online experiment with 2,140 lay participants simulating two high-stakes government contexts, we provide the first large-scale evidence that risk assessments can systematically alter decision-making processes by increasing the salience of risk as a factor in decisions and that these shifts could exacerbate racial disparities. These results demonstrate that improving human prediction accuracy with algorithms does not necessarily improve human decisions and highlight the need to experimentally test how government algorithms are used by human decision-makers.
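
One way to see the gap between prediction accuracy and decision influence is to compare how strongly decisions track risk when a score is shown versus hidden. A simulated sketch of that comparison follows; the effect sizes are made up and are not the paper's estimates:

```python
import numpy as np

# Hypothetical analysis sketch: does displaying a risk score increase the
# weight decision-makers put on risk? Compare the decision-risk slope in
# a control arm (score hidden) vs. a treatment arm (score shown).
rng = np.random.default_rng(0)

risk = rng.uniform(0, 1, size=1000)       # underlying risk per case
shown = rng.integers(0, 2, size=1000)     # 1 = risk score displayed
# Simulated decisions: treated participants lean harder on risk (assumed).
p = 0.2 + (0.3 + 0.4 * shown) * risk
decision = rng.binomial(1, np.clip(p, 0, 1))

for arm, label in [(0, "score hidden"), (1, "score shown")]:
    mask = shown == arm
    slope = np.polyfit(risk[mask], decision[mask], 1)[0]
    print(f"{label}: decision-risk slope ~ {slope:.2f}")
```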


A.I. Could Be The New Play To Increase Minority Homeownership

#artificialintelligence

Artificial intelligence and its inherent bias may not be as judgmental as previously thought, at least in the case of home loans. It appears the use of algorithms for online mortgage lending can reduce discrimination against certain groups, including minorities, according to a recent study from the National Bureau of Economic Research. This could end up becoming the main tool in closing the racial wealth gap, especially as banks start using AI for lending decisions. The breakdown you need to know: the study found that in-person mortgage lenders typically reject minority applicants at a rate 6% higher than white applicants with comparable economic backgrounds. However, when the application was online and an algorithm made the decision, the acceptance and rejection rates were the same.


Assessing Algorithmic Fairness with Unobserved Protected Class Using Data Combination

arXiv.org Machine Learning

The increasing impact of algorithmic decisions on people's lives compels us to scrutinize their fairness and, in particular, the disparate impacts that ostensibly color-blind algorithms can have on different groups. Examples include credit decisioning, hiring, advertising, criminal justice, personalized medicine, and targeted policymaking, where in some cases legislative or regulatory frameworks for fairness exist and define specific protected classes. In this paper we study a fundamental challenge to assessing disparate impacts in practice: protected class membership is often not observed in the data. This is particularly a problem in lending and healthcare. We consider the use of an auxiliary dataset, such as the US census, that includes class labels but not decisions or outcomes. We show that a variety of common disparity measures are generally unidentifiable aside from some unrealistic cases, providing a new perspective on the documented biases of popular proxy-based methods. We provide exact characterizations of the sharpest-possible partial identification set of disparities, either under no assumptions or when we incorporate mild smoothness constraints. We further provide optimization-based algorithms for computing and visualizing these sets, which enables reliable and robust assessments, an important tool when disparity assessment can have far-reaching policy implications. We demonstrate this in two case studies with real data: mortgage lending and personalized medicine dosing.
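
For context, the proxy-based point estimate that the paper argues is generally unidentified looks roughly like the sketch below. The proxy probabilities and decisions are simulated, and BISG-style surname-and-geography proxies are mentioned only as one example of where such probabilities come from:

```python
import numpy as np

# Sketch of a naive proxy-based disparity estimate: protected class is
# unobserved, so each record carries only a proxy probability (e.g., a
# BISG-style estimate). All numbers here are simulated, not real data.
rng = np.random.default_rng(1)
n = 10_000
p_protected = rng.beta(2, 5, size=n)                    # proxy P(protected class)
approved = rng.binomial(1, 0.7 - 0.2 * p_protected)     # simulated decisions

# Proxy-weighted "demographic disparity": approval-rate gap implied by proxies.
rate_protected = np.average(approved, weights=p_protected)
rate_other = np.average(approved, weights=1 - p_protected)
print(f"proxy-implied approval gap: {rate_other - rate_protected:+.3f}")
# The paper shows such point estimates are generally unidentified; it
# instead bounds the true disparity with partial-identification sets.
```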


Infusing domain knowledge in AI-based "black box" models for better explainability with application in bankruptcy prediction

arXiv.org Artificial Intelligence

Although "black box" models such as artificial neural networks, support vector machines, and ensemble approaches continue to show superior performance in many disciplines, their adoption in sensitive disciplines (e.g., finance, healthcare) is questionable due to the lack of interpretability and explainability of the model. In fact, future adoption of "black box" models is difficult because of the European Union's recent "right to explanation" rule, under which a user can ask for an explanation behind an algorithmic decision, and the newly proposed US bill, the "Algorithmic Accountability Act", which would require companies to assess their machine learning systems for bias and discrimination and take corrective measures. Top bankruptcy prediction models are AI-based and are in need of better explainability, i.e., the extent to which the internal working mechanisms of an AI system can be explained in human terms. Although explainable artificial intelligence is an emerging field of research, infusing domain knowledge for better explainability might be a possible solution. In this work, we demonstrate a way to collect and infuse domain knowledge into a "black box" model for bankruptcy prediction. Our experiments reveal that infused domain knowledge makes the output of the black box model more interpretable and explainable.
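
The paper's own knowledge-infusion procedure is not reproduced here, but one simple illustration of the general idea is to expose a named, domain-derived ratio as an explicit input feature, so that attributions land on a concept analysts already recognize. Everything below (data, labels, the working-capital ratio) is a hypothetical toy, not the authors' setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical sketch: infuse domain knowledge into a "black box"
# bankruptcy model by adding a classic solvency ratio (working capital /
# total assets, in the spirit of Altman-style ratios) as a named feature.
rng = np.random.default_rng(2)
n = 500
working_capital = rng.normal(50, 20, n)
total_assets = rng.normal(200, 30, n)

domain_ratio = working_capital / total_assets          # domain-derived feature
X = np.column_stack([working_capital, total_assets, domain_ratio])
y = (domain_ratio + rng.normal(0, 0.1, n) < 0.2).astype(int)  # toy label

model = RandomForestClassifier(random_state=0).fit(X, y)
# Feature importances can now attribute signal to a named domain concept
# rather than only to raw balance-sheet inputs.
print(dict(zip(["working_capital", "total_assets", "wc/ta ratio"],
               model.feature_importances_.round(2))))
```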


Fintech: A Change in the Mortgage Ecosystem

#artificialintelligence

Almost daily, a new study or survey suggests that artificial intelligence (AI) and machine learning (ML) will revolutionize our lives. This past summer, the Treasury Department released a report recommending that the development of AI be facilitated, given the potential it holds for financial services companies and the overall economy. The agency also found that AI was one of the three biggest areas of investment for financial services companies last year. But it's not just the Treasury Department that is backing AI and machine learning. The Federal Reserve has taken note of both, as has the Financial Industry Regulatory Authority (FINRA), which observed that AI could help banks prevent money laundering and improve data management and customer service.