MASCA: LLM-Based Multi-Agent System for Credit Assessment
Jajoo, Gautam, Chitale, Pranjal A., Agarwal, Saksham
Recent advancements in financial problem-solving have leveraged LLMs and agent-based systems, with a primary focus on trading and financial modeling. However, credit assessment remains an underexplored challenge, traditionally dependent on rule-based methods and statistical models. In this paper, we introduce MASCA, an LLM-driven multi-agent system designed to enhance credit evaluation by mirroring real-world decision-making processes. The framework employs a layered architecture where specialized LLM-based agents collaboratively tackle sub-tasks. Additionally, we integrate contrastive learning for risk and reward assessment to optimize decision-making. We further present a signaling game theory perspective on hierarchical multi-agent systems, offering theoretical insights into their structure and interactions. Our paper also includes a detailed bias analysis in credit assessment, addressing fairness concerns. Experimental results demonstrate that MASCA outperforms baseline approaches, highlighting the effectiveness of hierarchical LLM-based multi-agent systems in financial applications, particularly in credit scoring.
- Asia > Middle East > UAE > Abu Dhabi Emirate > Abu Dhabi (0.14)
- Asia > Middle East > Jordan (0.04)
- North America > United States > New York > New York County > New York City (0.04)
- Banking & Finance > Trading (1.00)
- Banking & Finance > Credit (1.00)
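The layered architecture described in the abstract can be sketched, in a deliberately simplified form, as specialist agents whose sub-scores a higher-layer agent aggregates. The agent roles, scoring rules, and names below are hypothetical illustrations; the abstract does not specify MASCA's actual agents, and a real deployment would back each agent with an LLM rather than a fixed formula.

```python
# Minimal sketch of a layered multi-agent credit pipeline (hypothetical,
# not the actual MASCA implementation).

from dataclasses import dataclass
from typing import Callable

@dataclass
class Applicant:
    income: float          # annual income
    debt: float            # outstanding debt
    late_payments: int     # count in the last 24 months

def income_agent(a: Applicant) -> float:
    """Layer-1 specialist: score repayment capacity from income vs. debt."""
    return min(1.0, a.income / max(a.debt, 1.0) / 5.0)

def history_agent(a: Applicant) -> float:
    """Layer-1 specialist: score behavioral history."""
    return max(0.0, 1.0 - 0.2 * a.late_payments)

def risk_reward_agent(scores: list) -> float:
    """Layer-2 aggregator: combine sub-scores into a single credit score."""
    return sum(scores) / len(scores)

def assess(a: Applicant, specialists: list) -> float:
    return risk_reward_agent([s(a) for s in specialists])

score = assess(Applicant(income=80_000, debt=20_000, late_payments=1),
               [income_agent, history_agent])
```

The hierarchy mirrors the abstract's division of labor: specialists handle sub-tasks, a higher layer makes the final call.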
Debiasing Alternative Data for Credit Underwriting Using Causal Inference
Alternative data provides valuable insights for lenders to evaluate a borrower's creditworthiness, which could help expand credit access to underserved groups and lower costs for borrowers. But some forms of alternative data have historically been excluded from credit underwriting because they could act as illegal proxies for a protected class like race or gender, a form of redlining. We propose a method for applying causal inference to a supervised machine learning model to debias alternative data so that it can be used for credit underwriting. We demonstrate how our algorithm, applied to a public credit dataset, improves model accuracy across different racial groups while providing theoretically robust nondiscrimination guarantees.
- North America > United States > New York > Kings County > New York City (0.05)
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Missouri > Jackson County > Kansas City (0.04)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Credit (1.00)
- Banking & Finance > Insurance (0.82)
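One standard way to make the proxy problem concrete is residualization: remove from each alternative-data feature the component that is linearly predictable from the protected attribute, and train only on what remains. The sketch below illustrates that generic idea on synthetic data; it is not the specific causal-inference algorithm the paper proposes.

```python
# Residualize a proxy feature against a protected attribute (generic
# illustration on synthetic data, not the paper's algorithm).

import numpy as np

rng = np.random.default_rng(0)
n = 1_000
protected = rng.integers(0, 2, size=n)       # e.g. a protected class label
# A feature correlated with the protected attribute (a potential proxy):
feature = 2.0 * protected + rng.normal(size=n)
raw_corr = np.corrcoef(feature, protected)[0, 1]

# Regress the feature on the protected attribute (with intercept) and
# keep only what the attribute cannot explain.
X = np.column_stack([np.ones(n), protected])
beta, *_ = np.linalg.lstsq(X, feature, rcond=None)
debiased = feature - X @ beta

# The debiased feature is empirically uncorrelated with the attribute.
corr = np.corrcoef(debiased, protected)[0, 1]
```

Because ordinary least squares residuals are orthogonal to the regressors, the remaining correlation is zero up to floating-point error.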
Hacking a surrogate model approach to XAI
Wilhelm, Alexander, Zweig, Katharina A.
In recent years, the number of new applications for highly complex AI systems has risen significantly. Algorithmic decision-making systems (ADMs) are one such application, where an AI system replaces the decision-making process of a human expert. As one approach to ensuring the fairness and transparency of such systems, explainable AI (XAI) has become more important. One way to achieve explainability is surrogate models, i.e., training a new, simpler machine learning model on the input-output relationship of a black box model. The simpler model could, for example, be a decision tree, which is thought to be intuitively understandable by humans. However, there is little insight into how well the surrogate model approximates the black box. Suppose the black box discriminates against a subgroup. Our main assumption is that a good surrogate model approach should bring such discriminating behavior to the attention of humans; prior to our research, we assumed that a surrogate decision tree would identify such a pattern at one of its first levels. However, in this article we show that even if the discriminated subgroup - while otherwise identical in all categories - does not get a single positive decision from the black box ADM system, the corresponding question of group membership can be pushed down to a level as deep as the operator of the system wants. We then generalize this finding to pinpoint the exact level of the tree on which the discriminating question is asked, and we show that in a more realistic scenario, where discrimination affects only some fraction of the disadvantaged group, it is even more feasible to hide such discrimination. Our approach generalizes easily to other surrogate models.
- North America > United States > New York (0.04)
- Europe > Germany > Rhineland-Palatinate > Kaiserslautern (0.04)
- North America > United States > California > San Francisco County > San Francisco (0.04)
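The basic setup the article analyzes can be reproduced in a few lines: train an opaque "black box" classifier, fit a shallow decision tree to its predictions, and measure fidelity, i.e., how often surrogate and black box agree. The data and models are synthetic stand-ins, assuming scikit-learn is available; the article's construction for hiding discrimination is not reproduced here.

```python
# Surrogate-model XAI setup: shallow tree approximating a black box
# (synthetic illustration; assumes scikit-learn is installed).

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2_000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)

# "Black box" ADM system:
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's *outputs*, not on y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box on fresh data.
X_new = rng.normal(size=(1_000, 6))
fidelity = (surrogate.predict(X_new) == black_box.predict(X_new)).mean()
```

The article's point is that high fidelity plus a shallow, readable tree does not guarantee that discriminating splits surface near the root.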
Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation
Kumar, I. Elizabeth, Hines, Keegan E., Dickerson, John P.
Credit is an essential component of financial wellbeing in America, and unequal access to it is a large factor in the economic disparities between demographic groups that exist today. Machine learning algorithms, sometimes trained on alternative data, are increasingly being used to determine access to credit, yet research has shown that machine learning can encode many different versions of "unfairness," raising the concern that banks and other financial institutions could -- potentially unwittingly -- engage in illegal discrimination through the use of this technology. In the US, there are laws in place to prevent discrimination in lending, and agencies charged with enforcing them. However, conversations around fair credit models in computer science and in policy are often misaligned: fair machine learning research often lacks legal and practical considerations specific to existing fair lending policy, and regulators have yet to issue new guidance on how, if at all, credit risk models should utilize practices and techniques from the research community. This paper aims to better align these sides of the conversation. We describe the current state of credit discrimination regulation in the United States, contextualize results from fair ML research to identify the specific fairness concerns raised by the use of machine learning in lending, and discuss regulatory opportunities to address these concerns.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Iowa (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
On the combination of graph data for assessing thin-file borrowers' creditworthiness
Muñoz-Cancino, Ricardo, Bravo, Cristián, Ríos, Sebastián A., Graña, Manuel
Thin-file borrowers are customers whose creditworthiness assessment is uncertain because they lack a credit history; many researchers have used borrowers' relationship and interaction networks, in the form of graphs, as an alternative data source to address this. Network data is traditionally incorporated through hand-crafted feature engineering; lately, graph neural networks have emerged as an alternative, but they still do not improve on the traditional method's performance. Here we introduce a framework that improves credit scoring models by blending several graph representation learning methods: feature engineering, graph embeddings, and graph neural networks. In this approach, we stack their outputs to produce a single score. We validated this framework using a unique multi-source dataset that characterizes the relationships and credit history of the entire population of a Latin American country, applying it to both application and behavior credit risk models, targeting both individuals and companies. Our results show that graph representation learning methods should be used as complements, not as the self-sufficient methods they are currently treated as. We improve statistical performance in terms of AUC and KS, outperforming traditional methods. In corporate lending, where the gain is much larger, this confirms that evaluating an unbanked company cannot rely solely on its own features: the business ecosystem in which these firms interact with their owners, suppliers, customers, and other companies provides novel knowledge that enables financial institutions to improve their creditworthiness assessments. Our results indicate when, and for which group, graph data should be used and what effect on performance to expect. They also show the substantial value of graph data for the unbanked credit scoring problem, particularly for bringing companies into the banking system.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Europe > Switzerland > Basel-City > Basel (0.04)
- South America > Uruguay > Maldonado > Maldonado (0.04)
- Research Report > New Finding (1.00)
- Research Report > Experimental Study (1.00)
- Banking & Finance > Loans (1.00)
- Banking & Finance > Credit (1.00)
- Information Technology > Information Management (1.00)
- Information Technology > Communications (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning (0.93)
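The stacking step described in the abstract - producing a single score from several graph-based models - can be sketched with a simple linear meta-learner over base-model scores. The synthetic score generation and the least-squares stacker below are illustrative assumptions, not the paper's exact pipeline.

```python
# Stacking several graph-based credit scores into one blended score
# (synthetic placeholders for real base-model outputs).

import numpy as np

rng = np.random.default_rng(0)
n = 2_000
default = rng.integers(0, 2, size=n)   # 1 = borrower defaulted

def noisy_score(strength):
    """Simulated out-of-fold score: a noisy signal of the true outcome."""
    return strength * default + rng.normal(scale=0.5, size=n)

base = np.column_stack([
    noisy_score(0.6),   # hand-crafted network features
    noisy_score(0.4),   # graph embeddings
    noisy_score(0.5),   # graph neural network
])

# Linear stacker: fit meta-weights on the base scores, blend into one score.
X = np.column_stack([np.ones(n), base])
w, *_ = np.linalg.lstsq(X, default, rcond=None)
blended = X @ w
accuracy = ((blended > 0.5) == default).mean()
```

The blend outperforms any single noisy base score here, which is the paper's "complements, not substitutes" point in miniature.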
How can AI and ML change the lending ecosystem?
AI and ML technologies diversify the lending ecosystem seamlessly, efficiently, and effectively. The digitalized world we live in has enabled individuals and businesses to grow and keep ahead of their competition. Many mobile lending apps have exploded in India in recent years due to the increasing accessibility of smartphones. The government encouraged digitization in banking, which resulted in financial technology (fintech) firms racing to fill the gaps, especially in the category of digital loans. Disruptive technologies such as artificial intelligence and machine learning are gaining popularity in nearly every industry. The financial sector is also a beneficiary of large amounts of data.
This is how AI bias really happens--and why it's so hard to fix – MIT Technology Review
The first thing computer scientists do when they create a deep-learning model is decide what they actually want it to achieve. A credit card company, for example, might want to predict a customer's creditworthiness, but "creditworthiness" is a rather nebulous concept. In order to translate it into something that can be computed, the company must decide whether it wants to, say, maximize its profit margins or maximize the number of loans that get repaid. It could then define creditworthiness within the context of that goal. The problem is that "those decisions are made for various business reasons other than fairness or discrimination," explains Solon Barocas, an assistant professor at Cornell University who specializes in fairness in machine learning. If the algorithm discovered that giving out subprime loans was an effective way to maximize profit, it would end up engaging in predatory behavior even if that wasn't the company's intention.
- North America > United States > Utah (0.05)
- North America > United States > Kentucky (0.05)
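Barocas's point about objective choice can be made concrete with a toy calculation: the same applicant is "creditworthy" under a profit objective but not under a repayment objective. All numbers below are invented for illustration.

```python
# Toy illustration: "creditworthiness" depends on the business objective.
# Each applicant: (repayment probability, interest revenue if repaid,
# loss if default). All figures are hypothetical.

applicants = {
    "prime":    (0.95, 600.0, 10_000.0),
    "subprime": (0.70, 5_000.0, 10_000.0),  # high fees and interest
}

def approve_for_repayment(p, revenue, loss, threshold=0.9):
    """Objective 1: maximize repaid loans -> approve only likely repayers."""
    return p >= threshold

def approve_for_profit(p, revenue, loss):
    """Objective 2: maximize expected profit, even if default is likely."""
    return p * revenue - (1 - p) * loss > 0
```

Under the profit objective the subprime loan is approved (expected profit 0.70 x 5,000 - 0.30 x 10,000 = 500 > 0) despite a 30% default risk, which is exactly the predatory dynamic the article describes.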
Promising Benefits of AI in the Financial Technology Market
Artificial intelligence (AI) is all the rage now. It's impacting numerous industries globally and changing the way we do things. One of the critical industries AI is making strides in is the financial technology "fintech" industry. AI now plays a significant role in facilitating financial services, replacing what required manual work a few years ago. For example, banks now apply AI to assess credit risks with high accuracy.
- Banking & Finance > Trading (1.00)
- Banking & Finance > Insurance (0.71)
- Government > Military > Cyberwarfare (0.32)
Alternative Credit-Scoring Provider FinScore Achieves 100% Mobile Subscriber Base Coverage in the Philippines
MANILA, Philippines, 16 February 2022 – Philippines-based alternative credit scoring company FinScore has become the first company to determine the creditworthiness of 100% of mobile subscribers in the country, the highest market reach in the Philippine alternative credit scoring market. This followed the company's successful data-sharing partnership and completed integration with G-Xchange Inc. (GXI), allowing the company to calculate telco credit scores for mobile subscribers of industry giant Globe Telecom. FinScore's flagship service enables lenders, including commercial banks, neobanks, buy-now-pay-later (BNPL) platforms, digital lenders, and multi-purpose lenders, to determine the creditworthiness of loan applicants who have little to no credit data. Traditional credit scoring services analyze only a few data variables, such as income level, homeownership, job title, and bank history. With over 70% of adult Filipinos unbanked or underbanked, these traditional methods leave that segment underserved, with reduced ability to get approved for loans.
- Asia > Philippines > Luzon > National Capital Region > City of Manila (0.57)
- Asia > Singapore (0.06)
This is how AI bias really happens--and why it's so hard to fix
Over the past few months, we've documented how the vast majority of AI's applications today are based on the category of algorithms known as deep learning, and how deep-learning algorithms find patterns in data. We've also covered how these technologies affect people's lives: how they can perpetuate injustice in hiring, retail, and security and may already be doing so in the criminal legal system. But it's not enough just to know that this bias exists. If we want to be able to fix it, we need to understand the mechanics of how it arises in the first place. We often shorthand our explanation of AI bias by blaming it on biased training data. The reality is more nuanced: bias can creep in long before the data is collected as well as at many other stages of the deep-learning process.
- North America > United States > Utah (0.05)
- North America > United States > Kentucky (0.05)