LendingClub
Formulating A Strategic Plan Based On Statistical Analyses And Applications For Financial Companies Through A Real-World Use Case
Formulating a strategic plan aligned with a company's business scope allows the company to explore data-driven ways of improving its business and mitigating risk quantitatively while using collected data for statistical applications. The company's business leadership generally organizes joint meetings with internal or external data analysis teams to design a plan for executing business-related statistical analyses. Such projects show which areas the company should invest in and where to adjust the budget for business verticals with low revenue. Furthermore, statistical applications can inform how to improve staff performance in the workplace. LendingClub, a peer-to-peer lending company, offers loans and investment products across several sectors, including personal and business loans, automobile loans, and health-related financing. LendingClub's business model comprises three primary players: borrowers, investors, and portfolios of issued loans. LendingClub aims to expand its statistical analytics, consisting of infrastructure and software algorithms, to ultimately deliver two capabilities: a) estimating the duration in which clients will pay off their loans; and b) making loan approval decisions within 30 minutes. To implement these two capabilities, the company has collected 12 years of data on granted and rejected loans, comprising 145 attributes and more than 2 million observations, of which 32 features have no missing values across the dataset.
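The completeness check mentioned above (32 of 145 attributes with no missing values) can be sketched as follows. This is an illustrative example on a tiny in-memory sample; the column names and rows are invented stand-ins for LendingClub's real export.

```python
# Hypothetical sketch: screen a loan dataset for fully populated columns,
# mirroring the check that found 32 of 145 attributes with no missing values.
# The in-memory sample below stands in for the real CSV export.
sample_rows = [
    {"loan_amnt": 10000, "term": "36 months", "emp_title": "Engineer"},
    {"loan_amnt": 5000,  "term": "60 months", "emp_title": None},
    {"loan_amnt": 20000, "term": "36 months", "emp_title": "Nurse"},
]

def complete_columns(rows):
    """Return the column names that are present and non-null in every row."""
    columns = rows[0].keys()
    return sorted(
        col for col in columns
        if all(row.get(col) is not None for row in rows)
    )

print(complete_columns(sample_rows))  # ['loan_amnt', 'term']
```

On the real dataset the same filter would be applied over all 145 attributes, and the fully populated columns are natural candidates for a first modeling pass, since they need no imputation.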
Random Forest Medium Article
Originally published on Towards AI. Machine learning algorithms have historically been used in the credit and fraud space.
Building a Neural Network to Predict Loan Risk
Until recently (through the end of 2018), LendingClub published a public dataset of all loans issued since the company's launch in 2007. With 2,260,701 loans and 151 potential variables to look at, my goal is to create a neural network model that predicts the fraction of an expected loan return a prospective borrower will pay back. Afterward, I'll create a public API to serve that model. This post is adapted from a Jupyter Notebook, so if you'd like to follow along in your own notebook, go ahead and fork mine on Kaggle or GitHub.
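One way to frame that regression target is the ratio of payments actually received to the total expected over the loan's term. The field names below (`installment`, `term_months`, `total_pymnt`) follow LendingClub's public schema, but the exact formula and the cap at 1.0 are illustrative assumptions, not the post's definitive target definition.

```python
# Hypothetical sketch of the regression target: the fraction of the expected
# total return (installment * number of payments) a borrower actually repaid.
def fraction_repaid(installment, term_months, total_pymnt):
    expected_return = installment * term_months
    # Cap at 1.0: late fees can push payments slightly above the scheduled
    # total, which we treat as a fully repaid loan (an assumption).
    return min(total_pymnt / expected_return, 1.0)

# A loan fully paid over its 36-month term scores 1.0; a charged-off loan
# that returned half of the expected payments scores 0.5.
print(fraction_repaid(300.0, 36, 10800.0))  # 1.0
print(fraction_repaid(300.0, 36, 5400.0))   # 0.5
```

A continuous target like this lets the network express partial losses, rather than collapsing every loan into a binary default/no-default label.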
Using Machine Learning to Recommend Investments in P2P Lending
Peer-to-peer lending marketplaces like LendingClub and Prosper Marketplace are driven by what is essentially a broker's fee for connecting investors and borrowers, so they are incentivized to increase the total number of transactions taking place on their platforms. Driven by ease of use, their off-the-shelf credit risk assessments score loans in grouped buckets. On a loan-by-loan basis this is inefficient, given each loan's uniqueness and the sheer amount of data collected from borrowers. Scoring risk on a more granular, continuous basis is not only possible but preferable to discrete, grouped buckets.
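The granularity argument can be made concrete with a small sketch: collapsing a continuous default-risk estimate into a lettered grade discards information. The bucket boundaries below are invented for illustration and do not reflect LendingClub's actual grading scheme.

```python
# Illustrative sketch: two loans with meaningfully different continuous risk
# estimates can land in the same discrete grade bucket, so bucketed pricing
# treats them alike. Boundaries are hypothetical.
GRADE_BOUNDS = [(0.05, "A"), (0.10, "B"), (0.18, "C"), (0.30, "D")]

def to_grade(risk):
    """Collapse a continuous risk score in [0, 1] into a discrete grade."""
    for upper, grade in GRADE_BOUNDS:
        if risk < upper:
            return grade
    return "E"

# 6% vs 9.9% estimated default probability: a large relative difference,
# yet both fall in the same "B" bucket.
print(to_grade(0.060))  # B
print(to_grade(0.099))  # B
```

An investor pricing on the continuous score would distinguish these two loans; an investor pricing on the grade cannot, which is exactly the inefficiency the passage describes.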
New Research Aims to Solve the Problem of AI Bias in "Black Box" Algorithms (Technology Review)
From picking stocks to examining x-rays, artificial intelligence is increasingly being used to make decisions that were formerly up to humans. But AI is only as good as the data it's trained on, and in many cases we end up baking our all-too-human biases into algorithms that have the potential to make a huge impact on people's lives. In a new paper published on the arXiv, researchers say they may have figured out a way to mitigate the problem for algorithms that are difficult for outsiders to examine--so-called "black box" systems. A particularly troubling area for bias to show up is in risk assessment modeling, which can decide, for example, a person's chances of being granted bail or approved for a loan. It is typically illegal to consider factors like race in such cases, but algorithms can learn to recognize and exploit the fact that a person's education level or home address may correlate with other demographic information, which can effectively imbue them with racial and other biases.
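The proxy-variable mechanism described above can be shown in miniature: even after a protected attribute is dropped from the training data, a correlated feature lets a model largely reconstruct it. All data below is synthetic and exists only to illustrate the correlation; the feature names are made up.

```python
# Minimal sketch of proxy leakage: a made-up ZIP-code-derived flag is highly
# correlated with a (dropped) protected attribute, so a model trained on the
# flag can still discriminate on the protected attribute indirectly.
protected = [1, 1, 1, 0, 0, 0, 1, 0]  # protected-group membership (not a model input)
zip_flag  = [1, 1, 0, 0, 0, 0, 1, 0]  # proxy feature that IS a model input

def agreement(a, b):
    """Fraction of rows where the proxy matches the protected attribute."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(agreement(protected, zip_flag))  # 0.875 -- the proxy nearly reconstructs it
```

This is why simply deleting the sensitive column is not enough: auditing methods like the ones the researchers propose have to account for what the remaining features can reveal.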