CFPB
The Government Wants to Protect Robux From Hackers
The Consumer Financial Protection Bureau proposed a new measure on Friday that could protect your Robux from scammers and hackers. The proposed rule would interpret terms in the Electronic Fund Transfer Act, or EFTA, which has traditionally been used to protect consumers from unauthorized debit transactions, to include some virtual currencies supplied by gaming and cryptocurrency companies. "Gamers--or in some cases their parents and guardians--have reported issues such as trouble when converting dollars to in-game currency, unauthorized transactions, account hacks and takeovers, theft, scams, and loss of assets," reads the CFPB's post announcing the proposal. "They have also described receiving limited to no help from gaming companies and the banks or digital wallets involved. Refunds are often denied, people are finding their gaming accounts suspended by the video game company after a player tries to get a refund from their financial institution, or people are left caught in doom loops with AI-powered customer service representatives while they're just trying to get straight answers."
- Leisure & Entertainment > Games > Computer Games (1.00)
- Banking & Finance (1.00)
- Government > Regional Government > North America Government > United States Government (0.37)
- Information Technology > Artificial Intelligence (0.96)
- Information Technology > e-Commerce (0.93)
AI Chatbots Are Causing Bank Customers Headaches - CNET
The Consumer Financial Protection Bureau issued a warning on Tuesday about generative AI chatbots being used by banks. The agency says it's received "numerous" complaints from customers who have interacted with the chatbots and failed to receive "timely, straightforward" answers to their questions. "Working with customers to resolve a problem or answer a question is an essential function for financial institutions – and is the basis of relationship banking," the agency said in its press release. AI chatbots run the risk of providing inaccurate financial information to customers or infringing on their privacy and data, the CFPB said.
Improving Accented Speech Recognition with Multi-Domain Training
Maison, Lucas, Estève, Yannick
However, they still lack generalization capability and are not robust to domain shifts like accent variations. In this work, we use speech audio representing four different French accents to create fine-tuning datasets that improve the robustness of pre-trained ASR models. By incorporating various accents in the training set, we obtain both in-domain and out-of-domain improvements. Our numerical experiments show that we can reduce error rates by up to 25% (relative) on African and Belgian accents compared to single-domain training while keeping a good performance on standard French. It is possible to add noise to the training data, modify voice speed, or transform voice by manipulating the vocal-source and vocal-tract characteristics [4]. Other approaches include applying speaker normalization or anonymization methods in a reverse manner, for example using Vocal Tract Length Perturbation.
[Table 1. Statistics for the datasets (duration in hours)]
- Africa > Republic of the Congo (0.05)
- Africa > Niger (0.05)
- Africa > Gabon (0.05)
- (5 more...)
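The augmentation techniques the abstract alludes to (adding noise to the training data, modifying voice speed) can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual pipeline; the function names and the toy sine-wave "utterance" are my own:

```python
import numpy as np

def add_noise(signal: np.ndarray, snr_db: float) -> np.ndarray:
    """Add white Gaussian noise at a target signal-to-noise ratio (dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def change_speed(signal: np.ndarray, factor: float) -> np.ndarray:
    """Naive speed perturbation by linear resampling.

    Note this simple approach also shifts pitch; production toolkits
    use time-stretching that preserves pitch.
    """
    old_idx = np.arange(len(signal))
    new_len = int(len(signal) / factor)
    new_idx = np.linspace(0, len(signal) - 1, new_len)
    return np.interp(new_idx, old_idx, signal)

# Toy 1-second, 440 Hz "utterance" sampled at 16 kHz
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)

noisy = add_noise(clean, snr_db=20)      # same length, degraded SNR
fast = change_speed(clean, factor=1.1)   # ~10% faster, so ~10% shorter
print(len(clean), len(fast))
```

Augmented copies like `noisy` and `fast` would simply be added to the fine-tuning set alongside the originals.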
Fintech Industry Must Transform to Help Underserved Communities
Alternative credit options can mean the difference between financial well-being and financial hardship for many borrowers. Fintech advancements such as buy-now-pay-later, plus the combination of credit models driven by artificial intelligence and machine learning, may pave the way for a fairer and more inclusive future of credit. But lessons from the financial crisis ring clear: When only one part of the market is required to comply with regulations, the other will compete by offering disadvantageous and risky products. Regulators are now faced with how to advance a regulatory framework that encourages innovation while protecting consumers. Buy-now-pay-later options, along with pandemic-era advances in artificial intelligence and machine learning, spurred marked industry growth, with implications for improved assistance to underserved communities.
Equalizing Credit Opportunity in Algorithms: Aligning Algorithmic Fairness Research with U.S. Fair Lending Regulation
Kumar, I. Elizabeth, Hines, Keegan E., Dickerson, John P.
Credit is an essential component of financial wellbeing in America, and unequal access to it is a large factor in the economic disparities between demographic groups that exist today. Today, machine learning algorithms, sometimes trained on alternative data, are increasingly being used to determine access to credit, yet research has shown that machine learning can encode many different versions of "unfairness," thus raising the concern that banks and other financial institutions could -- potentially unwittingly -- engage in illegal discrimination through the use of this technology. In the US, there are laws in place to make sure discrimination does not happen in lending and agencies charged with enforcing them. However, conversations around fair credit models in computer science and in policy are often misaligned: fair machine learning research often lacks legal and practical considerations specific to existing fair lending policy, and regulators have yet to issue new guidance on how, if at all, credit risk models should be utilizing practices and techniques from the research community. This paper aims to better align these sides of the conversation. We describe the current state of credit discrimination regulation in the United States, contextualize results from fair ML research to identify the specific fairness concerns raised by the use of machine learning in lending, and discuss regulatory opportunities to address these concerns.
- North America > United States > New York > New York County > New York City (0.14)
- North America > United States > Iowa (0.04)
- Oceania > Australia > New South Wales > Sydney (0.04)
- (14 more...)
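One concrete check from the fair-lending literature the paper engages with is the disparate-impact ratio (the "four-fifths rule" heuristic). A minimal sketch on toy decision data; the numbers are invented for illustration:

```python
import numpy as np

def disparate_impact_ratio(approved: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates: protected group vs. reference group.

    Ratios below roughly 0.8 are often flagged for further review
    under the four-fifths heuristic.
    """
    rate_protected = approved[group == 1].mean()
    rate_reference = approved[group == 0].mean()
    return rate_protected / rate_reference

# Toy outcomes: 1 = approved. group: 1 = protected class, 0 = reference.
approved = np.array([1, 0, 0, 1, 1, 1, 0, 1, 1, 1])
group    = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])

ratio = disparate_impact_ratio(approved, group)
print(round(ratio, 2))  # 0.5 approval rate vs. ~0.83 -> ratio 0.6
```

A ratio of 0.6 on this toy data would fall well below the 0.8 threshold; real fair-lending analysis layers legal and statistical considerations on top of any single metric.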
Fair Lending: Using AI to democratize compliance - CUInsight
In its most recent advisory, the CFPB addressed a critical question: "When creditors make credit decisions based on complex algorithms that prevent creditors from accurately identifying the specific reasons for denying credit or taking other adverse actions, do these creditors need to comply with the Equal Credit Opportunity Act's requirement to provide a statement of specific reasons to applicants against whom adverse action is taken?" The answer is an obvious "Yes." With the CFPB's circular reminding everyone of adverse action notice requirements under the ECOA, some credit unions find themselves in a quandary when it comes to explaining their credit decisions, which is perceived to be difficult when they use state-of-the-art decisioning algorithms. However, modern AI solutions have moved beyond mere explainability to enable fair lending, and have gone the extra mile to remove inherent biases that may arise in data-based models. Nonetheless, it is necessary to understand the CFPB's guidance and how AI can effectively be a solution itself. The use of algorithms in making lending decisions is nothing new. Credit risk assessment naturally requires gathering as much relevant data as possible. A mix of models and algorithms has been the backbone of credit decisions for around four decades, with credit analysts using financial statements, credit histories, and other data sources to estimate credit risk, set credit limits and recommend payment plans. Over time, the datasets in question have become so voluminous that lenders had to move from manual methodologies to computational models for data analysis. Recent advancements in computational methods have introduced the "AI" element in lending processes to make credit risk assessments much more accurate.
Artificial intelligence and machine learning models leverage a diverse set of alternative data sources beyond bureau data, use historical training data to determine non-linear correlations between data points, and provide advanced predictive signals on member behavior and lending outcomes. The unique proposition here is the ability of AI/ML models to analyze voluminous quantities of data, detect hitherto unknown correlations, and keep self-learning and adapting with little or no manual intervention. AI-enabled technologies have helped put the spotlight on the increasingly visible disparities in existing lending processes. A 2019 paper by Robert Bartlett and colleagues helps quantify this disparity: "Black and Latino applicants receive higher rejection rates of 61% compared to 48% for other races."
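The ECOA adverse-action requirement discussed above is commonly met by ranking how much each feature pushed an applicant's score down and reporting the top reasons. A minimal sketch assuming a simple linear (logistic-style) scoring model; the feature names, weights, and applicant values are all invented:

```python
import numpy as np

# Hypothetical linear credit model: higher score = better.
feature_names = ["utilization", "delinquencies", "inquiries", "age_of_file"]
weights = np.array([-1.2, -0.9, -0.4, 0.6])  # signs are illustrative only

def reason_codes(x: np.ndarray, top_n: int = 2) -> list:
    """Return the features that contributed most negatively to the score.

    This mirrors the common 'points below max' style of reason-code
    generation used with scorecard models.
    """
    contributions = weights * x
    order = np.argsort(contributions)  # most negative contribution first
    return [feature_names[i] for i in order[:top_n] if contributions[i] < 0]

# High utilization and two delinquencies drag this applicant's score down.
applicant = np.array([0.9, 2.0, 1.0, 0.2])
print(reason_codes(applicant))
```

For genuinely non-linear models, attribution methods such as SHAP play the analogous role, which is where the explainability debate in the article comes in.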
How Banks Can Shed Light on the 'Black Box' of AI Decision-Making
The use of artificial intelligence technology in banking has great potential, much of it still untapped. Its use in powering chatbots and digital assistants with natural language processing is one of the best-known AI applications. AI can also be used as part of data analytics, helping banks and credit unions detect fraud more quickly on the one hand and create more personalized customer messaging and offers on the other. Significantly, AI can help institutions -- bank and nonbank -- make faster lending decisions. However, there is a downside to the use of artificial intelligence, the consequences of which loom ominously for banks and credit unions.
- Banking & Finance (1.00)
- Transportation > Air (0.43)
Risks Lurk in AI Tools that Marketers Increasingly Rely On
When most bankers think of artificial intelligence and machine learning, they likely think of underwriting models and chatbots. However, the potential uses of these technologies keep growing and are virtually limitless. Unfortunately, many in the banking industry don't know how these new technologies are being used in financial services. For example, marketing and business development teams may not recognize the artificial intelligence and machine learning elements driving many of their own tools, and likely do not fully understand the risks the technology could pose to the organization. As artificial intelligence and machine learning proliferate, compliance considerations may not be consistently included in key decisions and processes that could have a big impact on the organization's risk profile.
Potential Bias in AI Consumer Decision Tools Eyed by FTC, CFPB
Given the growing use of artificial intelligence (AI) and automated decision-making tools in consumer-facing decisions, we expect federal regulators in 2022 to continue their recent track record of interest in potential discrimination and unfairness, as well as data accuracy and transparency. Significant technological developments in these areas and the increasing use of data analytics to make automated decisions will likely result in further regulatory action this year in three key areas: (1) assessing whether AI and algorithms are excluding particular consumer groups in an unfair and discriminatory manner, whether intentionally or not; (2) evaluating whether collected data accurately reflects real-world facts and whether companies are giving consumers an opportunity to correct mistakes; and (3) assessing whether automated decision-making tools are being used in a transparent manner. Over the last year, federal regulators with enforcement authority in the consumer space--the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB)--have expressed their intention to continue enforcement efforts. The FTC has identified "technology companies and digital platforms," "bias in algorithms and biometrics," and "deceptive and manipulative conduct on the Internet" as among its top enforcement priorities for the coming years, and directed staff to use compulsory processes to demand documents and testimony to investigate potential abuses in these areas. The FTC and the CFPB have each initiated or continued investigations into practices involving the collection of consumer data and the use of data analytics in consumer decisions, including the use of AI and algorithms by financial institutions, digital payment platforms, social media, and video-streaming firms.
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Information Technology > Artificial Intelligence > Applied AI (0.58)
- Information Technology > Communications > Social Media (0.36)
CFPB warnings of bias in AI could spook lenders
Rohit Chopra has seized on nearly every public opportunity as director of the Consumer Financial Protection Bureau to admonish companies about the potential misuse of artificial intelligence in lending decisions. Chopra has said that algorithms can never "be free of bias" and may result in credit determinations that are unfair to consumers. He claims machine learning can be anti-competitive and could lead to "digital redlining" and "robo discrimination." The message for banks and fast-moving fintechs is loud and clear: Enforcement actions related to the use of AI are coming, as is potential guidance on when alternative data such as utility and rent payments becomes risky in marketing, pricing and underwriting products, experts say. "The focus on artificial intelligence and machine learning is explicit," said Stephen Hayes, a partner at Relman Colfax PLLC and a former CFPB senior counsel.