Potential Bias in AI Consumer Decision Tools Eyed by FTC, CFPB
Given the growing use of artificial intelligence (AI) and automated decision-making tools in consumer-facing decisions, we expect federal regulators in 2022 to continue their recent focus on potential discrimination and unfairness, as well as on data accuracy and transparency. Significant technological developments in these areas, and the increasing use of data analytics to make automated decisions, will likely result in further regulatory action this year in three key areas: (1) assessing whether AI and algorithms are excluding particular consumer groups in an unfair and discriminatory manner, whether intentionally or not; (2) evaluating whether collected data accurately reflects real-world facts and whether companies are giving consumers an opportunity to correct mistakes; and (3) assessing whether automated decision-making tools are being used in a transparent manner.

Over the last year, the federal regulators with enforcement authority in the consumer space, the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB), have expressed their intention to continue enforcement efforts. The FTC has identified "technology companies and digital platforms," "bias in algorithms and biometrics," and "deceptive and manipulative conduct on the Internet" as among its top enforcement priorities for the coming years, and has directed staff to use compulsory process to demand documents and testimony in investigating potential abuses in these areas. The FTC and the CFPB have each initiated or continued investigations into practices involving the collection of consumer data and the use of data analytics in consumer decisions, including the use of AI and algorithms by financial institutions, digital payment platforms, social media companies, and video streaming firms.
Feb-6-2022, 01:40:12 GMT