Algorithmic or automated decision systems use data and statistical analysis to classify people for the purpose of assessing their eligibility for a benefit or penalty. Such systems have traditionally been used for credit decisions and are now widely used for employment screening, insurance eligibility, and marketing. They are also used in the public sector, including in the delivery of government services and in criminal justice sentencing and probation decisions. Most of these automated decision systems rely on traditional statistical techniques like regression analysis. Recently, though, these systems have incorporated machine learning to improve their accuracy and fairness. These advanced statistical techniques seek to find patterns in data without requiring the analyst to specify in advance which factors to use. They often find new, unexpected connections that might not be obvious to the analyst or follow from a common-sense or theoretical understanding of the subject matter. As a result, they can help discover new factors that improve the accuracy of eligibility predictions and the decisions based on them.
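The contrast between an analyst-specified model and a pattern-finding one can be illustrated with a toy sketch: the brute-force search below picks the most predictive factor and cutoff from invented applicant data on its own, with no factor named in advance. All names and figures here are hypothetical, not drawn from any real scoring system.

```python
# Hypothetical applicant records: (income, debt_ratio, years_at_job) -> approved?
records = [
    ((55, 0.20, 4), True),
    ((30, 0.55, 1), False),
    ((70, 0.10, 8), True),
    ((42, 0.60, 2), False),
    ((65, 0.15, 6), True),
    ((28, 0.50, 1), False),
]

def best_stump(records):
    """Search every feature, threshold, and direction for the most
    accurate one-factor rule; the analyst names no factor in advance."""
    best = (0.0, None)
    n = len(records)
    num_features = len(records[0][0])
    for i in range(num_features):
        for thresh in {f[i] for f, _ in records}:
            for direction in (1, -1):
                # Rule: approve iff the feature clears (or stays under) thresh.
                hits = sum(
                    ((f[i] >= thresh) if direction == 1 else (f[i] < thresh)) == y
                    for f, y in records
                )
                if hits / n > best[0]:
                    best = (hits / n, (i, thresh, direction))
    return best

accuracy, rule = best_stump(records)
```

On this toy data the search reaches an accuracy of 1.0 with a single factor and cutoff it settled on itself; real systems search far richer families of rules, but the principle is the same.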
The financial services industry has seen explosive growth in the use of Artificial Intelligence (AI) to supplement, and often supplant, existing processes, both customer-facing and internal. Given the potential created by rapid advancements in AI sophistication and functionality, more and more financial services firms are leveraging the technology to deploy new use cases for improved decision-making – particularly in the areas of anti-money laundering, fraud prevention, risk management, and lending. While the first wave of AI generally focused on automating manually intensive and repetitive tasks, banks are now turning to machine learning (ML) systems to uncover more dynamic ways of interpreting their vast troves of customer data. Whereas AI, at a fundamental level, permits a machine to imitate intelligent human behavior, ML is a specific application (or subset) of AI that enables systems to learn and improve automatically – e.g., to reduce errors or maximize the likelihood that their predictions will be true – without being explicitly programmed to make such adjustments. This development has exciting potential to expand the products available to underbanked communities and to improve services and the customer experience as a whole.
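A stripped-down sketch of what "learning without being explicitly programmed" means: the loop below fits a single weight by repeatedly nudging it in the direction that shrinks prediction error, so the final rule emerges from the data rather than from a programmer. The numbers are invented for illustration.

```python
# Hypothetical data: an outcome that is roughly 2x a single input.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

def train(steps=200, lr=0.01):
    w = 0.0  # initial guess: the input predicts nothing
    for _ in range(steps):
        # mean-squared-error gradient with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # nudge w to reduce the error
    return w

w = train()
```

After 200 steps the weight settles near 1.99, close to the best least-squares fit, even though no one wrote "multiply by two" anywhere in the code; each adjustment was driven only by the measured error.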
More and more entities are deploying machine learning and artificial intelligence to automate tasks previously performed by humans. Such efforts carry with them real benefits, such as the enhancement of operational efficiency and the reduction of costs, but they also raise a number of concerns regarding their potential impacts on human society, particularly as computer algorithms are increasingly used to determine important outcomes like individuals' treatment within the criminal justice system. This mixture of benefits and concerns is starting to attract the interest of regulators. Efforts in the European Union, Canada, and the United States have initiated an ongoing discussion around how to regulate "automated decision-making" and what principles should guide it. And while not all of these regulatory efforts will directly implicate private companies, they may nonetheless provide insight for companies seeking to build consumer trust in their artificial intelligence systems or better prepare themselves for the overall direction that regulation is taking.
Lawmakers want to make sure the algorithms companies use to target ads, recruit employees and make other decisions aren't inherently biased against certain people. Sens. Ron Wyden, D-Ore., and Cory Booker, D-N.J., on Wednesday introduced legislation that would require organizations to assess the objectivity of their algorithms and correct any issues that might unfairly skew their results. As society depends on tech to make increasingly consequential decisions, the Algorithmic Accountability Act aims to create a level playing field for people of all backgrounds. Rep. Yvette Clarke, D-N.Y., introduced a companion bill in the House. Under the act, the Federal Trade Commission would compel companies to test both their algorithms and training data for any shortcomings that could lead to biased, inaccurate, discriminatory or otherwise unfair decisions.
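One concrete form such testing could take is the long-standing "four-fifths" adverse-impact check from U.S. employment-selection guidelines, which compares selection rates across groups. The sketch below applies it to invented outcome data; the groups and numbers are hypothetical, and real assessments would examine many more statistics than this one ratio.

```python
def selection_rate(outcomes):
    """Fraction of a group selected by the algorithm (1 = selected)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.
    Values below 0.8 are a conventional red flag (the four-fifths rule)."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

# Invented outcomes: 1 = selected by the algorithm, 0 = not selected
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]  # 70% selected
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 30% selected

ratio = adverse_impact_ratio(group_a, group_b)
```

Here the ratio is about 0.43, well under the conventional 0.8 threshold, so this hypothetical screen would flag the algorithm's outcomes for closer review.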