Ethical Use of AI in Insurance Modeling and Decision-Making

#artificialintelligence 

With the increased availability of next-generation technology and data-mining tools, insurers' use of external consumer data sets and of analytical models enabled by artificial intelligence (AI) and machine learning (ML) is expanding rapidly. Insurers have initially targeted key business areas such as underwriting, pricing, fraud detection, marketing, distribution and claims management, leveraging these innovations to strengthen risk management, grow revenue and improve profitability.

At the same time, regulators worldwide are intensifying their focus on the governance and fairness challenges these complex tools present, specifically the potential for unintended bias against protected classes of people. In the United States, the Colorado Division of Insurance recently issued a first-in-the-nation draft regulation to support implementation of a 2021 law passed by the state's legislature.[1] That law (SB21-169) prohibits life insurers from using external consumer data and information sources (ECDIS), or employing algorithms and models that use ECDIS, where the resulting impact is unfair discrimination against consumers on the basis of race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity or gender expression.[2]
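One common way insurers and regulators probe models for the kind of disparate outcomes described above is a group-level outcome-rate comparison. The sketch below is purely illustrative: the synthetic data, the group labels, and the 0.8 threshold (the "four-fifths rule" borrowed from US employment-testing practice) are assumptions for demonstration, not anything prescribed by SB21-169 or the Colorado draft regulation.

```python
# Illustrative disparate-impact check on model decisions.
# All data is synthetic; the 0.8 threshold is an assumed rule of thumb,
# not a standard mandated by SB21-169 or any insurance regulation.

def disparate_impact_ratio(decisions, groups, reference_group):
    """Ratio of each group's favorable-outcome rate to the reference group's rate."""
    rates = {}
    for g in set(groups):
        outcomes = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(outcomes) / len(outcomes)
    return {g: rate / rates[reference_group] for g, rate in rates.items()}

# Synthetic decisions: 1 = application approved, 0 = declined.
decisions = [1, 1, 0, 1, 1, 0, 1, 0, 0, 1]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratios = disparate_impact_ratio(decisions, groups, reference_group="A")
for g, r in sorted(ratios.items()):
    flag = "flag for review" if r < 0.8 else "ok"
    print(f"group {g}: impact ratio {r:.2f} ({flag})")
# group A: impact ratio 1.00 (ok)
# group B: impact ratio 0.50 (flag for review)
```

A ratio well below 1.0 for a group does not by itself prove unfair discrimination, but it is the sort of quantitative signal a governance process would escalate for actuarial and legal review.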
