How strict should government AI rules be for insurance?

by Celia

Some players want new state insurance guidelines on artificial intelligence to fit like a billowy wool caftan, while others want them to fit like a tight steel belt.

The wool versus steel battle is playing out in comments on a new model bulletin being drafted by the National Association of Insurance Commissioners’ Innovation, Cybersecurity and Technology Committee.

The committee posted a second draft of the model bulletin on its section of the NAIC website last week. Comments on the new draft are due by 6 November.

Scott Kosnoff, an insurance law specialist at Faegre Drinker, said in an email interview that the NAIC’s AI regulatory efforts cover much of the same ground as Colorado’s new regulation, which prohibits the use of “external consumer data and information sources” that result in race-based discrimination.

But “the Colorado law takes a prescriptive approach,” Kosnoff said. “The NAIC bulletin sets regulatory expectations rather than requirements.”

What it means: Many of the battles over how life and annuity issuers’ AI systems behave may initially look like battles over how loose and flexible the rules should be, rather than over what the goals of the rules should be.

All players seem to agree that, in principle, life and annuity issuers should not use AI or other new technologies to discriminate unfairly.

The nuts and bolts: Federal law leaves insurance regulation to the states. The NAIC, a group of state insurance regulators, can set voluntary guidelines but generally cannot impose rules on its own.

The new draft model bulletin is a revision of an earlier version released by the Innovation Committee in July and included in a meeting packet circulated in August.

The bulletin is part of a long-running conversation among regulators, insurers, insurance groups and consumer groups about insurers’ efforts to use new types of data and data analytics in the marketing, underwriting, pricing and administration of life and annuity products.

In 2019, for example, New York sent a letter warning insurers to be prepared to show that any analytical strategies they use in new accelerated life underwriting programmes are reasonable, fair and transparent.

Colorado regulators approved the life insurance nondiscrimination rule in September.

Consumer advocate Birny Birnbaum has been speaking about the need for AI anti-discrimination rules at NAIC events for years.

The NAIC’s new draft bulletin reflects the AI principles the NAIC adopted in 2020.

The arguments: The Innovation Committee has posted a series of comment letters on the first draft of the bulletin; the letters reflect many of the issues that have shaped the drafting process.

Sarah Wood of the Insured Retirement Institute was one of the commenters. She noted that insurers may have to make do with what tech companies are willing and able to provide, and urged the committee to “continue to approach this issue in a thoughtful way, so as not to create an environment where only one or two providers are available, while others that may otherwise be compliant are excluded from use by the industry”.

Scott Harrison, co-founder of the American InsurTech Council, welcomed the flexible, principles-based approach evident in the first draft of the bulletin, but suggested that the committee find ways to encourage states to get on the same page and adopt the same standards. “Specifically, we have concerns that a particular AI process or business use case may be considered appropriate in one state and an unfair trade practice in another,” Harrison said.

Michael Conway, Colorado’s insurance commissioner, suggested that the Innovation Committee might be able to get life insurers themselves to support many types of strong, specific rules. “In general, we believe we have reached a high degree of consensus with the life insurance industry on our governance regulation,” he said. “In particular, an increased emphasis on insurer transparency around decisions made using AI systems that impact consumers could be an area of focus.”

Birnbaum’s Centre for Economic Justice said the first draft of the bulletin was too loose. “We believe that the process-oriented guidance presented in the bulletin will do nothing to improve regulators’ oversight of insurers’ use of AI systems or their ability to identify and stop unfair discrimination resulting from these AI systems,” the centre said.

John Finston and Kaitlin Asrow, deputy superintendents at the New York State Department of Financial Services, supported the idea of adding rigorous, specific, data-driven fairness testing strategies, such as looking at “adverse impact ratios,” or comparisons of rates of favourable outcomes between protected classes of consumers and members of control groups, to identify any disparities.
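To make the concept concrete, here is a minimal sketch of how an adverse impact ratio might be computed. The data, function names and example figures are hypothetical illustrations, not part of any NAIC, Colorado or New York specification; in US employment law, ratios below 0.8 are often treated as evidence of adverse impact (the “four-fifths rule”), but insurance regulators have not settled on a specific threshold.

```python
# Hypothetical illustration of an adverse impact ratio: the rate of
# favourable outcomes for a protected class divided by the rate for a
# control group. Values well below 1.0 may flag a disparity worth review.

def favourable_rate(outcomes: list[bool]) -> float:
    """Share of decisions in a group that were favourable (e.g. approvals)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected: list[bool], control: list[bool]) -> float:
    """Ratio of favourable-outcome rates: protected class vs. control group."""
    return favourable_rate(protected) / favourable_rate(control)

# Hypothetical underwriting decisions (True = favourable outcome).
protected_class = [True, False, True, False, False, True, False, False]
control_group = [True, True, True, False, True, True, False, True]

ratio = adverse_impact_ratio(protected_class, control_group)
print(f"Adverse impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50 here
```

In practice, any such test would need decisions about how protected classes are identified or inferred, which outcomes count as favourable, and what disparity level triggers regulatory scrutiny; the comment letter points to the comparison itself, not to a particular implementation.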
