Artificial intelligence is being used for almost everything these days, including insurance.
Insurance companies are using AI to analyze the vast amounts of data behind their actuarial work. That can bring both good and bad outcomes.
The Maryland Insurance Administration (MIA) is trying to curb some of those downsides and protect consumers with new regulations requiring companies to establish oversight and risk management controls and to provide a degree of transparency about their use of AI.
“The balance that we have to strike as regulators is understanding the risks that come that are unique to this technology,” said Kathleen Birrane, the MIA commissioner. “AI-driven technologies, they present their own unique set of risks.
“We are expecting companies to develop a written program that identifies how they are governing use of AI where it relates to consumer decision making, and that they document in writing the risk management protocols and controls that they have in place. We think the approach should be proportionate in the sense that we expect companies to align the degree of control with the degree of harm. It is a very common-sense process.”
AI adoption has risen sharply across many industries because the technology can quickly handle administrative tasks and crunch numbers.
Birrane says insurance companies have also put the technology to good use, such as making insurance more accessible.
“AI is being utilized in those kinds of cases as a very specific way in which to make sure that risk is better understood,” she said. “For a very long time in this country, no companies offered flood insurance because it was a risk that nobody wanted to take.”
Now, because AI models can better map flood plains, some commercial flood insurance is available, Birrane said.
However, AI comes with downsides too. Algorithms can carry biases that discriminate against consumers, intentionally or inadvertently, Birrane said.
High-profile examples include facial recognition programs that fail to reliably detect Black people’s faces and Amazon’s hiring software, which discriminated against women.
Birrane says the MIA’s guidelines set principles that require companies to think through the implications of their AI and to have plans for avoiding those problems. The required governance models also give the MIA a measure of transparency if it decides to investigate an AI-related issue.
“When we receive a consumer complaint that says, ‘This telematics program doesn’t make any sense, the information is consistently inaccurate,’ then I can go with our market conduct team to that company and say, ‘I want all of your models as they relate to your telematics program. We want to understand what it is that is driving that score,’” she said.
The MIA will be able to ask a company for things like the testing it has in place, the models it uses, how a program is written and the reasoning that went into it.