August 13, 2018

Big Data Is a Big Deal: A Brief Guide to Data Analytics and the Insurance Industry

Admit it: you have a nagging feeling that you should learn more about the insurance industry’s use of big data and algorithms, but you don’t know where to begin. Relax. Here are six questions (and answers) to help you get started.

What’s the big deal with big data and algorithms?

Insurers are collecting vast amounts of data and then using mathematical models called algorithms to make important decisions based on the data. The data may come from a variety of sources, such as social media, telematics devices in cars, public records, motor vehicle reports, credit information and wearable devices (like Fitbits). Some insurers develop their own algorithms, while others purchase algorithms from third-party vendors.

The allure of big data and algorithms is that they enable insurers to discover previously unknown connections, patterns and trends, which can facilitate better (and faster) marketing, underwriting, rating and claims decisions and help root out potential fraud. Read that sentence again, because it explains why the industry is pouring millions of dollars into insurtech.

All of this is pretty cool, but there are significant regulatory, litigation and reputational risks to keep in mind. In particular, using big data and algorithms can reinforce and perpetuate discrimination in ways neither intended nor foreseen by your (or your client’s) data team.

What do regulators think?

Insurance regulators want to be supportive of innovation, but they also want to make sure that innovation doesn’t harm consumers or violate existing laws. As Wisconsin Insurance Commissioner Ted Nickel explained at the massive InsureTech Connect conference held last fall, “From a regulator’s perspective, you need to understand how things work. Your mind goes to dark places when you don’t understand.”

What could possibly go wrong?

Several things.

Data can be incomplete, inaccurate or outdated. It also can contain embedded bias in ways that are not always obvious. For example, let’s say that we want to develop a computer algorithm that would identify potential presidential candidates scientifically and objectively, free of the biases and prejudices that can creep into human decision making. A reasonable place to start would be to agree on some definition of success (like growing the economy, backing landmark legislation, keeping us safe, whatever). We would then look at the data on past presidents, determine which presidents met our definition of success, identify characteristics that all or most of the successful presidents had in common, and use these predictors of success to identify potential candidates worthy of consideration. Guess what? No matter how we define success, the computer will almost certainly tell us to look for a white Protestant man, because of the historical biases embedded in our data.
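
To make this concrete, here is a minimal Python sketch with invented data. Nothing below comes from a real dataset; it simply shows that whatever success label we choose, the "learned" profile mirrors whatever traits dominate the historical record.

```python
# Minimal illustration with invented data: however we define "success,"
# the learned profile simply mirrors the traits that dominate history.
from collections import Counter

# Hypothetical records of past presidents: (traits, met_success_definition)
past_presidents = [
    ({"gender": "male", "race": "white", "religion": "Protestant"}, True),
    ({"gender": "male", "race": "white", "religion": "Protestant"}, True),
    ({"gender": "male", "race": "white", "religion": "Protestant"}, False),
    ({"gender": "male", "race": "white", "religion": "Catholic"}, True),
]

# "Learn" the profile of success by counting traits among the successes.
profile = Counter()
for traits, successful in past_presidents:
    if successful:
        profile.update(traits.values())

print(profile.most_common())
# [('male', 3), ('white', 3), ('Protestant', 2), ('Catholic', 1)]
# The model recommends a white Protestant man, not because those traits
# cause success, but because the historical pool contains little else.
```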

Sometimes the problem lies with the algorithm, rather than the data. Algorithms may fail to produce results that are reliably accurate. (Even an algorithm that’s right much of the time won’t always be correct.) Other algorithms are so complex that the developers may not be able to tell regulators or consumers how a particular decision was reached. In addition, algorithms may base their decisions on race or other factors that cannot lawfully be taken into account. (The challenge is that certain data that feeds into the algorithm could be highly correlated with race or another prohibited factor, but the correlation wouldn’t necessarily be evident without testing.)
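
The point about testing lends itself to a short illustration. Below is a minimal sketch, using synthetic data and hypothetical variable names, of the simplest version of a proxy check: measuring whether a facially neutral rating input is strongly correlated with a protected attribute. Real proxy analysis is considerably more involved, but the intuition is the same.

```python
# Synthetic-data sketch of a proxy test: is a facially neutral rating
# input (here, a made-up "territory score") correlated with a protected
# attribute? All names and numbers are hypothetical.
import random

random.seed(42)

protected = [random.randint(0, 1) for _ in range(1000)]  # 0/1 attribute
territory_score = [0.8 * p + 0.2 * random.random() for p in protected]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(protected, territory_score)
print(f"correlation = {r:.2f}")  # near 1.0 here: a red flag worth review
```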

These problems with data and algorithms could harm consumers, and the harm could be widespread.

How has the National Association of Insurance Commissioners (NAIC) responded to big data challenges?

The concerns articulated by Commissioner Nickel last year are precisely why the NAIC formed a Big Data Working Group. The mission of the Working Group is “to assist state insurance regulators in obtaining a clear understanding of what data is collected, how it is collected, and how it is used by insurers and third parties in the context of marketing, rating, underwriting, and claims.” The initial focus is on auto and homeowner’s insurance, but regulators have made it clear that they will soon look at other lines. In fact, the Working Group heard a presentation on life insurers’ use of big data at the Boston NAIC meeting in August.

One of the first things the Working Group did was to research the existing laws addressing insurers’ use of consumer and non-insurance data, particularly as it relates to rating and claims handling. Key findings included the following:

  • Insurers cannot refuse to insure or limit the amount of coverage available to an individual because of the sex, marital status, race, religion or national origin of the individual.
  • Rates can’t be excessive, inadequate or unfairly discriminatory. (A rate is unfairly discriminatory if differences in price do not fairly reflect differences in expected losses and expenses; a simplified illustration follows this list.)
  • Risk classifications cannot be based on the race, creed, national origin or the religion of the insured.
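
For the rating standard above, here is a simplified arithmetic illustration with invented numbers; actual rate review also involves expenses, credibility and actuarial judgment.

```python
# Hypothetical numbers only; real rate review is far more involved.
group_a = {"premium": 1000.0, "expected_cost": 500.0}
group_b = {"premium": 1500.0, "expected_cost": 600.0}

price_ratio = group_b["premium"] / group_a["premium"]             # 1.5
cost_ratio = group_b["expected_cost"] / group_a["expected_cost"]  # 1.2

# Group B pays 50% more but is expected to cost only 20% more, so on
# these invented numbers the price difference does not fairly reflect
# the difference in expected losses, a sign of unfair discrimination.
print(f"price ratio {price_ratio:.2f} vs cost ratio {cost_ratio:.2f}")
```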

The Big Data Working Group will consider whether additional consumer protections are warranted, but the key takeaway is that existing law provides regulators with the basic tools they need to address insurers whose data or algorithms run amok.

What does the future hold?

In late July, NPR aired a story on how health insurers’ use of big data could harm consumers. It was exactly the sort of story that can get the attention of regulators, legislators and class action lawyers. There will be more.

Like it or not, regulators’ questions and concerns about big data and algorithms are not going away. Sooner or later, the NAIC will come up with a way to protect consumers without overly stifling innovation. And, if the regulators are slow to act, there’s a good chance that the plaintiffs’ lawyers will make some noise of their own.

It’s possible that states will enact data privacy legislation that, while not specifically aimed at insurers, nevertheless could significantly impact their ability to collect and use consumer data. The California Consumer Privacy Act of 2018 (signed into law on June 28) is a good example, as many are calling it the strictest online privacy law in the country. Depending on how the midterm elections shake out, it’s even possible that Congress could take up legislation — perhaps inspired by Europe’s General Data Protection Regulation (GDPR) or the latest Facebook revelation — that could be broad enough to impact insurers.

What should insurers do?

First, know what the law requires and keep up with developments at the NAIC and elsewhere.

Second, take a hard look at your company’s use of data and algorithms. Deficiencies in data and algorithms present regulatory, litigation and reputational risk, but can be difficult to detect. We think there’s enough risk here that insurers should consider independent testing and validation of their data and algorithms to identify any problems before they come home to roost.

Here are some of the questions that insurers should consider asking themselves:

  • Are we permitted to use the information that we’re collecting?
  • Is our data accurate, complete, up-to-date and free of embedded bias?
  • Does our algorithm produce reliably accurate results?
  • Do the results make sense? Can we explain them in a way that regulators and consumers will understand?
  • Are we monitoring the performance of our algorithm to make sure that it continues to operate as intended? (A simple version of this check is sketched after this list.)
  • Does our algorithm comply with the law? In particular, does it rely on proxies for race or other factors that cannot lawfully be taken into account?
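
On the monitoring question, here is a minimal sketch, with hypothetical rates and an arbitrary tolerance, of one basic check: comparing a recent decision rate to the rate observed when the model was validated and flagging material drift for human review.

```python
# Hypothetical monitoring check: flag when a recent decision rate drifts
# beyond a tolerance from the rate observed when the model was validated.
def drifted(baseline_rate: float, recent_rate: float,
            tolerance: float = 0.05) -> bool:
    """Return True if the recent rate moved more than `tolerance`."""
    return abs(recent_rate - baseline_rate) > tolerance

baseline = 0.72  # approval rate at validation (invented)
recent = 0.61    # approval rate over the last month (invented)

if drifted(baseline, recent):
    print("Drift detected: route the model to compliance/actuarial review.")
```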

That’s probably enough for now, but stay tuned. There’s surely more to come.

A version of this article was also published in the Federation of Regulatory Counsel Journal.

The material contained in this communication is informational, general in nature and does not constitute legal advice. The material contained in this communication should not be relied upon or used without consulting a lawyer to consider your specific circumstances. This communication was published on the date specified and may not include any changes in the topics, laws, rules or regulations covered. Receipt of this communication does not establish an attorney-client relationship. In some jurisdictions, this communication may be considered attorney advertising.
