
Computers Take the Place of Experts

Legal scholars at the Department of Law at Freie Universität work on questions of artificial intelligence and the challenges it poses in insurance law.

Dec 16, 2019

Artificial intelligence could play an increasingly important role in decisions on the amount of time, effort, and expense insurers invest in assessing claims.
Image Credit: Shutterstock / tommaso79

Your assistant in the event of a claim – that’s how one German insurance company, VHV, advertises its car insurance app for smartphones. After an accident, the program is supposed to help the insured party settle the claim: The user enters the other person’s data, the app uses GPS to determine the location, date, and time of the accident, and users can even choose to upload photos of the damage as evidence right away.

At most insurers, agents still sit at a computer and assess claims based on experience when a vehicle suffers body damage. There is one key decision to make: Does an expert have to determine the amount of the claim, or is it cheaper overall to settle a minor claim generously without one?
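That trade-off between verification cost and payout can be sketched as a simple rule. The function, its name, and both thresholds below are invented for illustration; no insurer's actual logic is this simple.

```python
# Hypothetical sketch of the triage decision described above.
# EXPERT_FEE and SMALL_CLAIM_LIMIT are assumed values, not real figures.

EXPERT_FEE = 300.0         # assumed flat cost of commissioning an expert
SMALL_CLAIM_LIMIT = 800.0  # assumed cutoff below which paying outright is cheaper

def triage(estimated_amount: float) -> str:
    """Decide whether a claim goes to an expert or is settled directly."""
    if estimated_amount <= SMALL_CLAIM_LIMIT:
        # Settling a small claim generously can cost less than verifying it.
        return "settle"
    # For larger claims, the expert fee is worth it to pin down the amount.
    return "expert"

print(triage(250.0))   # prints "settle"
print(triage(5000.0))  # prints "expert"
```

In practice such rules would weigh many more factors, but the basic logic is the same: below some estimated amount, checking costs more than paying.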

“Insurance is undergoing a major shift right now,” says Christian Armbrüster, a professor specializing in the law of private insurance at the Department of Law at Freie Universität Berlin. He and doctoral candidate Jonathan Prill are in charge of a research project titled “Artificial Intelligence and Insurance,” which is being financed by the German Association for Insurance Science (Deutscher Verein für Versicherungswissenschaft) and also involves the participation of King’s College, in London, as an international partner.

Artificial intelligence (AI) has become a fixture in our world today. Computer programs help to find tumors in X-ray images, and on-board computers in cars and aircraft make autonomous decisions based on data according to pre-specified scenarios. Understood broadly, the term “artificial intelligence” encompasses all automated solutions that simulate human behavior, from visual pattern recognition to neural networks.

So far, the aspect most important to insurance practice has been algorithms that help agents with things like claims processing and advising prior to entering into a contract, along with setting premiums and forecasting risk. "This also raises interesting legal questions time and again," Prill says. "Who owns the data collected by something like an on-board computer in a car or a fitness tracker, and who is allowed to access them? How transparent does the use of AI by insurance companies have to be?" Prill sees this as an exciting field, and one with bright prospects for the future.

Self-learning algorithms often behave differently than expected

The background is not only data protection and privacy law, but also the tight regulations that apply in the insurance sector. The German Federal Financial Supervisory Authority (BaFin) oversees compliance with the legal provisions on aspects such as advising obligations and calculation of premiums. How are premiums set? How are losses calculated? “In the case of automatic claims adjustment, transparency reaches its limits, because the insurer won’t always be able to explain what the AI decided and why,” Prill says.

One particular challenge in this regard is artificial intelligence in a narrower sense – self-learning algorithms. These programs review the decisions they have made, learning over time from their mistakes and using this information to optimize their outcomes.

In the process, they often develop behavior that their programmers did not expect, since the programmers do not know which specific parameters go into the computer’s decision. “It’s like a baby or toddler,” Prill explains. “In most cases, adults can anticipate how the child will respond in a certain situation – and yet, a child can still behave in surprising ways at times because that child processes experiences differently than we would have thought.”
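A toy example can make this concrete: a program that adjusts its own decision threshold from feedback on past cases, so that its behavior ends up depending on its "experience" rather than on a value the programmer fixed. The class, its parameters, and the feedback data are all invented for illustration; real self-learning systems are far more complex.

```python
# Hypothetical sketch: a triage rule whose threshold is *learned* from
# feedback, illustrating why its later behavior can surprise its programmer.

class SelfTuningTriage:
    def __init__(self, threshold: float = 1000.0, step: float = 50.0):
        self.threshold = threshold  # learned parameter, not fixed by the programmer
        self.step = step            # how strongly each piece of feedback shifts it

    def decide(self, estimated_amount: float) -> str:
        return "settle" if estimated_amount <= self.threshold else "expert"

    def feedback(self, estimated_amount: float, settling_was_wrong: bool) -> None:
        """Adjust the threshold based on whether a past decision turned out badly."""
        decision = self.decide(estimated_amount)
        if decision == "settle" and settling_was_wrong:
            self.threshold -= self.step  # was too generous: settle less often
        elif decision == "expert" and not settling_was_wrong:
            self.threshold += self.step  # expert was unnecessary: settle more often

model = SelfTuningTriage()
# Invented feedback: three settled claims that turned out to be wrong decisions.
for amount, wrong in [(900.0, True), (950.0, True), (900.0, True)]:
    model.feedback(amount, wrong)
print(model.decide(900.0))  # prints "expert": a claim it would initially have settled
```

After three pieces of feedback, the threshold has drifted from 1000 to 850, so the same claim is now routed differently. Whoever wrote the program can no longer read the decision directly out of the code; it sits in a parameter shaped by the data.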

The rulings of the Court of Justice of the European Union (CJEU) also present tough challenges for software developers, as statistics and law sometimes make for an uneasy fit. For example, the CJEU has prohibited insurers from charging different premiums by gender. This means that even though women are proven to cause less damage when driving, they can no longer receive cheaper car insurance rates than men.

In turn, private pension insurers are forbidden to offer men lower premiums than women – although in statistical terms, women live longer, so they give rise to higher pension expenses. “The reason given for this was that women’s higher life expectancy does not necessarily depend on their sex, but rather their behavior. In statistical terms, women go for check-ups more regularly, drink less alcohol, and eat a healthier diet,” Armbrüster explains. “But according to this case law, gender cannot lead to different treatment, even indirectly, unless the insurer can prove that there is an objective justification for it. But computers don’t capture any intention to discriminate – this is a vulnerable point for them.”

For example, it has been shown that self-learning algorithms confronted with racist statements will, if the statements are frequent enough, eventually come to view this racism as "normal" and no longer classify it as negative.

And there’s another human capability that the world of data lacks, too: empathy. “Agents sometimes decide in an insured party’s favor out of goodwill or leniency,” Armbrüster explains. “Factors like past claims history or the number of policies the customer has with that insurer can be built into an algorithm. But what about individual predicaments? Honesty? Sympathy? That’s where AI hits its limits, because it has to rely on past data and can only draw associations from it. Humans, by contrast, can respond intuitively to completely new situations.”

A computer can’t do that, at least not yet. But researchers are working on that, too, Armbrüster says, for example to allow AI to analyze the voice of someone calling to report a claim. Some may welcome this development as a way to rationalize decision-making processes, while others criticize the loss of human communication. Either way, in the area of goodwill – gestures of accommodation that are not legally required – the law can hardly set limits on technical development, aside from prohibiting discrimination.


This text originally appeared in German on December 7, 2019, in the Tagesspiegel newspaper supplement published by Freie Universität.
