Artificial Intelligence and Claim Settlements

I wrote some time ago about concerns I had relating to intervention by insurers. Specifically, I was troubled by the scripts they routinely use to extract admissions from claimants: admissions that probably appear innocuous when given, but take on a new force when placed in front of a District Judge.


Some subscribers, though not many in fairness, provided me with transcripts of intervention calls now relied on by insurers in litigation. I suggested that CHOs and Solicitors should always demand copies of the call recordings in those cases, to verify that the transcript, or the minute that the insurer purportedly relies upon, is as it seems.


The purpose of wanting the call recordings was that I had identified an expert linguist, an academic, who was prepared to review the recordings and opine as to whether any such concessions were obtained by coercion or distress. That is something that is rarely evident from reading a written transcript, but the emotion in a claimant’s voice, or the persistence in the questioning of the insurer’s call handler, adds a level of granular interpretation. It allows the expert to determine whether the answer the claimant gave was a considered and honest response to the question asked, or whether they just ‘nodded verbally’, because that is what a lot of us do, especially when we did not expect to be mithered by a call handler and just want the call to end.


I am still keen to take this project further, but the sample I had, in terms of the number of call recordings, was insufficient to engage somebody to do the work. As ever, if there is demand amongst the community then I am happy to go again and put some resource and effort behind this.


But why am I keen on this?


The answer is because insurers, and organisations like Verisk and the Davies Group, have evolved a tactic which has the potential to morph into a more insidious means of defeating valid and properly presented claims. I cannot yet share the details with you, but a few nights ago I had reported to me what I believe to be a novel approach from a top five insurer, one as offensive to the principle of justice as it is possible to be. A claimant was represented by a solicitor, yet the insurer wrote to the claimant directly, reported conversations it had held with his own insurer about the reporting of his claim and the services that insurer could provide, and then posed additional questions it wanted him to answer as to why he was not making use of those services and how he came to appoint the solicitor. Many of you will know my style of presentation, and if this document wasn’t being shared with almost 400 people I would be writing another paragraph littered with the F word (and I don’t mean fraud) to highlight the lack of any ethical or professional competence in the approach of that insurer.


Whilst these issues, i.e. intervention and novel attempts to subvert legitimate claims, are works in progress for insurers, a quick glance across the Atlantic Ocean to the US market is very helpful in predicting what might happen here next in the battle being waged by insurers to reduce their cost of claims. That is particularly relevant when Verisk and the Davies Group are both US-owned entities. Notably, Verisk published an article last September in which they argued that “we can envisage the acceleration of self-service automated intelligent claims as the ‘new normal’ – enabling policyholders to report claims, receive automated outcomes and learn about replacement or repair options all from one device.”


A report by CNN last week, kindly shared with me by Jonathan McKeown of JMK Solicitors, sets out the path which insurers may be about to follow, and it is quite chilling. I have reprinted the article below:


A key part of insurance company Lemonade's pitch to investors and customers is its ability to disrupt the normally staid insurance industry with artificial intelligence. It touts friendly chatbots like AI Maya and AI Jim, which help customers sign up for policies for things like homeowners' or pet health insurance, and file claims through Lemonade's app. And it has raised hundreds of millions of dollars from public and private market investors, in large part by positioning itself as an AI-powered tool.


Yet less than a year after its public market debut, the company, now valued at $5 billion, finds itself in the middle of a PR controversy related to the technology that underpins its services.

On Twitter and in a blog post on Wednesday, Lemonade explained why it deleted what it called an "awful thread" of tweets it had posted on Monday. Those now-deleted tweets had said, among other things, that the company's AI analyses the videos that users submit when they file insurance claims for signs of fraud, picking up "non-verbal cues that traditional insurers can't, since they don't use a digital claims process."


The deleted tweets, which can still be viewed via the Internet Archive's Wayback Machine, caused an uproar on Twitter. Some Twitter users were alarmed at what they saw as a "dystopian" use of technology, as the company's posts suggested its customers' insurance claims could be vetted by AI based on unexplained factors picked up from their video recordings. Others dismissed the company's tweets as "nonsense."


"As an educator who collects examples of AI snake oil to alert students to all the harmful tech that's out there, I thank you for your outstanding service," Arvind Narayanan, an associate professor of computer science at Princeton University, tweeted on Tuesday in response to Lemonade's tweet about "non-verbal cues."


Confusion about how the company processes insurance claims, caused by its choice of words, "led to a spread of falsehoods and incorrect assumptions, so we're writing this to clarify and unequivocally confirm that our users aren't treated differently based on their appearance, behaviour, or any personal/physical characteristic," Lemonade wrote in its blog post Wednesday.


Lemonade's initially muddled messaging, and the public reaction to it, serve as a cautionary tale for the growing number of companies marketing themselves with AI buzzwords. The episode also highlights the challenges presented by the technology: while AI can act as a selling point, such as by speeding up a typically fusty process like the act of getting insurance or filing a claim, it is also a black box. It's not always clear why or how it does what it does, or even when it's being employed to make a decision.


In its blog post, Lemonade wrote that the phrase "non-verbal cues" in its now-deleted tweets was a "bad choice of words." Rather, it said it meant to refer to its use of facial-recognition technology, which it relies on to flag insurance claims that one person submits under more than one identity — claims that are flagged go on to human reviewers, the company noted.


The explanation is similar to the process the company described in a blog post in January 2020, in which Lemonade shed some light on how its claims chatbot, AI Jim, flagged efforts by a man using different accounts and disguises in what appeared to be attempts to file fraudulent claims. While the company did not state in that post whether it used facial recognition technology in those instances, Lemonade spokeswoman Yael Wissner-Levy confirmed to CNN Business this week that the technology was employed then to detect fraud.


Though increasingly widespread, facial-recognition technology is controversial. The technology has been shown to be less accurate when identifying people of colour. Several Black men, at least, have been wrongfully arrested after false facial recognition matches.


Lemonade tweeted on Wednesday that it does not use and isn't trying to build AI "that uses physical or personal features to deny claims (phrenology/physiognomy)," and that it doesn't consider factors such as a person's background, gender, or physical characteristics in evaluating claims. Lemonade also said it never allows AI to automatically decline claims. But in Lemonade's IPO paperwork, filed with the Securities and Exchange Commission last June, the company wrote that AI Jim "handles the entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention".


Wissner-Levy told CNN Business that AI Jim is a "branded term" the company uses to talk about its claims automation, and that not everything AI Jim does uses AI. While AI Jim uses the technology for some actions, such as detecting fraud with facial recognition software, it uses "simple automation" — essentially, pre-set rules — for other tasks, such as determining if a customer has an active insurance policy or if the amount of their claim is less than their insurance deductible.


"It's no secret that we automate claim handling. But the decline and approve actions are not done by AI, as stated in the blog post," she said.


When asked how customers are supposed to understand the difference between AI and simple automation if both are done under a product that has AI in its name, Wissner-Levy said that while AI Jim is the chatbot's name, the company will "never let AI, in terms of our artificial intelligence, determine whether to auto reject a claim."


"We will let AI Jim, the chatbot you're speaking with, reject that based on rules," she added.


Asked if the branding of AI Jim is confusing, Wissner-Levy said, "In this context I guess it was." She said this week is the first time the company has heard of the name confusing or bothering customers.
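
On the technology itself, it is worth making the article’s description concrete. Below is a minimal sketch, in Python, of the kind of identity-matching pipeline CNN describes: a face embedding taken from a new claim video is compared against embeddings from claims filed under different identities, and any match is flagged for human review rather than auto-declined. To be clear, this is my own illustration, not Lemonade’s actual system; the embedding source, the similarity threshold and every name in the code are assumptions made for the example.

```python
import numpy as np

# Assumption: in a real system each claim video would be reduced to a
# fixed-length face embedding by a trained model. Here embeddings are
# plain numpy vectors so that the sketch is self-contained and runnable.

SIMILARITY_THRESHOLD = 0.92  # invented value; a real system would tune this


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Standard cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def flag_possible_duplicate_identity(new_embedding, prior_claims):
    """Compare a new claimant's embedding against embeddings from claims
    filed under *different* identities. Matches are flagged for a human
    reviewer -- per the article, flagged claims are not auto-declined."""
    flags = []
    for claim_id, identity, embedding in prior_claims:
        score = cosine_similarity(new_embedding, embedding)
        if score >= SIMILARITY_THRESHOLD:
            flags.append((claim_id, identity, round(score, 3)))
    return flags  # a non-empty list means: route the claim to human review


# Toy usage: the same face appearing under another name is caught.
rng = np.random.default_rng(0)
face = rng.normal(size=128)
prior = [("CLM-001", "other identity", face + rng.normal(scale=0.01, size=128))]
print(flag_possible_duplicate_identity(face, prior))
```

The article also notes why exactly this class of technology is controversial: its error rates are not uniform across the population, which is why routing flagged claims to a human, rather than declining them automatically, matters.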

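The distinction Wissner-Levy draws between AI and “simple automation” is also easier to grasp in code. Here is a minimal sketch of the pre-set rules she describes, again with invented names rather than anything Lemonade has published:

```python
from dataclasses import dataclass


@dataclass
class Claim:
    policy_active: bool  # does the customer hold a live policy?
    amount: float        # value of the claim
    deductible: float    # the policy deductible (excess)


def triage_claim(claim: Claim) -> str:
    """Deterministic, pre-set rules of the kind the article describes:
    no model, no inference, just checks anyone can audit."""
    if not claim.policy_active:
        return "decline: no active policy"
    if claim.amount <= claim.deductible:
        return "decline: claim amount within the deductible"
    return "escalate: pass to assessment"


# Example: a claim worth less than the deductible is declined by rule alone.
print(triage_claim(Claim(policy_active=True, amount=150.0, deductible=250.0)))
```

Rules like these are at least auditable. The difficulty the article identifies is that, under a single “AI Jim” brand, a policyholder cannot tell which decisions were made this way and which involved a model.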
All food for thought. I think the issue of intervention call recordings, and the evolving strategy from that top five insurer, if it is indeed an evolving strategy, is something you should absolutely keep an eye on as insurers purport to use Artificial Intelligence and other novel tactics to make life more difficult.
