- AI ‘must be tempered with the human perspective’ - January 13, 2025
By Tony Poland, LegalMatters Staff • While artificial intelligence (AI) can streamline the disability insurance claim process, an overreliance on the technology is fraught with potential pitfalls, says Ontario disability insurance lawyer Courtney Mulqueen.
“I am all for anything that can speed up the application process so that claimants who need an income can get what they are entitled to,” says Mulqueen, principal lawyer of Mulqueen Disability Law Professional Corporation. “However, there are obvious and foreseeable risks to replacing the human element with AI. There would need to be checks and balances.
“I have dealt with insurers for years and my fear is artificial intelligence could very quickly become another tool that they use to deny claims instead of properly assessing and administering them,” she tells LegalMattersCanada.ca.
Mulqueen says the use of AI has gained traction throughout the insurance industry. Artificial intelligence enables insurers to streamline their operations by performing time-consuming and tedious tasks typically done by underwriters and evaluators.
‘The cost savings for insurance companies are obvious’
“The cost savings for insurance companies are obvious,” Mulqueen says. “And there are some advantages to claimants. Many of my clients have experienced extended delays while waiting for a resolution because the people evaluating their claim are overworked.”
Of course, there can also be problems with claims adjusters who have their own biases, she says.
“Sometimes there can be a personality conflict that plays out,” Mulqueen says. “But relying solely on AI to decide whether a claim should be approved is dehumanizing. Especially when you consider a disability is so incredibly personal and unique, not to mention subjective in many cases.”
As well as helping to expedite the claims process, AI algorithms can pick up irregularities in claims submissions and detect potential cases of fraud, proponents say.
“There are certainly good reasons to integrate artificial intelligence into the system,” Mulqueen says. “My concern is how far the insurance industry goes in removing the human element for the sake of the bottom line.”
There seems to be little doubt about the ability of AI to replace humans in the insurance industry, says Mulqueen. She points to a speech during the Future of Insurance 2024 event in Toronto last month. Canadian Underwriter reported that brokerage CEO Stephen Billyard observed AI has taken “a whole lot of underwriters’ jobs out of our business.”
“Certainly, that’s not what we’re seeking…but the process for us has become so much more efficient that, in fact, yes, it is eliminating jobs,” Billyard said.
Without human oversight, deserving claimants could be denied benefits, Mulqueen says.
Taking the human component out of the equation is a danger
“There is a real danger when you take the human component out of the equation,” she says. “For example, I deal with insurance companies who use medical guidelines for recovery, and they are general and fail to take into account the subjective elements of disability. The guideline will say a person should be recovered from an illness or injury by a certain point in time based solely on statistics. They then use those guidelines to terminate claims.
“But at least now there is a human element involved,” Mulqueen adds. “AI bases its assessment solely on the information available to it, such as the medical guidelines. It stands to reason that an artificial intelligence-driven system could lead to an increase in claim denials.”
An outcry about the rise of AI-powered claim denials came to the forefront late last year with the murder of UnitedHealthcare CEO Brian Thompson, who was shot in New York. It was reported that the words “deny,” “defend” and “depose” were written on the shell casings, terms critics say are used to describe how the insurance industry denies claims.
Business news website Quartz stated the murder “sparked public scrutiny of health insurers, especially regarding their use of AI in evaluating claims.”
According to Quartz, a report from the U.S. Senate Permanent Subcommittee on Investigations in October revealed that U.S. insurers have been using AI-powered tools to deny some claims from Medicare Advantage plan subscribers.
The report also stated UnitedHealthcare’s denial rate for post-acute care for people with Medicare Advantage plans rose from 10.9 per cent in 2020 to 22.7 per cent in 2022. The rise coincided with the insurer’s implementation of an AI model, Quartz reported.
More difficult to dispute claims
Mulqueen says she fears that if AI becomes the exclusive evaluator of claims, denials will become more difficult to dispute.
“It could give insurers a huge advantage from a legal perspective,” she says. “Challenging these denials is going to be problematic because AI is going to be relying on information that the insurance company claims is objective. ‘It is AI. It can’t be wrong.’”
Pointing to a newly enacted law in California, Mulqueen says legislation might be needed in Canada to limit the influence of artificial intelligence.
MSN reported that while The Physicians Make Decisions Act does not prohibit the use of AI, the law mandates that human judgment must remain central to coverage decisions. Artificial intelligence tools cannot be used to deny, delay or alter health care services that are deemed necessary by doctors.
“An algorithm cannot fully understand a patient’s unique medical history or needs and its misuse can lead to devastating consequences,” the law’s primary author, state Sen. Josh Becker, told MSN. “This law ensures that human oversight remains at the heart of health care decisions, safeguarding Californians’ access to the quality care they deserve.”
Nineteen other U.S. states are now looking at similar legislation.
“A law like that in Canada would be helpful to prevent insurance companies from relying exclusively on AI to assess claims,” Mulqueen says. “The way the situation stands now, I believe the insurers derive much more benefit from the use of artificial intelligence than claimants do.”
Ethics guidelines would be helpful
She says ethics guidelines found in the National Library of Medicine report Artificial Intelligence in Evaluation of Permanent Impairment: New Operational Frontiers could help to prevent a depersonalization of the decision-making process.
“While the use of AI models can offer significant advantages in terms of efficiency and standardization of assessment, it is crucial to carefully consider the potential risks and implications arising from this practice,” the report states. “The absence of direct human interaction may reduce the individual undergoing assessment to objective and numerical data, disregarding the complexity and uniqueness of the person. It is important to ensure that the use of AI does not result in dehumanizing treatment towards the individuals involved and does not disregard the subjective suffering of the person.
“The issue of transparency and interpretability of AI models emerges as an ethical concern. Since machine learning algorithms often operate in complex and non-linear ways, understanding how decisions are made and which factors influence those decisions can be challenging,” the report adds. “This raises concerns about accountability and the possibility of challenging assessments made by AI.”
The author says it is therefore necessary to ensure that algorithms are “developed transparently and that the individuals involved have access to clear and understandable explanations of the decisions made.”
How information is gathered and used is another important ethical concern, says Mulqueen.
‘AI may perpetuate and amplify these biases’
“If the data used to train such models are incomplete or biased, AI may perpetuate and amplify these biases, leading to injustices in assessing personal damage,” according to the report. “Ensuring balanced and representative data collection, as well as careful validation of AI models, is essential to avoid systemic discrimination.”
The author says the final decision must remain with an “expert evaluator,” adding it is also “imperative to ensure that AI systems respect the privacy of individuals being assessed and maintain data security.”
Mulqueen says AI has its place but it must not be the final arbiter in health care and disability claims.
“It should be utilized up to a point. AI could certainly help expedite claims by performing tedious tasks such as summarizing medical records,” she says. “To prevent bias or discrimination, AI must be guided by ethical considerations. Systems should be reviewed at regular intervals, and expert human oversight will be essential to ensure fairness and prevent the depersonalization of the process, particularly when it comes to assessing subjective or ‘invisible’ disabilities.
“AI can be an important tool but, in the end, artificial intelligence must be tempered with the human perspective.”