The use of AI platforms such as ChatGPT is rising rapidly, and there is clear potential for the captive industry to put the technology to work. However, the risks shouldn’t be taken lightly
Amid public concern about the rapid development of artificial intelligence (AI) platforms, many industries have been left wondering whether AI platform ChatGPT could benefit them.
Insurtech within captive insurance is no exception. In this competitive landscape, it is essential for captive insurers to embrace new tools and methods to increase the efficiency of their business operations.
However, the industry has witnessed a general trend of insurance companies being slow to expand their technological capabilities.
OpenAI’s ChatGPT is a chatbot that uses a generative pre-trained transformer (GPT) trained on massive amounts of data. It processes natural language to provide users with a human-like response. It has more than 100 million users, according to Reuters (as of January 2023).
Captive insurers can harness new technologies and help organisations innovate, but these efforts must be aligned with the goals of the parent company.
Alex Gedge, senior captive consultant at Hylant, comments: “I’ve not seen much adoption of ChatGPT in the industry as yet, though there has certainly been discussion. Brokers and consulting teams are researching and exploring ways to adapt and adopt new technologies. This research is ongoing, particularly in captives where there is a focus on emerging risks and creating insurance solutions for markets that do not have them yet. For example, where a client has a very specific risk which is not industry wide.”
Going straight to the source itself, this writer asked ChatGPT: ‘How can ChatGPT help captive insurers?’ It generated a comprehensive answer in 15 seconds.
ChatGPT gives five areas in which it can help captive insurers: risk management, underwriting support, claims processing, customer service and data analysis.
For risk management, ChatGPT writes that it “can assist captive insurers in assessing and managing risks. It can provide insights on risk identification, quantification and mitigation strategies. By analysing historical data and industry trends, ChatGPT can help captive insurers make informed decisions regarding risk exposure and insurance coverage.”
For claims processing, it “can support captive insurers in claims processing by providing ‘quick and accurate’ responses to common inquiries and automating routine tasks.”
On underwriting support and customer service, it states: “ChatGPT can assist in the underwriting process by providing guidance on policy terms, conditions and pricing. It can analyse information provided by applicants, assess risk factors and offer recommendations to underwriters. It can also serve as a ‘virtual assistant’, addressing customer enquiries about policies, premiums and coverage options.”
For data analysis, “ChatGPT can analyse vast amounts of data to enable captive insurers to make data-driven decisions.”
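To make the data analysis example concrete, the short sketch below shows one way a captive insurer might pass a handful of anonymised loss records to a GPT model and ask for a plain-language summary. It is a minimal sketch only, assuming the public OpenAI Python client and an API key in the environment; the model name, data fields and prompt are hypothetical, and the privacy and accuracy caveats discussed later in this article would apply to any real use.

```python
# Minimal sketch: asking a GPT model to summarise anonymised loss data.
# Assumes the public `openai` Python package (v1.x) and an OPENAI_API_KEY
# environment variable. Model name and data fields are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical, anonymised loss records -- no policyholder data.
claims = [
    {"line": "property", "cause": "water damage", "incurred": 42000},
    {"line": "property", "cause": "fire", "incurred": 310000},
    {"line": "liability", "cause": "slip and fall", "incurred": 18500},
]

prompt = (
    "You are assisting a captive insurer. Summarise the loss experience "
    "below in plain language and flag any concentration of risk.\n"
    f"{claims}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```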
The chatbot’s answer echoes that of Bård Myrstad, CEO of Simplifai, as he describes how insurers can utilise his company’s recently launched platform, InsuranceGPT.
InsuranceGPT is an AI GPT tool launched by Norwegian tech company Simplifai, designed specifically for insurers. It claims to be the first of its kind, offering ChatGPT-like enhanced decision-making for automated claims management while maintaining a higher degree of privacy and data security than a general AI platform.
Myrstad says: “We’ve been able to automate the first line of case reduction in the claim settlement process for new claims received. The platform is then able to look through and classify them accordingly by looking at the attachments and information submitted. It can then validate the claims against decision criteria.”
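As an illustration of the kind of first-line triage Myrstad describes, the sketch below classifies an incoming claim and checks it against simple decision criteria before routing it. It is not Simplifai’s InsuranceGPT; it is a generic, hypothetical example using the public OpenAI client, with invented categories, thresholds and claim text.

```python
# Generic sketch of first-line claims triage: classify a claim with a GPT
# model, then apply simple decision criteria to route it. This is NOT
# Simplifai's InsuranceGPT; categories, limits and claim text are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

claim_text = (
    "Policy CPT-1042. Burst pipe in the warehouse on 3 May caused "
    "stock damage estimated at 18,000. Photos and invoice attached."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Classify this claim as one of: property, liability, motor, other. "
            "Reply with the single category only.\n\n" + claim_text
        ),
    }],
)
category = response.choices[0].message.content.strip().lower()

# Hypothetical decision criteria: small property claims are fast-tracked,
# everything else goes to a human claims handler.
estimated_loss = 18000
auto_limit = 25000

if category == "property" and estimated_loss <= auto_limit:
    print("Route to automated settlement")
else:
    print("Route to a claims handler for review")
```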
Randy Sadler, principal at CIC Services, describes the platform’s adoption at his own firm. “We use ChatGPT at CIC Services, and other firms are also using it,” he says.
“We’re currently using it for outreach-related content, but we’re largely only using it in places where errors wouldn’t have severe consequences.”
He adds: “For captives, AI may prove particularly useful when drafting unique insurance policy forms with very specific conditions for coverage and exclusions. In the future, we expect to be able to create much better client interfaces — for example, interfacing with a chatbot that answers questions about a client’s captive and can even price, adding new policies on the spot.”
A wider outlook
When asked what problem he is trying to solve within the industry, Simplifai’s Myrstad takes a wider societal approach: “If you look at the underlying trends in society, it’s demography. We’re getting older, we’re living longer. Some people consider people in their 20s to be children and most of us will live beyond 75; the ratio of workers to retired people is changing significantly.
“This means that to sustain living standards, each worker needs to produce more value. That’s even before we start having to free up capacity to tackle all the new engineering challenges we will have in managing climate change and poverty, something that will affect all industries.
“There will be less people, and less people to do what needs to be done. It will be more and more expensive. What we offer organisations is a way to create more value from human employees. It’s akin to every employee having a super-efficient assistant in a permanent home office.
“Except you never have to engage this ‘assistant’. You receive information from [Simplifai’s AI platform InsuranceGPT] and will be able to offload a lot of your routine tasks.”
Bottom of the calculus class?
With AI development still in its early stages, there has been widespread criticism around errors and data accuracy. For example, a factual error from Google’s chatbot Bard, a rival to OpenAI’s ChatGPT, cost the technology company approximately US$100 billion in market value.
In a promotional clip shared by Google back in February, Bard is asked: “What new discoveries from the James Webb Space Telescope (JWST) can I tell my nine-year-old about?” Bard answers with several bullet points, one of which reads: “JWST took the very first pictures of a planet outside of our own solar system.” Astronomers online were quick to point out that this was an error. Subsequently, shares in Google’s parent company Alphabet fell by 7.7 per cent.
When considering the barriers to insurers adopting InsuranceGPT and how his product addresses them, Myrstad explains: “First of all, new technology makes a lot of people nervous. The main issue is data compliance. What is becoming more apparent with these large platforms is that, with endless innovations and mass amounts of generative data processing, the reliability of the information on individual topics is reduced. One thing we are vigilant about is data privacy.
“We are partnering with insurers to make our product industry-specific, which allows the platform to be much more accurate and reliable than a generic model. By working with insurance industry partners, we’ve been able to develop this product with the real needs of the industry in mind, and have been able to pinpoint where we can create the most value, weighted against risks.”
Giving a captive-specific take, Sadler adds: “Generative AI, like ChatGPT, is still a very recent technology. The biggest barrier is captive insurers not understanding it or seeing how it can help. The second biggest barrier is mistakes. Implementing [generative AI] involves training, new hardware and software, and input from technology consultants, especially since it may not be compatible with existing systems.
“There are also regulatory concerns, with the captive industry being heavily variant on different regulatory requirements for domiciles which ChatGPT may not consider. Captive insurers must ensure compliance with applicable laws,” Sadler affirms.
Gedge concurs: “With any new technology, the first port of call is always education: making sure people are comfortable enough with it to use it. The second concern is security. Whilst at the moment ChatGPT is openly available, there will be security concerns with tracing and potential breaches, as with any cybertechnology.
“For captives, another issue is that so much captive knowledge is not publicly available, so there will be some limited use to machine learning sources that access public data. This means there will be limits, and potential misunderstandings, due to niche industry-speak and with tweaks in phrasing.”
Finding the right balance
Sadler notes: “There are risks associated with utilising ChatGPT and it’s reasonable for captive insurers to be hesitant to use this technology. For captive insurers, it’s important to understand that ChatGPT responses are based on the data it’s trained on, which only includes data up until September 2021. If there are developments in the captive insurance industry since then, ChatGPT may not be aware of it.”
Data analysis by AI platforms is only as reliable as the data inputted by users, in this case captive insurers.
In this highly regulated industry, it almost goes without saying that insurers should avoid using ChatGPT for tasks subject to regulatory requirements. Some captive insurers may need further education and training before using AI models for higher-risk functions.
According to the 2021 KPMG CEO Outlook survey, 68 per cent of insurance CEOs say they will focus on customer-centric technologies such as chatbots, evidence that senior insurance industry figures recognise the vast potential for such tools in the industry.
Beyond data analysis, one UK broker working with captive insurers is using ChatGPT’s natural language capabilities to streamline client emails.
Perhaps, for some, it’s already performing its role as a ‘virtual assistant’, freeing up brokers for more time to perform ‘higher-value’ tasks.
The bot itself, ChatGPT, warns: “It’s important to note that while ChatGPT can be a valuable tool for captive insurers, it should not replace human expertise and judgement. It should be used as a supportive tool to enhance efficiency and decision-making processes.”
When discussing the potential of ChatGPT to disrupt the industry, Gedge affirms: “Captives run off analytics, and we are increasingly seeing sophisticated use and adoption of new technologies and software. Deriving benefits, and identifying where they will come from, will depend on the reliability of the data users input.”
As Sadler warns: “Due to cybersecurity risks, captive insurers must take measures to secure data and mitigate the risk of human error when inputting data. ChatGPT often delivers the wrong answers and sometimes makes up answers if it doesn’t know them. So, it definitely needs human oversight — for now.”
Sadler assures us that his responses were not written by ChatGPT. Neither was this article.