
In the AI of the beholder

Topics: AI in Insurance, InsurTech

Generative AI has been viewed through a somewhat negative lens to date, but with growing demand in the London market for AI-conversant talent, a measured approach to establishing the merits of AI implementation is needed, writes Mark Wilding...

The role artificial intelligence will play in the insurance sector has hitherto been viewed somewhat negatively; the perception is that generative AI will heavily impact the number of underwriting jobs and drastically change the roles of any underwriters not supplanted by technology. In reality, the speed and extent to which AI will affect the insurance sector is likely to be far less dramatic.

While fears expressed last year that four out of five underwriting jobs could be impacted by AI have so far proved to be exaggerated, some checks and balances around AI implementation are undoubtedly required.

The London insurance market's stereotypically conservative attitude towards innovation has often put it behind the curve on technology, but in the case of generative AI, a considered approach to modernisation is likely to be warranted.

Uncertainty among firms about the value of AI in insurance could be due to several things: a general lack of adoption in the sector to date; a failure to successfully implement solutions by those businesses that have ventured into AI; or the belief that there is simply no need for AI tools.

In parallel, those companies which are keen to leverage the power of AI to transform their business may not have fully considered issues around the security of the technology, the reliability of outputs, and the cost and technical challenges of adopting AI.

Rather than adopting AI wholesale, the industry is more likely to experiment with specific use cases to assess how well AI performs, before slowly expanding implementation as the utility and reliability of these tools are established.

AI solutions should be approached with the company's architecture in mind. Pilots will focus on specific use cases, but there is minimal benefit to the organisation if the solution cannot be scaled and implemented holistically to address the wider, balanced needs of people, processes and technology. It is easy to invest in costly silos, where the solution duplicates something that already exists elsewhere, or is forgotten and its benefits never realised.

Integration is key here. The insurance sector has a reputation for reliance on legacy IT systems, requiring multiple logins and duplicate processes because of poor or non-existent integration. AI tool adoption needs to provide a seamless user experience, rather than a disjointed approach that puts the onus on users to make it function correctly. The more integrated the solution, the faster it will be adopted by users and the sooner it will start delivering benefits.

Another key consideration is addressing the outcomes of AI implementation. Avoiding costly mistakes requires some forethought about which challenges AI will solve. What pressing need and use case does the business have? Would users be supported to work more efficiently and creatively, or is implementation purely for the prestige of saying something is now ‘AI-enabled’? For all the current interest in large language models, how much analysis is being done into where, how and why they will be used; and if they are implemented, will the user experience change significantly?

Would current systems produce the same (or better) outcomes without the attachment of AI?

Businesses also need to consider who they are adopting AI for. When the Ishikawa diagram was first developed, it presented the root causes of problems in manufacturing processes. This cause-and-effect approach can be used to analyse the need for AI within an organisation and to identify where it would provide benefit. Is the benefit only internal (e.g. for Underwriting, Claims, Ops, IT or MI), or external – delivering a better service to intermediaries and clients?

There is also a need to consider what happens with the outputs. If AI is being used to enrich data, what will that data be used for once it has been received? If no business decisions are being made on it and it is simply being stored, what justification is there for producing it?

The final question businesses should be asking themselves is whether they have sufficient guard rails in place to facilitate implementation. There have been several high-profile cases of companies inadvertently supplying proprietary information and/or trade secrets to open models because users did not understand the risks.

It’s notable that when Microsoft introduced its new AI tool, it called the chatbot ‘Copilot’, with the stated aim of the AI providing support for the user rather than replacing them.

The concept of artificial general intelligence – AI that can operate independently of human input – is exactly that: a hypothetical concept rather than a current reality. Generative AI is not yet capable of replacing humans; it is there to provide support, and requires considered prompts and inputs to produce the required output.

Judging by the experiences of users over the past year, AI is some way from being error-free. For a business to be able to rely on the responses given by a model, it must be confident there is minimal risk of factual inaccuracies, biased or prejudiced outputs and plagiarism, while also avoiding making decisions based on ‘hallucinated’ outputs.

Adopters may need a rigorous system of checks and balances, with the guard rails set relatively high, before they are comfortable that a happy medium of trust has been reached between what the AI promises and what it actually delivers.

Mark Wilding, Digital Product Development Specialist, AEGIS London
