Insurtech, or insurance technology, is the newest trend to hit London and it is booming. Investment in UK insurtech companies reached £218m in the first half of 2017, according to Accenture and CB Insights, up from just £7.8m in the same period the previous year.

We see a number of leading carriers, particularly personal lines insurers, pursuing innovation globally, exploring new ideas, partnering up with digitally-geared vendors or startups, and experimenting with new business and operating models. 

These strategic efforts are accompanied by awareness of the huge potential of AI, which has moved past the proof-of-concept phase. A raft of insurtech startups with AI at the heart of their offerings is flooding into the sector, and consolidation trends show that traditional insurers are slowly but surely moving towards an AI-driven future. 

Yet in a highly regulated industry like insurance, innovation around the use of AI raises several issues, including questions of compliance in terms of reliability, liability and auditability. Watchdogs are taking up the mantle on AI algorithm usage in insurance, particularly in relation to how insurers explain their pricing policies to ensure customers are treated fairly.

Underwriting challenges and increasing adoption of AI

Operational costs remain high, with insurers barely maintaining profitability. This stems largely from process issues – an under-developed area in the insurance industry – particularly around how underwriters and brokers obtain, collect and process data. 

Overwhelmingly, these tasks continue to be performed manually. Underwriters at almost every traditional insurance company are inundated with paper files. Data is scattered across clunky, disparate legacy systems, and the same data is re-entered several times at different stages of the policy lifecycle. 

This is a nightmare for an industry that seeks time and cost savings. While process issues are less prominent in personal lines, they are a huge concern for commercial lines with their varied, complex risk classes. Incompatible legacy systems mean that data entry takes longer and involves costly duplication and entry errors. 

More critically, inaccurate or absent data leads to poor pricing decisions and underwriting processes, resulting in inadequate risk assessment and negative consequences for capital reserves. Intuitively, a good rating or pricing model incorporates different types of information and variables to differentiate risk and price it accurately, outperforming competitors and avoiding adverse selection of risk. 
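To make the idea concrete, here is a minimal sketch of a multiplicative rating model of the kind actuaries commonly build, where a base rate is scaled by a relativity for each rating factor. The base rate, factors and relativity values below are invented for illustration and do not reflect any insurer's actual tariff:

```python
# Toy multiplicative rating model: premium = base rate x product of
# relativities for each rating factor. All numbers are illustrative.
BASE_RATE = 400.0  # hypothetical annual base premium in GBP

RELATIVITIES = {
    "driver_age": {"17-25": 1.80, "26-40": 1.00, "41-65": 0.85},
    "vehicle_group": {"low": 0.90, "medium": 1.00, "high": 1.35},
    "claims_in_last_3y": {0: 0.80, 1: 1.10, 2: 1.50},
}

def price(risk: dict) -> float:
    """Multiply the base rate by the relativity for each rating factor."""
    premium = BASE_RATE
    for factor, level in risk.items():
        premium *= RELATIVITIES[factor][level]
    return round(premium, 2)

# A young driver with one recent claim is priced well above the base rate.
quote = price({"driver_age": "17-25", "vehicle_group": "medium",
               "claims_in_last_3y": 1})
print(quote)  # 792.0
```

The more risk-differentiating variables a model of this shape can reliably ingest, the finer its segmentation – which is precisely where richer data and AI-driven analysis pay off.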

Arguably, the utilisation of data analysis to aid underwriters is not new to the industry, but leveraging AI to analyse different types and volumes of data in a shorter time is undeniably advantageous.

“AI-enabled RPA software that replicates the actions of humans in order to run business processes has a magical effect on efficiency in the underwriting process”

According to a study conducted by GlobalData, global insurers have invested most heavily in the AI areas of machine learning (ML, 46%) and robotic process automation (RPA, 42%), as well as robo-advisors/chatbots. This adoption stems from the huge potential of AI to fill gaps specific to the insurance industry: narrowing protection gaps in some emerging markets and covering new, rising risks in developed markets, while reaping the rewards of digitalisation and side-stepping tangled legacy systems. 

AI-enabled RPA software that replicates the actions of humans in order to run business processes has a magical effect on efficiency in the underwriting process. It has the ability to self-learn and make complex decisions on risk pricing cases. 

AI capabilities such as natural language processing, text analysis, predictive analytics, ML and cognitive reasoning can extract relevant information, deal with semi-structured and unstructured data and, once trained, can handle highly variable cases and facilitate decision making.
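As a toy stand-in for that information-extraction step, the sketch below pulls structured fields out of a free-text broker submission. A production system would use a trained NLP model rather than hand-written patterns; the submission text, field names and patterns here are all invented for illustration:

```python
import re

# Hypothetical free-text broker submission (illustrative only).
SUBMISSION = """
Insured: Acme Logistics Ltd
Sum insured: GBP 2,500,000
Risk class: commercial motor fleet
Claims history: 2 claims in the last 3 years
"""

def extract_fields(text: str) -> dict:
    """Extract structured underwriting fields from semi-structured text."""
    fields = {}
    m = re.search(r"Insured:\s*(.+)", text)
    if m:
        fields["insured"] = m.group(1).strip()
    m = re.search(r"Sum insured:\s*([A-Z]{3})\s*([\d,]+)", text)
    if m:
        fields["currency"] = m.group(1)
        fields["sum_insured"] = int(m.group(2).replace(",", ""))
    m = re.search(r"(\d+)\s+claims?\s+in the last\s+(\d+)\s+years?", text)
    if m:
        fields["claims"] = int(m.group(1))
    return fields

print(extract_fields(SUBMISSION))
```

Once fields like these are extracted consistently, they can feed the rating model directly instead of being re-keyed by hand at each stage of the policy lifecycle.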

The quoting process can be completed in a few minutes or even seconds, particularly if the level of confidence is high. An ambitious and unconventional example comes from Lapetus, a US-based insurtech that cracked the life insurance market by introducing selfies into the underwriting process. 

The customer sends a selfie and AI-based software takes care of the rest. Lapetus claims that this provides a more accurate prediction of life expectancy. Clearly, some insurtechs have already succeeded in bringing solutions and business models to market faster than any incumbent. 

[Chart] Investment areas in AI by insurers: machine learning; robotic process automation (RPA); robo-advisor/virtual agent; natural language processing; autonomous vehicles. Source: GlobalData 2017

Large carriers, however, are pursuing innovative ways of enhancing their underwriting performance. AIG has been exceptionally proactive in applying the lessons of the past and moving to place data and analytics at the heart of its SME underwriting division. Recently, in partnership with Hamilton USA, AIG launched Attune, an automated, highly data-driven platform based on AI. 

The development of AI applications is gathering pace across different personal lines. British motor insurer Admiral, for example, is using Facebook Messenger to predict driving behaviour. The insurer recently partnered with Vodafone to provide the underlying telematics services. 

Currently, AI is having only a small impact on insurance, but it will pick up pace across the sector. Looking at future risks, we should see this as a positive rather than a disruptive force. 

AI insuring emerging risk: driverless cars and beyond

The greater challenge for the insurance industry lies in the fact that the world is becoming more connected and digital, which in turn changes the nature of risk and affects demand, primarily driven by consumer/business behaviours and technology adoption.

In other words, the risks consumers and business face have changed faster than their insurance policies. 

New business and pricing models are therefore needed to keep up and provide insurance coverage aligned with emerging risks. For instance, the development of autonomous vehicles has dramatic implications for motor insurance, with premiums predicted to drop by 60% in the near future due to increased safety from automation, driver alerts and collision-avoidance systems. 

Further, motor insurance will no longer be concerned with insuring the vehicles as such, but rather the risks associated with driverless vehicles: software failure, manufacturers' product liability and cyber risks such as hacking-induced malfunction. 

Hence, there has been much discussion in the insurance community around the role and responsibilities of motor insurers in the development of autonomous vehicles.  

How much are insurers involved in understanding the risks associated with autonomous vehicles?  How can data and AI be leveraged to effectively insure autonomous vehicles and support a shift from protection to risk prevention?

Autonomous vehicles will generate a massive amount of data that cannot be handled by current statistical models. We see that insurers will need to deploy AI to underwrite and assess the risk, and to support the development of autonomous vehicles. 

While data ownership is still controversial, eventually insurers will have to enter into new data sharing arrangements with manufacturers; this will possibly be facilitated by regulators. 

There are two particularly important examples of current insurers’ involvement in the development of autonomous vehicles. XL Catlin, one of the largest insurers, is involved in DRIVEN, a joint consortium led by Oxbotica, an Oxford-based company that specialises in AI. 

Another is AXA, involved in a British government-funded autonomous vehicle initiative. During the trials, its underwriting team, led by David Williams, will be seeking to assess risks and evaluate the safety of autonomous vehicle technologies for pedestrians and motorists. 

Ultimately, the insurer will advise on insurance coverage issues, particularly those around cyber risk that could result from system hacking.  

Cyber insurance is another emerging challenge for the insurance sector: how can insurers quantify the risk? Demand for this type of coverage from businesses grew by 36% in 2016, yet current cyber insurance policies remain limited, as underwriting and pricing cyber risk is complex new territory for casualty insurers. 

“Relying on historical data to assess cyber risk may be barking up the wrong tree, as the world is moving towards everything digital at breakneck pace”

In the meantime, relying on historical data to assess cyber risk may be barking up the wrong tree, as the world is moving towards everything digital at breakneck pace. We see a huge potential in applying AI techniques and models to cyber insurance, given its ability to predict risk beyond what an actuary or underwriter is capable of doing. 

On the other hand, natural disaster risk management remains under-exploited among insurers, with recent catastrophic events across the globe, such as Hurricanes Harvey, Matthew and Sandy, causing huge losses for insurers. 

How should they move forward to incorporate new data and analytics into catastrophe modelling? How can they navigate new business models, such as parametric insurance, to narrow the protection gap and allow them to expand into developing markets? 
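Parametric insurance pays out on a measured index (such as peak wind speed or rainfall) rather than on an assessed loss, which removes the slow claims-adjustment step. The sketch below illustrates the idea with an invented payout schedule; the thresholds and amounts are not from any real product:

```python
# Minimal sketch of a parametric insurance trigger: the payout depends only
# on a measured index (here, peak wind speed at an agreed weather station),
# not on an assessed loss. Thresholds and payouts are illustrative.
def parametric_payout(peak_wind_mph: float) -> float:
    """Step payout schedule keyed to the measured peak wind speed."""
    if peak_wind_mph >= 130:
        return 1_000_000.0   # full policy limit
    if peak_wind_mph >= 110:
        return 500_000.0
    if peak_wind_mph >= 90:
        return 250_000.0
    return 0.0               # index below the trigger: no payout

# A storm with a measured 115mph peak pays out immediately, with no
# loss-adjustment visit required.
print(parametric_payout(115))  # 500000.0
```

Because the trigger is an objective measurement, payouts can be near-instant and the product is viable in markets where loss adjustment is impractical – exactly the protection-gap settings the questions above point to.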

Clearly, many questions remain, but as AI technologies rapidly expand, it becomes vital for insurers to keep up with industry trends to avoid becoming obsolete. Open data sources, data sharing tools and satellite imagery technologies coupled with AI are a marriage made in heaven for natural disaster modelling; companies like IBM can bridge the gap, offering real-time weather data and analysis by leveraging AI and cognitive abilities. 

Another notable example comes from the start-up Aerobotics, which is applying drone technology to crop insurance, using drones and 3D mapping to check the condition of plants, and connected devices, sensors and mapping technologies to examine flood risk. 

Regulators fretting about the perils of AI in insurance

The Financial Stability Board (FSB) published a report in 2017 highlighting the implications of AI and ML for insurance and financial services in general. 

The first of the threats discussed is the evolving market structure for AI and ML in insurance. The industry is increasingly dependent on third-party firms that provide AI and ML solutions. 

Although the use of these platforms and solutions offers great benefits in terms of efficiency, scalability and effectiveness, the real threat comes from the fact that these firms are not bound by the same regulations as insurers. In particular, if big technology firms come to own a large share of the insurance market, this could result in a monopoly or oligopoly.

Second, the report noted the paradoxical nature of AI and ML. For instance, AI and ML can be used to reduce the number of claims, to streamline the underwriting process, and to achieve accurate risk assessment and enhanced pricing models. 

Yet, the use of ML algorithms could lead to regulatory and societal challenges in terms of customer protections, discrimination or prejudice. In other words, AI and ML are used to personalise policies based on detailed analysis of shared customer data. 

“The use of ML algorithms could lead to regulatory and societal challenges in terms of customer protections, discrimination or prejudice”

While this seems to offer a better understanding of customers' needs, more affordable coverage and improved customer service, it might also lead to adverse risk selection, where insurance becomes unaffordable for some customers. 

Third, the FSB flagged the lack of interpretability and auditability of these models, which are generated from training data sets. As these models have not been trained on data reflecting financial crises, they may not be capable of sustaining long-term risk management. 

The FSB may soon require insurance firms to clarify these ambiguities and explain the rationale behind models developed with AI. It advises insurance firms to assess and monitor both the promise and the perils of these AI systems, and to explore examples of success and failure in their adoption.  

The report noted that: “As with any new product or service, it will be important to assess uses of AI and machine learning in view of their risks, including adherence to relevant protocols on data privacy, conduct risks, and cybersecurity. 

“Adequate testing and ‘training’ of tools with unbiased data and feedback mechanisms is important to ensure applications do what they are intended to do.”