Munich Re sees strong growth in AI insurance
Global reinsurance giant Munich Re expects more demand for AI insurance from organisations that are looking to manage the risks of AI as they experiment more with the technology
Artificial intelligence (AI) promises to supercharge productivity, improve customer experience and drive new business models, but the limitations and risks that come with the technology have also come under the spotlight.
Earlier this year, a lawsuit accused cloud-based financial and HR software provider Workday of facilitating hiring bias through the AI screening tool in its software, while late last year a class-action lawsuit alleged that UnitedHealth's algorithm was denying patient claims.
Amid the backlash, some AI technology suppliers have started offering copyright shields or indemnifying their models for enterprise use to assuage customer concerns, while others are shying away from responsibility with iron-clad disclaimers.
At a time when AI players sit at different points on the risk mitigation spectrum, Munich Re is trying to arm the industry with the good old safety net of insurance. The company insured its first AI performance risk in 2018 and began doing so for large language models (LLMs) in 2019.
In an interview with Computer Weekly, Michael Berger, head of Insure AI at Munich Re, talks up the evolution of the company’s insurance offerings for AI, how it assesses risks, and the growth potential in covering legal risks arising from the use and development of AI tools.
Generative AI is still relatively new and full of debate – isn’t it risky to venture into this area? Also, is Insure AI for AI makers or users?
Michael Berger: We’ve invested in early efforts to combine deep domain knowledge with modern technology expertise, and we’re committed to working with our clients to unlock digital business opportunities and drive progress through AI.
That said, our customers need to understand and manage the opportunities and associated risks of AI. Our first insurance solution for AI applications was created in 2018, covering financial losses stemming from AI underperformance, followed by our debut cover for an LLM in 2019.
Today, we are exploring solutions to cover the various ways generative AI (GenAI) could unpredictably go wrong. We offer protection to AI model developers and users.
How easy is it to handle ‘risk assessment’, given the black box problem of AI?
Berger: Munich Re’s AI team follows a proven technical due diligence process to assess the risks of an AI model.
And the fact that most models are not deterministic but probabilistic?
Berger: Every AI system, including generative AI, is a probabilistic system. We are familiar with modelling their risks based on our experience with reinsurance, where we apply the fluctuation of the loss-ratio based on the pricing or risk assessment of the primary insurer.
Our Insure AI solutions expand on this idea from reinsurance and transfer it to AI areas where new statistical models are used. This process fundamentally requires co-operation and transparency on the part of the customer.
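To illustrate the loss-ratio idea in general actuarial terms (this is a textbook-style sketch with hypothetical names and parameters, not Munich Re's actual pricing method), a premium can be built from the expected loss plus a loading proportional to the fluctuation of historical loss ratios:

```python
from statistics import mean, stdev

def risk_loaded_premium(loss_ratios, exposure, loading=0.5):
    """Standard-deviation premium principle (illustrative only).

    `loss_ratios` are historical claims-to-premium ratios, `exposure`
    is the insured amount, and `loading` scales how much volatility
    in the loss ratio is charged for on top of the expected loss.
    """
    expected_loss = mean(loss_ratios) * exposure
    volatility = stdev(loss_ratios) * exposure
    return expected_loss + loading * volatility
```

A book of business with more volatile loss ratios would be charged a higher premium under this sketch, mirroring the idea that less predictable AI models attract higher cover costs.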
With this approach, Munich Re is able to determine the predictive robustness of the AI, quantifying, for example, the probability and severity of model underperformance. The insurance premium is calculated based on the robustness of the AI model.
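Quantifying the "probability and severity of model underperformance" could, for illustration, be sketched as bootstrapping a model's error rate on held-out evaluation data and measuring how often, and by how much, it breaches a contractual accuracy floor. The function and threshold below are assumptions for the sake of example, not Munich Re's actual due-diligence process:

```python
import random

def bootstrap_underperformance(errors, threshold, n_boot=1000, seed=0):
    """Estimate probability and mean severity of underperformance.

    `errors` is a list of 0/1 per-example error indicators from a
    held-out test set; `threshold` is the error rate above which the
    model is deemed to underperform. Hypothetical illustration only.
    """
    rng = random.Random(seed)
    n = len(errors)
    shortfalls = []
    for _ in range(n_boot):
        # Resample the evaluation set with replacement
        sample = [errors[rng.randrange(n)] for _ in range(n)]
        rate = sum(sample) / n
        if rate > threshold:
            shortfalls.append(rate - threshold)
    prob = len(shortfalls) / n_boot
    severity = sum(shortfalls) / len(shortfalls) if shortfalls else 0.0
    return prob, severity
```

The estimated probability and severity would then feed into a premium calculation such as the one above.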
New risks require new insurance solutions based on expertise and experience already gained in other business fields. Our risk assessment and pricing approach works for any black box model.
Hallucinations, bias, privacy cases and false data – which one is the toughest to assess and insure for? Do you also cover synthetic data? And prompt injections?
Berger: We regard the hallucination of generative AI as an error, a type of risk we are familiar with from other AI applications we’ve been insuring since 2018. However, even if this hallucination risk does not pose a new challenge conceptually, generative AI requires more detailed analyses than conventional AI systems.
In addition to the risk of error with AI, there are other risks that we consider insurable. In the case of generative AI, we are looking at the risk of copyright infringement and discrimination. For both scenarios, we are currently cooperating with clients to structure specific insurance solutions.
How does this work?
Berger: For example, for discrimination, my team translates the risk of discrimination into an error ratio – that is, an error the AI makes relative to a chosen fairness or discrimination metric.
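One common fairness metric that such an error ratio could be defined against is the demographic parity gap – the spread in positive-decision rates across groups. The sketch below is a generic illustration of that metric, not Munich Re's specific methodology:

```python
def selection_rates(decisions, groups):
    """Positive-decision rate per group (decisions are 0/1)."""
    totals, positives = {}, {}
    for d, g in zip(decisions, groups):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + d
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Gap between the best- and worst-treated groups' selection rates."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())
```

An insurer could then treat gaps above an agreed tolerance as "errors" for pricing purposes, analogous to the accuracy shortfalls covered in performance insurance.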
What are the chances – and implications for insurance – of AI companies placing liability on users through some form of force majeure loophole and disclaimers, as Microsoft has tried with Bing?
Berger: We wanted to learn more about the general trend in legal disputes in order to draw conclusions for our offerings. Our analysis shows that AI lawsuits have steadily increased. The reasons for the lawsuits have also diversified, hitting AI models across the board and impacting many industries.
We see growth potential in covering the legal risks arising from the use of AI tools. For example, job applicants who feel discriminated against in the selection process could take legal action against the hiring organisation over an AI-supported decision.
Will regulations and compliance fuel the demand for AI insurance?
Berger: AI regulation will likely incentivise companies to follow evolving guidelines and implement responsible AI initiatives. Markets will have to evolve, navigate through compliance phases and find standardised processes to meet regulatory requirements. We expect a transition to informed, standardised market practice, as has happened for other now-established lines of insurance, such as cyber security.
Are some industries more prone to AI pitfalls than others, like legal, banking and financial services, and healthcare? Would you design your offerings to cater to their needs?
Berger: Our Insure AI team sees AI as a very interesting growth area as more organisations across industries experiment with AI to support decision-making or automate certain decisions and processes.
This means that organisations need to be able to rely on the output and accuracy of AI models. And yet an uncertainty of error remains for everyone, one that is inherent in any AI model. As risk is the source of our business model, Munich Re sees AI insurance as a strong growth area with a lot of potential.
Read more about AI in APAC
- The Australian government is experimenting with AI use cases in a safe environment while it figures out ways to harness the technology to benefit citizens and businesses.
- DBS Bank is building a strong data foundation and upskilling employees on data and artificial intelligence to realise its vision of becoming an AI-fuelled bank.
- Boomi CEO talks up the company’s efforts to build up an AI agent architecture, its upcoming AI capabilities, and its footprint in the Asia-Pacific region.
- Google has updated Gemini 1.5 Pro with a two-million-token context window and debuted a smaller, lightweight model optimised for high-frequency, specialised tasks.