The news: Swiss insurance firm Zurich is reportedly experimenting with ChatGPT to find out how AI can help with tasks including modeling, claims, and data mining, according to the Financial Times.
- Zurich believes the technology can be used to extract information from long documents, including claims descriptions.
- It’s aiming to improve its underwriting by feeding in claims data from the previous six years to try to pinpoint the cause of loss across large numbers of claims.
- The insurer is also exploring whether AI can help write code for statistical models.
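The document-extraction idea above can be sketched as a prompt-and-parse loop: ask a model for structured fields, then read them back as JSON. This is a minimal illustration, not Zurich's actual pipeline; the field names and the `extract_claim_fields` helper are assumptions, and the demo stubs out the model so it runs without any API.

```python
import json

def build_extraction_prompt(claim_text: str) -> str:
    """Ask the model to pull structured fields out of a free-text claim description."""
    return (
        "Extract the following fields from the claim description below and "
        "reply with JSON only: date_of_loss, cause_of_loss, estimated_amount.\n\n"
        f"Claim description:\n{claim_text}"
    )

def extract_claim_fields(claim_text: str, complete) -> dict:
    """`complete` is any callable that sends a prompt to an LLM and returns its text reply."""
    reply = complete(build_extraction_prompt(claim_text))
    return json.loads(reply)

# Stubbed model reply, standing in for a real LLM call.
def fake_llm(prompt: str) -> str:
    return '{"date_of_loss": "2023-01-15", "cause_of_loss": "burst pipe", "estimated_amount": 4200}'

fields = extract_claim_fields("On 15 Jan 2023 a burst pipe flooded the kitchen...", fake_llm)
print(fields["cause_of_loss"])  # burst pipe
```

In practice the `complete` callable would wrap whichever model the insurer uses, and the reply would need validation before feeding downstream underwriting systems.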
AI can shake up insurance: Here are five potential benefits the tech could bring to the sector.
- Better fraud detection. AI models can detect fraud by combing through data to flag claims that deviate from expected behavior. The number of companies using AI to combat insurance fraud is already at an all-time high.
- Claims prevention. Insurers can use AI to simulate various scenarios and better predict risks, helping them identify potential future claims and take steps to prevent them.
- Efficiency gains. AI can ease insurers’ staffing requirements by cutting the need for as much manual input and data analysis, in theory cutting costs. And its ability to aid claims clerks in gathering information and verifying documents can speed up processing times, improving the customer experience.
- Stronger customer service. AI-powered chatbots can give personalized responses to complex customer questions in less time, boosting satisfaction. The tech can also help human assistants provide a better service by quickly searching large databases for answers to FAQs and information on the status of claims or policy coverage.
- It should improve with time. Zurich’s testing of generative AI should identify where it can have the biggest impact and any potential pitfalls. But the tech's self-learning nature means it should also get better as more insurers and financial institutions use it.
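The fraud-detection idea in the list above boils down to anomaly detection: score each claim against the distribution of comparable claims and flag the outliers. A minimal sketch using a z-score threshold (production systems use far richer features and models than claim amount alone):

```python
from statistics import mean, stdev

def flag_anomalous_claims(amounts: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of claims whose amount lies more than `threshold`
    standard deviations from the mean of the batch."""
    mu, sigma = mean(amounts), stdev(amounts)
    if sigma == 0:  # all claims identical: nothing to flag
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mu) / sigma > threshold]

claims = [1200.0, 980.0, 1100.0, 1050.0, 25000.0, 990.0, 1150.0]
print(flag_anomalous_claims(claims, threshold=2.0))  # [4] — the $25,000 outlier
```

Flagged claims would then go to a human investigator rather than being auto-denied, since outliers are often legitimate.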
Any downside?
- Trust issues. Not all consumers are sold: Less than one-third of US adults trust search results generated by AI. Insurers need to be open and honest with customers about their use of the tech and its potential flaws to remedy this.
- Ethical concerns. Insurtech Lemonade fell into a PR disaster after suggesting it used customers’ facial expressions to detect fraud. Critics have also claimed that using AI tech for facial recognition can lead to racial bias and a lack of transparency.
- Regulatory headaches. AI’s rapid rise means more legislation will be created to police companies’ use of the technology. Insurers must comply with a patchwork of existing state laws and regulations while planning for new rules. It will likely be costly and difficult for firms to keep generative AI in compliance and up to date.
- It won’t work for everything. Generative AI is generally ill-suited to fully explaining its actions, making it inappropriate for making pricing decisions that have to be explained to internal stakeholders and regulators.