
When will ChatGPT write my actuarial report?

Our viewpoint

One day… almost certainly… but perhaps not as soon as we might like to think! And no, ChatGPT didn’t write this article either!

Now that we’ve got that out of the way: like so many other people, I am fascinated by the capabilities of ChatGPT and the quantum leap it seems to offer. But like so many tools, its immediate application is more nuanced than it first appears.

This article gives some helpful hints on how ChatGPT, with its current capabilities, can be used to make actuarial work easier.

Idea generation and background research 

ChatGPT is a fantastic idea generator. For this article I had a list of points I wanted to cover, but by asking ChatGPT for an outline of the article I got several more angles to consider. I have found the same when asking more insurance-specific questions.

ChatGPT has also been useful for understanding technical material that I’m not familiar with. For example, I used it to help me understand the background to the Holt v Allianz motor credit hire case that was recently decided by the High Court. Note that as the training data for GPT-4 runs only to September 2021, it could give me the historical background but couldn’t tell me about the specifics of the most recent judgement.

Previously I would have needed to read several articles to understand why the judgement mattered. Instead, I was able to get ChatGPT to explain the key features of the credit hire market via a series of prompts, which were then easy to verify.

Why is ChatGPT hard to use for end-to-end drafting?

I have found ChatGPT surprisingly frustrating to use when drafting longer documents. It gets tantalisingly close to an acceptable draft, but it still takes a long time to go from “just about” to “just what I want”. This is because:

  • Reviewing each iteration of a document takes time. ChatGPT can regenerate an 800-word article in seconds, but it takes much longer to read and check each draft, even when only redrafting one section at a time.
  • Fact-checking can be really challenging. When we write things ourselves, we make points and use examples from within our own knowledge base. ChatGPT might make a broader range of points or use more varied examples, but each needs to be checked. This is particularly important given the well-documented phenomenon of ChatGPT 'hallucinating' facts, for example referencing an article that doesn’t exist.

Tips for whole-report drafting

The following have helped me get better results when using ChatGPT to draft longer content:

  • Giving clear and specific prompts that set out the context, format, audience, desired structure and required tone of voice (a sketch of this appears after this list).
  • Breaking the report down: using ChatGPT to generate an outline (as noted earlier) and then drafting sections individually.
  • Asking ChatGPT for different perspectives on a topic, then blending the responses together.
  • Making iterative improvements, for example by asking ChatGPT to redraft in a different tone of voice, use a revised structure, or take into account additional prompts on a specific part of the subject matter.
  • Knowing when to stop: it’s tempting to keep iterating in search of perfection, but after around five redrafts I typically find it hard to make further improvements via ChatGPT and am better off editing the text myself.
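
For those who prefer to work with the model programmatically rather than through the chat interface, the first two tips can be illustrated with a minimal sketch using the OpenAI Python client. It is a sketch only: the model name, section list and prompt wording below are illustrative placeholders, not a prescribed approach.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Tip one: a clear, specific prompt covering context, audience,
    # structure and tone of voice.
    brief = (
        "You are drafting sections of an actuarial report for a "
        "non-technical board audience. Write in a formal but plain-English "
        "tone, use short paragraphs, and flag any assumptions explicitly."
    )

    # Tip two: break the report down and draft each section individually.
    sections = ["Executive summary", "Data and methodology", "Key findings"]

    drafts = {}
    for section in sections:
        response = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[
                {"role": "system", "content": brief},
                {"role": "user", "content": f"Draft the '{section}' section."},
            ],
        )
        drafts[section] = response.choices[0].message.content

Each generated section then goes through the same review, fact-check and redraft loop described above.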

Limitations 

There are several other challenges associated with using ChatGPT for substantial reporting and drafting tasks.

First, the training data for GPT-4 runs only to September 2021, so querying current affairs or asking for reports summarising recent market trends may give misleading results.

Second, as with any AI model, ChatGPT is only as good as the data it was trained on, and that data may include biased or discriminatory information. There have been previous examples of models learning and then reinforcing the biases in their data or user inputs.

Third, actuarial analysis and reporting is a niche area, and only a small proportion of it sits in the public domain. This means the training data is less relevant, and ChatGPT will probably perform less well (though not necessarily badly!) on actuarial-specific questions than on more general queries.

Fourth, when using large language models such as ChatGPT we have to be careful of our cognitive biases coming into play. We are more likely to trust well-written documents or clearly delivered presentations, even though the quality of the drafting has little to do with factual accuracy. ChatGPT always writes clearly and convincingly, even when hallucinating facts, making this bias particularly pertinent!

Finally, use of ChatGPT raises questions around data privacy and security. For example, unless you opt out, your input data is used to continue training the model, and there also remains a risk that input data could be made public following a data breach.

So when will ChatGPT write my actuarial report?

There are cautionary tales from history to heed here. Between 2015 and 2020 there was enormous hype around the potential for driverless cars. Features of the technology such as automatic lane keeping and braking are now mainstream, but the truly driverless car remains elusive, because the technology doesn’t deal well with unusual circumstances.

I see the same risk with large language models like ChatGPT. There is no doubt that, used effectively, they will significantly streamline aspects of actuarial reporting. This in turn will free us up to spend more time on value-adding analysis, and on providing the essential 'human' expert input needed to interpret the ChatGPT output.

First published in the July 2023 edition of the Actuarial Post.