Actuaries and Data Overload: An Insurance Use Case

January 31, 2018

The current low interest rate environment is affecting insurers' bottom lines, threatening income from rate-sensitive products and investments and, in turn, putting increased pressure on underwriting accuracy to make up the difference. Actuaries are now expected to produce more personalized and accurate policies using information gleaned from a host of new data sources, many of which are unstructured or inconsistent.

Gathering and analyzing an abundance of data from disparate sources is a challenge. However, when done properly, the effort results in more accurate risk pricing and clearer identification of key risk drivers.

A Data Wrangling Challenge: Actuaries’ Disparate Data Sources

In the last 15 years, the world has seen an explosion of new digital data sources, from governmental behavioral data to the Internet of Things. Actuaries face the very real challenge of managing these many sources and wrangling their disparate data to inform their work. Several kinds of behavioral and personal data now drive actuarial analysis and predictive modeling; a short sketch of combining such sources follows the list below.

  • Third-party Sources: Insurers can use information like credit scores to better assess risk, after empirical evidence showed a strong correlation between credit scores and driver safety ratings.
  • Geographical and Demographic Information: Properly combined, geographic and demographic data can now be used to better predict the risk of car accidents, disease, or workplace safety incidents for policyholders.
  • Medical Data and Health Trends: In addition to looking at prior claims for various medical conditions, actuaries now have access to data on the treatments available in distinct geographic areas. By combining the two, actuaries can identify who has a higher risk of disease and less access to care for it, and adjust the policy according to this more accurate risk assessment.
  • Government-sourced Behavioral Data: The federal government collects thousands of data points on its constituents, from the number of household members to life satisfaction to risk behaviors (e.g., smoking or not wearing seat belts). Gathering data on these factors over time can lead to more predictive analysis of potential new policyholders.
  • Future Scenario Modeling: Actuaries can now use complex modeling to weigh the probability of natural catastrophes against the potential intensity of those disasters in particular areas, helping them understand why a natural disaster in one region can result in thousands more deaths than the same event in another.
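
To make the wrangling challenge concrete, here is a minimal sketch of what combining sources like these can look like in code. It uses plain Python and pandas rather than any particular vendor tool, and every file name, column name, and key format (policies.csv, credit_scores.csv, census_zip.csv, the "POL-" prefix) is a hypothetical illustration, not a real insurer's schema:

```python
# A minimal pandas sketch of joining disparate actuarial data sources.
# All file names, column names, and key formats are hypothetical.
import pandas as pd

# Core policy records from the internal system (structured, consistent).
policies = pd.read_csv("policies.csv", dtype={"zip_code": str})
# columns: policy_id, zip_code, age

# Third-party credit extract: same entities, but the key arrives as
# "POL-00123" rather than 123, so it must be normalized before joining.
credit = pd.read_csv("credit_scores.csv")  # columns: policy_ref, credit_score
credit["policy_id"] = (
    credit["policy_ref"].str.replace("POL-", "", regex=False).astype(int)
)

# Government-sourced demographics at the ZIP level.
demographics = pd.read_csv("census_zip.csv", dtype={"zip_code": str})

# Left joins keep every policy even when an external source has no match.
features = (
    policies
    .merge(credit[["policy_id", "credit_score"]], on="policy_id", how="left")
    .merge(demographics, on="zip_code", how="left")
)

# Surface gaps rather than hiding them: a missing credit score is a
# data-quality fact the actuary should see before modeling.
features["credit_missing"] = features["credit_score"].isna()
```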

Trend Toward New Technologies

With all of these new data sources providing both structured and unstructured data, actuaries need an efficient way to wrangle larger volumes of data. Traditional spreadsheet tools like Excel present a number of data wrangling challenges and are no longer the best tool for the job. Spreadsheets are not robust enough to handle the inconsistencies or the volume of incoming data, and the variety of sources makes errors more likely. The traditional spreadsheet process is also labor-intensive and slow, because the manual work is not repeatable when new data arrives. In short, the overwhelmed actuary's burden is compounded by the limitations of these traditional data management tools.
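
To illustrate the repeatability point, here is a minimal sketch contrasting that manual spreadsheet process with a scripted pipeline. Again it is plain Python and pandas rather than any particular product, and the claims_*.csv file pattern and its columns are hypothetical:

```python
# A minimal sketch of the repeatability that spreadsheets lack: the
# cleaning rules are captured once as code and re-applied verbatim to
# every new extract. File and column names are hypothetical.
import glob
import pandas as pd

def clean_claims(path: str) -> pd.DataFrame:
    """Apply the same validation and normalization to any claims extract."""
    df = pd.read_csv(path)
    # Tolerate inconsistent headers ("Claim Amount ", "claim_amount", ...).
    df.columns = df.columns.str.strip().str.lower().str.replace(" ", "_")
    # Coerce malformed values to NaN instead of silently keeping bad text.
    df["claim_amount"] = pd.to_numeric(df["claim_amount"], errors="coerce")
    df["claim_date"] = pd.to_datetime(df["claim_date"], errors="coerce")
    # Drop rows that failed validation; in practice these would be logged.
    return df.dropna(subset=["claim_amount", "claim_date"])

# When next month's file arrives, re-running the whole pipeline is one
# line, not an afternoon of manual copy-and-paste.
all_claims = pd.concat(clean_claims(p) for p in glob.glob("claims_*.csv"))
```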

The Trifacta Solution

Thankfully, Trifacta is designed to easily aggregate, clean, validate, and visualize multiple data sets for seamless collaboration between actuarial colleagues. Better understanding of the data means better risk models and ultimately better policies for the customer. With Trifacta, actuaries can produce more accurate policies, speed consumer insights, and differentiate in the market, all while decreasing their production time and costs.

For more about how Trifacta drives deeper insights in the insurance industry, read our brief: Insurance Claims Analytics Solution Brief