Compound Wind and Flood Events Collaborative Workshop Summary (14th March 2022)

Dr Hannah Bloomfield

This workshop was jointly hosted at the University of Reading by researchers from Reading, Bristol and Loughborough. The workshop had the dual aim of promoting engagement with, and uptake of, recent academic work on European wind and flood hazards within the insurance industry, and of allowing the scientific community to discuss the potential outputs of three recently funded projects: the Centre for Greening Finance and Investment (CGFI), STORMY-WEATHER1, and ROBUST2, all of which are scoped in part around compound wind and flood events over Europe.

The morning sessions consisted of science talks from ongoing and existing projects related to this topic, followed by a series of project overview talks from project leaders (see schedule and available presentation content). The afternoon started with a panel session in which industry colleagues gave their perspectives on the work that had been discussed and highlighted some key areas of interest within the projects and for future work. Key ideas from this discussion are outlined below.

Compound can mean many things 

A number of studies across the UK have found correlation between flood and wind events on various timescales, using a range of metrics to define the perils. There are several different aspects to consider for wind-flood compound events. The traditional view of co-occurring flood and wind hazards at a location is important when thinking about property damage within a short 'hours clause' or named event. From a reinsurance perspective, however, the cumulative damage across spatially aggregated assets over a year is of greater interest. The potential for recurring single hazards at the same location is also important, as it raises questions about the affordability of premiums for property owners. Temporal clustering of storms was also highlighted as important to understand but challenging to relate to damage. As an example, the winter of 2013-14 had several clustered storms but relatively low losses, whereas the winter of 1989-90 had large recorded losses from only around four large storms. The ability to place these two different winters in a 1-in-100-year context is an interesting challenge.
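The difference between the per-event and the aggregated views can be sketched with synthetic data. Everything below is an illustrative assumption (gamma-distributed daily series, a 180-day season, a 98th-percentile threshold), not a result from any of the projects:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily series for one 180-day winter season; in practice these
# would come from gauge or reanalysis data, and the 98th-percentile
# thresholds are an illustrative choice, not a standard definition.
wind = rng.gamma(shape=2.0, scale=5.0, size=180)   # daily max gust proxy
flow = rng.gamma(shape=2.0, scale=5.0, size=180)   # daily river flow proxy

wind_extreme = wind > np.percentile(wind, 98)
flow_extreme = flow > np.percentile(flow, 98)

# View 1: co-occurring hazards at one location on the same day
# (the 'hours clause' style of compound event).
same_day_compound = wind_extreme & flow_extreme

# View 2: cumulative count of extreme days of either kind across the season,
# closer to a reinsurer's aggregated annual view.
either_extreme_days = int((wind_extreme | flow_extreme).sum())

print("same-day compound days:", int(same_day_compound.sum()))
print("days with either peril extreme:", either_extreme_days)
```

The same-day count is always bounded by the rarer of the two perils, while the cumulative count grows with either; the two views can therefore rank the same season very differently.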

New high-resolution datasets show a lot of promise 

There was enthusiasm from the insurance sector for results presented using the latest generation of UK Climate Projections (UKCP18), which operate at significantly higher spatial resolution than traditional climate models (the finest-resolution UK-only UKCP domain is 2.2 km). The ability of these models to represent severe storms and possible sting-jet events (which might have a higher percentage damage ratio than traditional storms) was noted as very useful, as sting jets are not currently considered within catastrophe models. The UKCP18 2.2 km simulations were also highlighted as great tools for delving further into storm-heavy winters to understand their meteorological drivers.

Useful questions still to answer within the observed period 

Climate change is at the forefront of our minds, and one point of interest that was flagged was whether evidence of climate change is already present in our historical-period data (nominally 1950-present, as currently available in reanalysis datasets). For example, scientists could show whether there has already been a change on decadal timescales in the frequency and intensity of compound flood/wind events, which would motivate a more forward-looking view of risk. Understanding whether the risk of a combined flood/wind event increases throughout the winter season is also a key topic for exploration. There has also been a large focus from all the projects discussed on pluvial and fluvial flooding, yet groundwater and coastal flooding are also of great interest.

Understanding driving processes behind correlations is key 

Multiple projects have begun to look at correlations between extreme flood and wind events on timescales from hourly to seasonal. One morning presentation showed GB-wide correlation values, but there was interest in unpacking the regions in which correlation is highest and whether there are meteorological processes that set the strength of correlations at the various timescales. For example, individual or clustered windstorms could be predominantly responsible for daily- to weekly-scale correlations, whereas catchment saturation or large-scale teleconnections could be responsible for monthly to seasonal correlations. With ensemble climate model simulations, unpacking these antecedent conditions should be possible. However, the difficulty of finding long-term records of accurate soil moisture data was flagged as a potential issue when thinking about catchment wetness.
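The timescale dependence described above can be sketched with synthetic data: a slowly varying shared "driver" (a stand-in for, say, a teleconnection index) plus independent daily noise. All series and parameters below are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical 10-year daily record in which both hazards share a slowly
# varying large-scale driver, plus independent day-to-day noise.
n = 3650
dates = pd.date_range("1990-01-01", periods=n, freq="D")
driver = 3.0 * pd.Series(rng.normal(size=n)).rolling(30, min_periods=1).mean().to_numpy()
wind = pd.Series(driver + rng.normal(size=n), index=dates)
rain = pd.Series(driver + rng.normal(size=n), index=dates)

# Correlation between the hazards after aggregating to each timescale:
# slow shared drivers show up more strongly at longer aggregations,
# because the independent daily noise averages out.
corrs = {}
for label, rule in [("daily", "D"), ("weekly", "W"), ("monthly", "MS")]:
    corrs[label] = wind.resample(rule).mean().corr(rain.resample(rule).mean())
    print(f"{label:8s} correlation: {corrs[label]:.2f}")
```

In this construction the monthly correlation comes out well above the daily one, which is one mechanism by which the same pair of hazards can look weakly related day-to-day but strongly related season-to-season.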

We need to understand future climate change impacts 

Catastrophe models used in the insurance industry tend to be 'backward looking' when considering natural hazards and are based on large event sets from historical climate model data. This is because insurance firms generally take a 1-year view of risk, with reinsurers taking a ~5-year view. Historical data can only inform what is happening presently, and creating a view of risk that moves with time is very challenging.

The recent Climate Biennial Exploratory Scenario (CBES) exercise has required financial institutions to undertake a stress testing exercise to demonstrate how they are incorporating future climate and transition risk. Climate change is now becoming relevant for financial institutions providing mortgages on ~25-year time horizons. Sharing results on the impacts of climate change on the individual and compound meteorological hazards is key to promoting the development of models with forward-looking risk. 

The Columbia University hurricane risk model was highlighted as a particularly useful academically developed tool when considering future climate risks, as it can run with a flexible baseline depending on whether the interest is in historical or future risk.

Short observational records continue to be a challenge

Within the natural catastrophe risk modelling industry there are numerous good models for assessing individual hazards. However, there are currently large unknowns about the correlation between pairs of risks, particularly with respect to the tails of distributions. This is mostly due to the short observational records available for analysis. The creation of tailored event sets for particular hazard pairs (in this case wind and inland flooding) is therefore very useful and could provide information on currently underestimated tail risks.
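A minimal sketch of why the tails are the hard part: the conditional exceedance probability below is one simple empirical measure of tail dependence, computed here on synthetic loss pairs. The common-factor construction, sample sizes and quantile levels are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical paired wind-loss and flood-loss samples with dependence
# injected through a common factor (illustrative only; correlation ~0.36).
n = 100_000
common = rng.normal(size=n)
wind_loss = 0.6 * common + 0.8 * rng.normal(size=n)
flood_loss = 0.6 * common + 0.8 * rng.normal(size=n)

def cond_exceedance(x, y, q):
    # Empirical P(Y above its q-quantile | X above its q-quantile): the
    # kind of tail quantity that short records make hard to estimate.
    x_thr, y_thr = np.quantile(x, q), np.quantile(y, q)
    return float(np.mean(y[x > x_thr] > y_thr))

# Long record: the estimate rests on ~1000 joint-tail candidates.
lam_full = cond_exceedance(wind_loss, flood_loss, 0.99)

# A 50-point "observational record": the same estimate rests on ~5 points
# and can only take a handful of coarse values.
lam_short = cond_exceedance(wind_loss[:50], flood_loss[:50], 0.9)

print("tail estimate, long record (q=0.99):", round(lam_full, 3))
print("tail estimate, short record (q=0.90):", round(lam_short, 3))
```

With a record the length of typical observations, the short-sample estimate is essentially a few coin flips, which is why tailored synthetic event sets are attractive for pinning down joint tail behaviour.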

Uncertainty quantification is important

Scientists and catastrophe model developers are very aware of their modelling limitations and conduct rigorous uncertainty analysis. However, when these results need to be presented at board level, the type of uncertainty present in the extremes of a distribution is not immediately clear. It therefore becomes important to find a way to communicate the uncertainty around such statistics clearly and simply. For example, a 1-in-200-year return period event estimated from only 200 years of data carries a large amount of uncertainty, and this must be conveyed. It was generally thought that presenting this information at board level is useful rather than confusing: in an ideal world uncertainty information would always be available, and underwriters would include uncertainty within risk estimates.
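The scale of that uncertainty can be sketched with a simple bootstrap on a synthetic 200-year record. The Gumbel annual-maximum distribution and the empirical quantile estimator are illustrative choices, not the method of any particular catastrophe model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 200-year record of annual maximum losses (Gumbel-distributed
# here purely for illustration).
annual_max = rng.gumbel(loc=10.0, scale=2.0, size=200)

def return_level(sample, return_period=200):
    # Empirical return level: the (1 - 1/T) quantile of annual maxima,
    # exceeded on average once per T years.
    return float(np.quantile(sample, 1.0 - 1.0 / return_period))

# Bootstrap the 1-in-200-year level from the 200-year sample to show how
# wide the sampling uncertainty is at the very edge of the data.
levels = np.array([
    return_level(rng.choice(annual_max, size=annual_max.size, replace=True))
    for _ in range(2000)
])
lo, hi = np.percentile(levels, [5, 95])
print(f"1-in-200yr level: {return_level(annual_max):.1f} "
      f"(90% range {lo:.1f}-{hi:.1f})")
```

Even this crude resampling makes the point that a board-level "1-in-200" figure is an interval, not a number.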

Collaboration is key 

It can take several years for scientific research to be conducted, published and then picked up by the industrial community. From there it takes yet more time to translate the work into something that can be used within industry. For quick action on these compound hazards, industry must be involved and integrated within the scientific development process, to increase both the visibility and the usefulness of the final scientific outputs.

Working with scientific groups also gives confidence in the quality of the models that are produced, particularly if results from high-impact papers can be reproduced and implemented internally. Some examples of successful ongoing and past collaborations were discussed, for example the use of the PRIMAVERA windstorm event set within Aon to provide a view of risk against which to benchmark their models.

An interesting point for future investigation was how scientific metrics can maintain their integrity (i.e., be statistically robust, based on large enough samples of data) while also being damage relevant. An example used was the 99.8th percentile of wind speed, which picks out roughly the two windiest days of the year. In many places this second windiest day will not cause loss or damage, and damaging events are much rarer than the thresholds typically used in academia (e.g., the 90th, 95th or 98th percentile). Similar results are seen for flood risk, where flood defences are designed to handle 1-in-100-year events (i.e., roughly the 99.997th percentile of daily values). Future work should find the balance between running out of data and providing useful damage assessments. Design lifetimes (the change in the 99th percentile over a 25-year period) were suggested as a possible collaborative metric.
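A quick sketch of the arithmetic behind these percentiles, assuming idealised 365-day and 8760-hour years (a simplification that ignores leap years and seasonal restriction of the data):

```python
# How many samples per year sit above a given percentile threshold, and
# which percentile corresponds to a given return period.

def exceedances_per_year(q, samples_per_year):
    # Average number of samples per year above the q-th percentile.
    return samples_per_year * (1.0 - q / 100.0)

def percentile_for_return_period(years, samples_per_year):
    # Percentile matching one exceedance per `years` years on average.
    return 100.0 * (1.0 - 1.0 / (years * samples_per_year))

for q in (90.0, 95.0, 98.0, 99.8):
    print(f"{q}th pct -> {exceedances_per_year(q, 365):.1f} days/yr, "
          f"{exceedances_per_year(q, 8760):.1f} hours/yr")

# A 1-in-100-year event sits far beyond any of these thresholds:
print(f"1-in-100yr daily event -> "
      f"{percentile_for_return_period(100, 365):.5f}th pct")
```

The gap between the ~0.7 exceedance days per year at the 99.8th percentile and the one-per-century event is exactly the "running out of data" problem the discussion identified.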




1 Grant Number NE/V004166/1

2 Grant Number NE/V018698/1