To what extent are humans responsible for extreme hurricanes? What can statistics tell us?

By Indraneel Kasmalkar, PhD Candidate in Computational & Mathematical Engineering (ICME)

... climate change likely increased Harvey’s seven-day rainfall by at least 19 percent ... climate change roughly tripled the odds of a Harvey-type storm.

These are sound bites from a National Geographic article [1] that highlights the impact of global climate change on Hurricane Harvey, the category 4 storm that wreaked havoc on the coast of Texas in late August 2017. The article cites a couple of papers that conducted these analyses. But how were those researchers able to get quantitative results in the first place? How can we confidently attribute part of a specific storm's severity to anthropogenic climate change, a complex global phenomenon? I would like to dive into one of the papers: "Attributable Human-Induced Changes in the Likelihood and Magnitude of the Observed Extreme Precipitation during Hurricane Harvey" by Mark Risser and Michael Wehner of Lawrence Berkeley National Laboratory [2].

A robust study of the impacts of anthropogenic climate change would require a physics-driven global climate model that could simulate the winds, the oceans, the CO2 emissions, the clouds, not to mention the coast and the terrain. With our current supercomputers we cannot run such global simulations at high resolution in a reasonable amount of time, and our low-resolution results are not accurate enough.

Instead, Risser and Wehner used the approach of extreme value statistics. At its core, this method looks at data on the occurrence of extreme events, say earthquakes, and tries to fit a special curve through them so that we can estimate how likely a large earthquake is to occur. There are some concerns about using this approach: statistics generally works well for estimating average values, but it is much harder to capture the extremes, partly because they are so rare and spread out, and especially when predicting extreme events that have never occurred before. In fact, Stuart Coles, who developed the statistical model that Risser and Wehner used in their study, writes in his book [3]:

It is easy to be cynical about this strategy, arguing that extrapolation of models to unseen levels requires a leap of faith, even if the models have an underlying asymptotic rationale. There is no simple defense against this criticism, except to say that applications demand extrapolation, and that it is better to use techniques that have a rationale of some sort.
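To make the curve-fitting idea concrete, here is a minimal sketch in Python. The standard tool in extreme value statistics is the generalized extreme value (GEV) distribution, fit to "block maxima" such as the largest rainfall total observed in each year. The numbers below are synthetic stand-ins, not the actual GHCN records, so the output is purely illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-in for 68 years of annual-maximum 7-day rainfall
# totals (mm); a real analysis would use station records instead.
annual_maxima = stats.genextreme.rvs(c=-0.1, loc=150, scale=40,
                                     size=68, random_state=rng)

# Fit a generalized extreme value (GEV) distribution to the block maxima.
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# The "100-year event": the rainfall level exceeded with probability
# 1/100 in any given year, read off the fitted curve's upper tail.
return_level_100 = stats.genextreme.ppf(1 - 1/100, shape,
                                        loc=loc, scale=scale)
print(f"estimated 100-year rainfall: {return_level_100:.0f} mm")
```

Note that the 100-year estimate extrapolates well beyond the 68 data points we have, which is exactly the "leap of faith" Coles describes.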

If we accept the premise of extreme value statistics, the next step is to acquire a data set to which this model can be applied. Risser and Wehner decided to use daily weather station measurements from the Houston, Texas area, obtained from the Global Historical Climatology Network (GHCN). They had to clean up some of the data and choose a subset of stations to streamline the data set. But the biggest caveat is that there is no data prior to 1950. Is roughly seventy years of weather station data enough to identify the contribution of human-induced climate change to a specific storm in Houston? Ideally I would have liked to see data from the start of the industrial revolution to truly capture the effects of CO2 emissions on global climate. But since our historical weather records do not go that far back in time, we are restricted to the 1950-2017 data set.

Now that we have a model and a data set, we need something to separate the effects of human activity from those of natural variability. After all, Harvey's extreme devastation could simply be the result of the forces of nature alone. What could we use to separate the human and natural components? Risser and Wehner addressed this issue by using 1. CO2 levels, and 2. El Niño wind and sea temperature data. In particular, they took a time series of seasonally averaged global CO2 levels and annually averaged values of an El Niño-Southern Oscillation (ENSO) index. The idea is that CO2 levels are a proxy for human activity, while the ENSO index captures natural variations in the climate. And this is an idea where skepticism would be very healthy. Can El Niño measurements truly capture all the natural variation in global climate? Is this approach good enough to account for all the complex non-linear ways in which CO2 levels can affect ocean temperatures, which in turn affect precipitation? But collecting enough data to capture human and natural activities is hard. The proxies that Risser and Wehner used are simple and straightforward, and that makes this analysis easy to follow and transparent. My overall opinion is that there is a good theoretical idea here, but the numbers that come out of this study should not be taken literally unless a multitude of other studies using similar approaches on varied data sets yield the same results.

Nevertheless, if we agree to use CO2 levels and ENSO values, then we can use standard statistical procedures, such as regression, to estimate the contribution of one data series to another. In this case, we would end up with something akin to rainfall estimates for Houston, Texas along with two dials: the CO2 level and the ENSO value. We could then dial the CO2 level up or down to see how estimates for extreme rainfall vary.
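The regression idea can be sketched with invented numbers (this is not the study's actual data or model, just an illustration of how two "dials" are estimated from one rainfall series):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical yearly series standing in for the two "dials":
years = np.arange(1950, 2018)
co2 = 310 + 1.5 * (years - 1950)            # invented CO2 trend (ppm)
enso = rng.normal(0, 1, size=years.size)    # invented ENSO index

# Invented rainfall series responding to both dials, plus noise.
rainfall = 100 + 0.3 * co2 + 5.0 * enso + rng.normal(0, 10, size=years.size)

# Ordinary least-squares regression: estimate each dial's contribution.
X = np.column_stack([np.ones_like(co2), co2, enso])
coef, *_ = np.linalg.lstsq(X, rainfall, rcond=None)
intercept, beta_co2, beta_enso = coef

# "Dial" CO2 from its 2017 value down to its 1950 value, ENSO fixed:
change = beta_co2 * (co2[-1] - co2[0])
print(f"estimated CO2-attributable change in rainfall: {change:.1f} mm")
```

The fitted coefficient on CO2 is the dial: multiplying it by the 1950-to-2017 rise in CO2 gives the portion of the rainfall change the regression attributes to that series.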

With the ability to statistically isolate the effects of CO2, Risser and Wehner simply compared the model with 2017 CO2 levels to the model with 1950 CO2 levels. For example, you can look at the model and figure out the probability of getting an event as extreme as Hurricane Harvey. If you keep this probability fixed but dial the CO2 down to 1950 levels, you get a storm with smaller rainfall values. How much smaller? Roughly 19 percent. This is where the number in the original article comes from: "... climate change likely increased Harvey’s seven-day rainfall by at least 19 percent."
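This style of comparison can be imitated on synthetic data: fit a GEV distribution whose location parameter depends linearly on CO2, then hold the exceedance probability fixed and compare the implied event magnitude at 2017 versus 1950 CO2 levels. All numbers here are invented, so the resulting percentage is illustrative only and is not a reproduction of the paper's 19 percent figure.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(2)

# Synthetic setup: CO2 (the human-activity "dial") rises 1950-2017, and
# the GEV location parameter of annual-max rainfall shifts with it.
years = np.arange(1950, 2018)
co2 = 310 + 1.5 * (years - 1950)
true_loc = 100 + 0.25 * co2
maxima = stats.genextreme.rvs(c=-0.1, loc=true_loc, scale=25,
                              random_state=rng)

# Negative log-likelihood of a GEV whose location depends on CO2.
def nll(params):
    a, b, log_scale, shape = params
    return -stats.genextreme.logpdf(maxima, shape, loc=a + b * co2,
                                    scale=np.exp(log_scale)).sum()

res = optimize.minimize(nll, x0=[100.0, 0.2, np.log(25.0), -0.1],
                        method="Nelder-Mead")
a, b, log_scale, shape = res.x

# Fix the exceedance probability (say, a 1-in-100-year event) and read
# off the event magnitude with CO2 dialed to different levels.
def magnitude(co2_level, p=1/100):
    return stats.genextreme.ppf(1 - p, shape, loc=a + b * co2_level,
                                scale=np.exp(log_scale))

m_2017 = magnitude(co2[-1])
m_1950 = magnitude(co2[0])
print(f"rainfall change attributable to CO2 rise: "
      f"{100 * (m_2017 - m_1950) / m_1950:.0f}%")
```

Keeping the probability fixed and comparing magnitudes is one of the two standard framings; the other fixes the magnitude and compares probabilities, which is where statements like "roughly tripled the odds" come from.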

To summarize, there are a lot of assumptions and modeling choices in this study that merit caution: extreme value statistics must always be used carefully; weather data from 1950-2017 may not be enough to make strong statements about climate change; and isolating human and natural activities with CO2 levels and El Niño values may be too narrow an approach.

It must be said that these statistical approaches rest on good, simple, and transparent ideas. But in the end there is immense complexity in the global climate system, and we must acknowledge that this complexity may significantly obscure the meaning of the results we get from statistical approaches.


[1] National Geographic. 2017. Climate Change likely super-sized Hurricane Harvey.

[2] Risser, M. D., & Wehner, M. F. (2017). Attributable human-induced changes in the likelihood and magnitude of the observed extreme precipitation during Hurricane Harvey. Geophysical Research Letters, 44.

[3] Coles, S. 2001. An Introduction to Statistical Modeling of Extreme Values. Springer-Verlag London Limited.

Feature image from Vox.