How suitable is quantile mapping for post-processing GCM precipitation forecasts?

Contributed by QJ Wang, University of Melbourne*

Poster session during the HEPEX Workshop at SMHI in Norrköping, Sweden.

Back in September 2015, at the highly successful HEPEX Seasonal Hydrological Forecasting Workshop at SMHI in Norrköping, Sweden, I heard a number of presentations and saw posters on the use of quantile mapping for post-processing or downscaling GCM precipitation forecasts.

While quantile mapping is well known to be highly effective for bias correction, I was concerned that some of its limitations might not be apparent to everyone.

After discussions with Andy Wood and Maria-Helena Ramos, I left the workshop with the idea of doing a piece of work to demonstrate both the effectiveness and the limitations of the quantile mapping method.

Back in Melbourne, my colleagues at CSIRO, led by Tony Zhao and James Bennett, enthusiastically took on the research task. With input also from Andy and Helena, we recently published the results in the Journal of Climate. The paper is simply titled: How suitable is quantile mapping for post-processing GCM precipitation forecasts? Here is the abstract:

GCMs are used by many national weather services to produce seasonal outlooks of atmospheric and oceanic conditions and fluxes. Post-processing is often a necessary step before GCM forecasts can be applied in practice. Quantile mapping (QM) is rapidly becoming the method of choice by operational agencies to post-process raw GCM outputs. We investigate whether QM is appropriate for this task. Ensemble forecast post-processing methods should aim to i) correct bias, ii) ensure forecasts are reliable in ensemble spread, and iii) guarantee forecasts are at least as skillful as climatology, a property called ‘coherence’. In this study, we evaluate the effectiveness of QM in achieving these aims by applying it to precipitation forecasts from the POAMA model. We show that while QM is highly effective in correcting bias, it cannot ensure reliability in forecast ensemble spread, nor guarantee coherence. This is because QM ignores the correlation between raw ensemble forecasts and observations. When raw forecasts are not significantly positively correlated with observations, QM tends to produce negatively skillful forecasts. Even when there is significant positive correlation, QM cannot ensure reliability and coherence for post-processed forecasts. We conclude that QM is not a fully satisfactory method for post-processing forecasts where the issues of bias, reliability and coherence pre-exist. Alternative post-processing methods based on ensemble model output statistics (EMOS) are available that achieve not only unbiased but also reliable and coherent forecasts. We show this with one such alternative, the Bayesian Joint Probability modelling approach.

In brief, quantile mapping is shown, as previously known, to be highly effective for bias correction. However, if an ensemble spread reliability problem remains after bias correction, quantile mapping cannot fix it. Consider a limiting case: the ensemble members are all identical, so the ensemble has no spread, yet we know the forecast is not perfect. Applying quantile mapping will not introduce even an ounce of spread to the ensemble.
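To make this concrete, here is a minimal empirical quantile-mapping sketch in Python. It is an assumed illustration, not the implementation used in the paper: the function quantile_map and the training data are entirely hypothetical. Each raw value is looked up in the raw-forecast climatology and replaced by the observed value at the same quantile, so an ensemble that goes in with zero spread comes out with zero spread.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training data: 30 years of raw GCM forecasts and observations.
raw_train = rng.gamma(shape=2.0, scale=10.0, size=30)  # biased raw forecasts
obs_train = rng.gamma(shape=2.0, scale=20.0, size=30)  # observations

def quantile_map(values, raw_train, obs_train):
    """Map values through the raw-forecast CDF onto the observed quantiles."""
    # Empirical non-exceedance probability of each value in the raw climatology.
    probs = np.searchsorted(np.sort(raw_train), values) / len(raw_train)
    probs = np.clip(probs, 0.01, 0.99)  # stay off the extreme tails
    # Corresponding quantiles of the observed climatology.
    return np.quantile(obs_train, probs)

# Limiting case: an ensemble whose members are all identical (zero spread).
ensemble = np.full(10, 25.0)
mapped = quantile_map(ensemble, raw_train, obs_train)

print(ensemble.std(), mapped.std())  # both 0.0: QM adds no spread
```

Every member maps through the same quantile, so the mapped ensemble is just as degenerate as the raw one.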

When past forecasts are not correlated with observations in any way for a particular situation, we should not insist on using the erroneous forecasts, and should instead revert to climatology forecasts. Take this point further with another limiting case: past forecasts are found to be negatively correlated with observations for a particular situation. Applying quantile mapping cannot reverse or remove the negative correlation, because quantile mapping does not change the order of values.
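The same point can be shown in a few lines (again a hypothetical illustration with made-up data). Here quantile mapping is reduced to its rank-based essence: with equal sample sizes, each raw forecast takes the value of the observation at the same rank. Because the transform is monotonic, the ordering of the forecasts, and hence the sign of their correlation with observations, is untouched.

```python
import numpy as np

rng = np.random.default_rng(7)
obs = rng.gamma(shape=2.0, scale=15.0, size=30)
raw = 100.0 - obs + rng.normal(0.0, 5.0, size=30)  # negatively correlated with obs

# Rank-based quantile mapping: each raw forecast is replaced by the
# observation at the same rank in the training sample.
ranks = np.argsort(np.argsort(raw))
mapped = np.sort(obs)[ranks]

print(np.corrcoef(raw, obs)[0, 1])     # strongly negative
print(np.corrcoef(mapped, obs)[0, 1])  # still negative: the ordering is unchanged
```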

The good news is that there are alternatives to the quantile mapping method. In the paper, we demonstrated that the Bayesian joint probability (BJP) method was effective in correcting bias, making the forecast ensemble spread reliable, and steering the forecasts toward climatology when the raw forecasts had no underlying skill.
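As a loose illustration of the general EMOS idea mentioned in the abstract (this is not the BJP itself, and all data and names are made up), a simple regression-based calibration already shows the reversion-to-climatology behaviour: the fitted weight on the raw signal reflects its correlation with observations, so when there is no skill the weight collapses toward zero and the calibrated ensemble falls back to the climatological mean and spread.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
obs = rng.gamma(shape=2.0, scale=15.0, size=n)
raw_mean = rng.gamma(shape=2.0, scale=10.0, size=n)  # uncorrelated with obs: no skill

# Least-squares fit: obs ≈ a + b * raw_mean
b, a = np.polyfit(raw_mean, obs, 1)
resid_sd = np.std(obs - (a + b * raw_mean))

# Calibrated ensemble for a new raw forecast: centred on the regression
# prediction, with spread set by the residual error.
new_raw = 35.0
calibrated = a + b * new_raw + rng.normal(0.0, resid_sd, size=100)

print(b)                              # near zero: no usable signal in the raw forecast
print(calibrated.mean(), obs.mean())  # forecast mean ≈ climatological mean
print(calibrated.std(), obs.std())    # forecast spread ≈ climatological spread
```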

If you are interested in the topic, take a look at the paper here. The authors would love to hear your thoughts.

Finally, may I take this opportunity to let you know that I recently made a career change by joining the University of Melbourne as a professor of hydrological forecasting. Leaving the fantastic water forecasting team at CSIRO that I built and loved was one of the most difficult decisions I have had to make. I look forward to continuing to work with my colleagues at CSIRO and indeed the HEPEX community to advance the research and practice of ensemble hydrological forecasting. My new email contact is quan.wang@unimelb.edu.au.

* with inputs from Tony Zhao, James Bennett, Andrew Schepen, Andy Wood, David Robertson, and Maria-Helena Ramos, following several discussions over the past months.

2 comments

  1. This is indeed very interesting. We demonstrated that QM can provide consistent and significant climate change impact signals for water resources in alpine regions if long-term means and variability are corrected. Would be great to discuss this further.
    Best regards, David

    1. Thanks for your comment David. Our study is most relevant to forecasting applications – the findings don’t really apply to climate change applications. Forecast calibration methods (like the BJP) require observations to be synchronous with retrospective forecasts, which isn’t the case with climate projections. In a long-range climate change simulation, you don’t expect, e.g., January 2010 rainfall over a particular grid cell to look exactly like the observation for that month/location (because climate projections are typically initialised decades earlier, and then allowed to freely evolve). But for a rainfall forecast, we do expect January 2010 rainfall to predict the observation accurately, and we can use the correlation (or lack thereof) of forecasts with observations for calibration. What we do expect with climate change simulations is that, say, the 1960-1990 simulated climate matches the observed climate. If it does not, QM is very good at correcting biases, so for climate change applications it is a sensible choice – we discuss this a little in the paper. In forecasting we have more information with which to correct our outputs (correlation of forecasts with obs), and our argument is that we should use this information.
