HEPEX-SIP Topic: Post-processing (2/3)

Contributed by Maria-Helena Ramos, Nathalie Voisin and Jan Verkade

What can we find about post-processors for hydrological prediction in the literature? A small review to be completed by you!

In hydrologic uncertainty analysis, the Bayesian framework prevails for analytically deriving the joint distribution of forecasts and observations. Based on existing prior knowledge and likelihood functions, new data are used to update this prior knowledge and provide a conditional posterior distribution, which summarizes the uncertainty about the variable of interest (i.e., the predictive uncertainty).
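
In generic notation (ours, not that of any particular paper cited here), with h denoting the actual flow and s the model output, this Bayesian scheme reads:

    \phi(h \mid s) \;=\; \frac{f(s \mid h)\, g(h)}{\int f(s \mid h')\, g(h')\, \mathrm{d}h'}

where g is the prior density of the flow (e.g., derived from climatology or persistence), f is the likelihood linking the model output to the actual flow (estimated from past pairs of simulations and observations), and \phi is the posterior, or predictive, density that summarizes the hydrologic uncertainty.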

The first papers on uncertainty processors for hydrological forecasting proposed Bayesian models to quantify uncertainties in seasonal snowmelt runoff (Krzysztofowicz, 1985, 1999; Krzysztofowicz and Reese, 1991) and short-term river stage forecasts (Krzysztofowicz and Kelly, 2000; Krzysztofowicz and Herr, 2001). The Hydrologic uncertainty processor (HUP) developed in these studies was a component of the Bayesian forecasting system (BFS), a framework for producing probabilistic forecasts from a deterministic hydrological model (Krzysztofowicz, 1999). The HUP aggregated all sources of hydrologic (model) uncertainty and assumed a lag-1 autoregressive Markov process to model time dependence. Weather forecast (input) uncertainty was not considered in the HUP, but was treated separately in the BFS, which also had a component to integrate the input and model uncertainties into a predictive distribution (see the review in Krzysztofowicz, 2002).
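
As a rough sketch of the normal-linear structure behind the HUP (simplified here for illustration; the full formulation in Krzysztofowicz (1999) conditions on additional variables), the prior in the transformed Gaussian space follows a lag-1 autoregressive process and the likelihood is linear-Gaussian:

    H_n = c\, H_{n-1} + \Theta_n, \qquad \Theta_n \sim N(0,\, 1 - c^2)
    S_n = a\, H_n + b + \Xi_n, \qquad \Xi_n \sim N(0,\, \sigma^2)

Since both components are Gaussian, the posterior density of H_n given S_n and H_{n-1} is again Gaussian and available in closed form, which is what keeps the processor analytically tractable.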

Since then, several other developments and applications have been proposed to address the challenges of producing skillful and reliable hydrologic predictions in a variety of hydro-climatic conditions. Krzysztofowicz and Maranzano (2004) proposed a multivariate HUP to characterize the stochastic dependence in time of river stage predictions and produce probabilistic forecasts across lead times. In the Model Conditional Processor (MCP) method, the inclusion of marginal distributions of parameter uncertainty in the Bayesian analysis of uncertainty was investigated (Todini, 2008). The MCP is proposed as an extension of the HUP post-processor, allowing users to better explore various deterministic models in a multivariate Normal framework. A multi-temporal version of the MCP method was recently proposed by Coccia (2011) to model the predictive uncertainty across lead times. A multivariate Normal distribution analysis is also used in the General linear model post-processor (GLMPP) procedure (Zhao et al., 2011) to remove biases while preserving temporal scale dependency relationships in streamflow hydrographs.
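
For readers who prefer code, here is a toy sketch (synthetic data; not the code of any of the cited papers) of the core operation shared by these multivariate Normal processors: in the transformed space and with a single model, the predictive distribution reduces to the conditional distribution of the observation given the simulation.

    import numpy as np

    # Synthetic pairs in the transformed (Gaussian) space; in practice eta_obs and
    # eta_sim would come from applying a Normal quantile transform to observed and
    # simulated flows over a calibration period.
    rng = np.random.default_rng(42)
    eta_obs = rng.standard_normal(1000)
    eta_sim = 0.8 * eta_obs + 0.6 * rng.standard_normal(1000)  # correlated "simulation"

    rho = np.corrcoef(eta_obs, eta_sim)[0, 1]

    def predictive(eta_sim_new, rho):
        """Mean and st. dev. of obs given sim, for standard-Normal margins with correlation rho."""
        return rho * eta_sim_new, np.sqrt(1.0 - rho**2)

    mean, std = predictive(1.5, rho)
    print(f"predictive mean = {mean:.2f}, predictive st. dev. = {std:.2f}")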

The HUP methodology was also extended to real-time operational systems for water level and medium-range ensemble flow forecasting in the River Rhine (Reggiani and Weerts, 2008; Reggiani et al., 2009). The authors discuss the impact of the prior distributions and likelihood functions adopted in Bayesian processors, calling attention to the benefits of using all information available at the time of forecasting (including upstream observations) to better estimate the predictive uncertainty.

In many postprocessors, it is often the simulated river flows (or the non-corrected forecasts) that are used as predictors, but other explanatory variables may also be included, such as preceding errors, antecedent rainfall accumulations, storm types or weather classes (e.g., England et al., 2010; Montanari and Grossi, 2008). A multi-predictor approach is also investigated within the MCP model (Coccia and Todini, 2011), where forecasts from three models are combined to reduce the predictive uncertainty and improve the skill of probabilistic forecasts of low and high flows.
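
As a schematic illustration of such a multi-predictor setting (toy code with synthetic data, not any of the cited implementations), a predictive mean and spread can be obtained from a linear regression on several transformed predictors:

    import numpy as np

    # Toy calibration data in the transformed space: raw forecast, previous error and
    # an antecedent-rainfall index as candidate predictors of the observation.
    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([
        rng.standard_normal(n),   # transformed raw forecast
        rng.standard_normal(n),   # transformed previous error
        rng.standard_normal(n),   # transformed antecedent rainfall
        np.ones(n),               # intercept
    ])
    y = 0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * X[:, 2] + 0.3 * rng.standard_normal(n)

    coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    sigma = np.sqrt(res[0] / (n - X.shape[1]))   # residual st. dev. = predictive spread

    x_new = np.array([1.2, -0.5, 0.8, 1.0])
    print("predictive mean:", x_new @ coef, "predictive spread:", sigma)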

The combination of forecasts from different deterministic models (a ‘poor man’s ensemble’) is also the basis of the Bayesian model averaging (BMA) method (Raftery et al., 2005), proposed to correct underdispersive weather forecasts, and of the Bayesian merging approach (Luo and Wood, 2008), proposed to explore the use of several climate models in seasonal hydrologic prediction.
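
In its simplest Gaussian form (illustrative numbers only; in practice the weights and spreads are estimated on a training period, e.g. by expectation-maximization), the BMA predictive density is a weighted mixture of densities centred on the bias-corrected individual forecasts:

    import numpy as np
    from scipy.stats import norm

    # Illustrative BMA components for three competing models.
    weights = np.array([0.5, 0.3, 0.2])        # model weights (sum to 1)
    means = np.array([102.0, 95.0, 110.0])     # bias-corrected forecasts (m3/s)
    sigmas = np.array([8.0, 10.0, 12.0])       # member spreads (m3/s)

    def bma_pdf(y):
        """BMA predictive density at y: weighted sum of Gaussian member kernels."""
        return np.sum(weights * norm.pdf(y, loc=means, scale=sigmas))

    print(bma_pdf(100.0))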

With the growing use of ensemble forecasts, the issue of bias-correcting ensemble prediction systems was also addressed by the hydrometeorological community. The Bayesian Ensemble Uncertainty Processor (BEUP) (Reggiani et al., 2009) generalizes the HUP approach to the bias correction of medium-range ensemble streamflow forecasts. In meteorology, the Best member method (BMM) was developed to increase the spread of ensemble weather forecasts by ‘dressing’ each individual forecast with an error distribution derived from a database of past errors made by the ‘best’ member of the ensemble (Roulston and Smith, 2003). Subsequent studies proposed modifications to the method to include the correction of overdispersion bias and to allow ensemble members to be weighted differently (Wang and Bishop, 2005; Fortin et al., 2006). It was claimed that, while Bayesian processors offered an appropriate theoretical framework for predictive uncertainty, their implementation in operational contexts was not always an easy task. The need for simpler, but efficient, alternatives was emphasized.
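
A minimal sketch of the dressing idea (schematic only, not the exact BMM procedure): each raw ensemble member is perturbed with errors resampled from an archive of past errors of the ‘best’ member.

    import numpy as np

    rng = np.random.default_rng(0)

    # Archive of past errors of the ensemble member closest to the observation
    # (the 'best' member); here simply drawn at random for illustration.
    best_member_errors = rng.normal(0.0, 5.0, size=300)

    raw_ensemble = np.array([80.0, 85.0, 90.0, 95.0])   # raw member forecasts (m3/s)
    n_dress = 20                                        # dressed replicates per member

    dressed = raw_ensemble[:, None] + rng.choice(best_member_errors,
                                                 size=(raw_ensemble.size, n_dress))
    print("raw spread:", raw_ensemble.std(), "dressed spread:", dressed.std())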

Certainly due to their simplicity and ease of application, approaches based on empirical ‘dressing’ or quantile adjustments are particularly popular in single-valued or ensemble-based operational streamflow forecasting systems. They are usually seen as a simplified approach to error aggregation when modelling hydrologic predictive uncertainty. To remove biases, empirical errors or probability distributions of observed and simulated (or forecast) flows are considered (e.g., multiplicative bias dressing, quantile-based mapping, quantile regression). Applications range from short- to long-term streamflow forecasting and usually need a sufficiently large sample of training data for calibration (see, for instance, Hashino et al., 2007; Seo et al., 2006; Wood and Schaake, 2008; Olsson and Lindström, 2008; Kang et al., 2010; Weerts et al., 2011; Pagano et al., 2012; Boucher et al., 2012; Zalachori et al., 2012).
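
A minimal quantile-mapping sketch (one of many possible variants of these empirical adjustments, shown here with synthetic training data): the raw forecast is mapped from the empirical distribution of past simulations onto the empirical distribution of past observations.

    import numpy as np

    # Training data: simulated (or raw forecast) flows and concurrent observations.
    rng = np.random.default_rng(7)
    obs = rng.gamma(shape=2.0, scale=50.0, size=2000)
    sim = 0.8 * obs + rng.normal(0.0, 10.0, size=2000)   # biased, noisy simulations

    quantiles = np.linspace(0.01, 0.99, 99)
    sim_q = np.quantile(sim, quantiles)
    obs_q = np.quantile(obs, quantiles)

    def quantile_map(x):
        """Map a raw forecast onto the observed distribution via quantile matching."""
        return np.interp(x, sim_q, obs_q)

    print(quantile_map(120.0))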

The majority of the authors previously mentioned, like many others in the literature, use the meta-Gaussian model when modelling dependencies between errors and simulated flows (see applications in hydrology in Krzysztofowicz and Kelly, 2000; Montanari and Brath, 2004; Hostache et al., 2011). Basically, the dependencies between errors and predictors are modelled in a transformed space through a Normal quantile transform (NQT), and a back-transformation is used to convert the predictive distribution to the space of the original variables. The widespread use of the meta-Gaussian model is mainly due to the fact that it is much easier to theoretically model multivariate distributions when the variables are Gaussian.
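
The NQT itself can be sketched in a few lines (a simplified version based on Weibull plotting positions; operational implementations differ in the plotting position formula and, above all, in how they handle the tails):

    import numpy as np
    from scipy.stats import norm, rankdata

    def nqt(x):
        """Normal quantile transform via empirical (Weibull) plotting positions."""
        p = rankdata(x) / (len(x) + 1.0)   # empirical non-exceedance probabilities
        return norm.ppf(p)

    def inverse_nqt(z, x_ref):
        """Back-transform standard-Normal values onto the empirical distribution of x_ref."""
        return np.quantile(x_ref, norm.cdf(z))

    flows = np.random.default_rng(3).gamma(2.0, 50.0, size=1000)
    z = nqt(flows)
    print(z.mean(), z.std())   # roughly 0 and 1 after transformation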

However, limitations on the application of the NQT have been widely discussed, namely in terms of the extrapolation to the tails of the distribution (i.e., the case of extreme events) and of deviations from optimality in the estimates due to the back-transformation (see the technical note by Bogner et al., 2012, on NQT application to flood forecasting and the references therein). Alternative techniques and non-parametric post-processors have been proposed, based on Bayesian optimal linear estimation of indicator variables (Brown and Seo, 2010), copula functions (Madadgar et al., 2012), wavelet-based transformations (Bogner and Pappenberger, 2011), kernel dressing (Boucher et al., 2012), and as an alternative within the BMA method (Rings et al., 2012).

References: for detailed descriptions and more in-depth discussions, the reader is invited to consult the references (and references therein).


Part 1: What is hydrologic post-processing? (https://hepex.org.au/hepex-sip-topic-post-processing-13/)

Part 3: Challenges and research needs (https://hepex.org.au/hepex-sip-topic-post-processing-33/)
