HEPEX-SIP Topic: Post-processing (3/3)

Contributed by Nathalie Voisin, Jan Verkade and Maria-Helena Ramos

So, what are the current challenges and research needs in post-processing? 

At the HEPEX meetings and workshops, several challenges related to the use of statistical post-processors in hydrological ensemble prediction were identified:

  • How to select suitable (or the best) predictors to make efficient use of prior knowledge and of the information available at the moment of forecasting?
  • ‘Stationarity is dead; whither postprocessing?’ How can existing postprocessors adapt their modeling approach to non-stationarities in hydro-climatic data and hydrological processes?
  • How can postprocessors contribute to improving the probabilistic forecasting of hydrological extremes? How to calibrate them and assess their performance for extreme events?
  • How can postprocessing techniques be combined with data assimilation and preprocessing without compromising each other's benefits? How to guide users towards the best bias-correction strategies to implement in operational systems?
  • How to generate plausible bias-corrected ensemble traces to be used in water resources applications? How to evaluate them with a focus on users and their decision-making contexts, i.e., how to evaluate bias-correction methods of comparable skill according to their capacity to provide useful information for decision-making?
  • How to prevent postprocessors from becoming black boxes in operational forecasting systems? How to involve operational forecasters in their design and validation?
  • How to evaluate existing approaches on the basis of objective and comparable calibration/validation settings? How to go beyond validation under particular settings or catchments? How to promote intercomparison studies for multiple applications (e.g., short- to medium-range forecasting, monthly reservoir operations) and using multiple verification metrics to better assess postprocessors’ strengths and limitations?
  • How can users know which approach might better suit their specific applications and decision-making problems? How to find a balance among different crucial issues in the implementation of postprocessors in operational hydrologic ensemble forecast systems (complexity of the model, parsimony in model parameterization across lead times and catchments, data requirements, good performance)?

 What else do you think should be considered?


Part 1: What is hydrologic post processing? (https://hepex.org.au/hepex-sip-topic-post-processing-13/)

Part 2: Literature review on post processing (https://hepex.org.au/hepex-sip-topic-post-processing-23/)

4 comments

  1. I very much liked the challenges, and in particular your point on how to involve forecasters to reduce the risk of the post-processor becoming a black box. In terms of method, I think this is unavoidable, as there are too many components in a forecasting chain, so expert knowledge will always be limited – but this should not be the case in terms of the properties of the post-processor.
    I believe that one of the significant challenges for operational forecasting is to have a post-processor which is robust (can deal with many unexpected situations) and ‘simple’ enough that it can be fixed quickly when it breaks. This fixing often has to be done within a restricted time in order to meet service levels, meaning one has no more than a few hours (rather than in a scientific setting, where one can spend a PhD or a lifetime on it). In our forecasting system (the European Flood Awareness System), as well as in many others, the ultimate fix is to switch the post-processor off – but I find that a dissatisfying solution.

  2. In the discussion of the seemingly myriad possibilities for post-processing, one issue is important to dwell on — the compatibility of post-processing in general, and of specific techniques, with the overall forecasting approach it seeks to enhance. I would guess that there are few catchment-scale hydrologic forecasting systems in operation today that would truly support more than simple post-processing techniques (e.g., a damped arithmetic blending of the last observed difference between simulated and observed flows; a sketch of such a baseline follows the reply below). Existing systems typically cannot adequately support the training requirements for post-processing, for various reasons. Systems may contain too many ad-hoc adjustments (such as in the current US NWS paradigm) to present an accurate characterization of expected forecast or simulation errors. Objective systems may nonetheless lack consistency between real-time and training-period simulation approaches, due, e.g., to long latency in reporting from the observing network. Even objective systems with consistent forcing approaches may not have sufficient retrospective performance (simulation or hindcast) data to train a complex, parameter-intensive approach — and that may be the choice of the system designer, who does not wish to allocate effort toward maintaining such archives. In any case, the implications of attempting post-processing for the forecasting system and paradigm as a whole deserve a sober discussion — the better to allow realistic decision-making over development efforts in this area.

    1. Hi,

      I agree with your statement. There is a myriad of post-processors, but also pre-processors and data assimilation approaches. For each of them, and for multiple combinations, system designers are expected to know:
      – what errors they can correct – mean and/or probabilistic?
      – what the drawbacks are, e.g. assumptions, type of end products (pdf vs. traces, etc.)
      – best application: floods, medium range, seasonal, type of events
      – best/required implementation: minimum retrospective period for training, whether they work best with a lumped or a distributed model, how they will interact with pre-processing and/or data assimilation, quality and type of data for training

      This also explains why, in real time, you cannot switch from one combination of processors to another without thorough understanding and training (see Florian’s comment).

      The SIP presently includes multiple sections on the different processors, and we could perhaps add another section on compatibility and feasibility that i) summarizes the overall need for data to be generated in order to improve the application of processors and improve forecasts, and ii) presents the compatibility of the various approaches with each other and with current or future systems (implementation). Such a section could help us define future research and gain support for obtaining the appropriate data.
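
Picking up the “simple post-processing” baseline mentioned in comment 2 above (a damped blending of the last observed difference between simulated and observed flows), the sketch below shows, in Python, what such a correction could look like. The function name, the 0.9 damping factor and the example numbers are illustrative assumptions rather than a description of any particular operational implementation.

```python
import numpy as np

def damped_error_correction(raw_forecast, last_obs, last_sim, decay=0.9):
    """Blend the last observed (obs - sim) error into a raw streamflow forecast,
    shrinking the correction geometrically with lead time.

    raw_forecast : array of shape (n_leads,) or (n_leads, n_members)
    last_obs, last_sim : most recent observed and simulated flow
    decay : per-lead-time damping factor (0.9 is an illustrative choice)
    """
    raw_forecast = np.asarray(raw_forecast, dtype=float)
    error = last_obs - last_sim                       # last known simulation error
    weights = decay ** np.arange(1, raw_forecast.shape[0] + 1)
    if raw_forecast.ndim == 2:                        # broadcast over ensemble members
        weights = weights[:, None]
    return raw_forecast + weights * error

# Example: the model currently over-predicts by 12 m3/s; the -12 m3/s correction
# is applied to a 5-day forecast and decays towards zero at longer lead times.
corrected = damped_error_correction(
    raw_forecast=np.array([110.0, 120.0, 135.0, 150.0, 140.0]),
    last_obs=95.0, last_sim=107.0)
print(corrected)
```

Even this baseline already assumes a consistent, timely archive of paired simulations and observations, which is exactly the kind of system requirement the comment above points to.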

  3. Hi all,

    Certainly, the proliferation of techniques for bias-correcting meteorological and hydrologic forecasts can be a distraction operationally, and this applies to all aspects of ensemble forecasting (and to forecasting more generally ;-)).

    Putting aside the choice of technique, I think there are some more fundamental questions about how to use hydrologic post-processing operationally. For example, under what circumstances does it make sense to separate the meteorological and hydrologic uncertainties and model them individually (“pre-” and “post-” processing), versus lumping them together? This will determine the design of a hydrologic post-processor and the choice of predictors.

    When it is justified to treat the hydrologic uncertainties separately, is it reasonable to assume that there are no residual biases from the meteorological post-processing (or “pre-processing”)? If there are residual meteorological biases, could these be further reduced through hydrologic post-processing by using a hydrologic predictor that comprises the forcing uncertainties (i.e. the hydrologic forecasts rather than the simulations, assuming there was an archive available)? More generally, what is an appropriate choice of predictors in different circumstances? Is it reasonable to use prior observations as predictors if these are accounted for elsewhere in the chain (e.g. through DA)?

    I think there’s a danger in hydrologic post-processing, and in ensemble forecasting more generally, that we become distracted by the proliferation of techniques and ignore the bigger picture. To continue Andy’s example from the NWS, there are currently two parallel systems in experimental use: one “full blown” ensemble forecasting system, developed centrally, that incorporates a mix of pre- and post-processing, and one much simpler system, pioneered by the forecasters themselves, that uses neither pre- nor post-processing and simply propagates the met. forecasts from SREF and other operational models through the hydrologic models. There is a strong argument for a minimalist approach to hydrologic post-processing, operationally. The “do nothing” option is probably a step too far, except perhaps in some headwater basins where the meteorological uncertainties are dominant, but the “do everything” option (pre- and post-processing) also has some strong disadvantages for operational use, where the forecasters must understand the system to take ownership of it.

    I’d like to see some deeper investigation of the level of complexity (or simplicity) of the source decomposition required in hydrologic ensemble forecasting, from which more suitable approaches to hydrologic post-processing may flow. In other words, it’s very important not to separate hydrologic post-processing from the rest of the chain. This sounds obvious, but it’s easily forgotten. I’d also like to better understand the salient features of hydrologic forecasts that need to be bias-corrected in an operational setting. For example, what is gained in different circumstances from a conditional correction, conditional upon certain predictors, versus a simple unconditional (climatological) correction (a sketch contrasting the two follows this comment)? To justify some of the more complex techniques, one needs an understanding of their value-for-money in an operational setting.

    Finally, while I don’t really want to talk about the choice of technique for conditional corrections (I think this is a lower-order consideration), it’s worth noting that different practical applications require different types of forecasts, with different demands on post-processing. For example, bias-correcting discrete probability forecasts (e.g. exceedance of some threshold) is a rather different prospect from producing bias-corrected ensemble streamflow forecasts for further uncertainty propagation (e.g. in a water quality model or DSS).

    Cheers,

    James
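
As a rough illustration of the conditional versus unconditional contrast raised in the comment above, the sketch below compares a climatological mean-bias correction with a correction that depends on the forecast magnitude (a regression on the simulated flow, standing in for richer predictors). The synthetic data, variable names and regression form are illustrative assumptions only, not a recommended method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic training archive (illustrative): paired simulated and observed flows
sim_train = rng.gamma(shape=2.0, scale=50.0, size=1000)
obs_train = 0.8 * sim_train + 10.0 + rng.normal(0.0, 15.0, size=1000)

# Unconditional (climatological) correction: remove the mean bias everywhere
mean_bias = np.mean(sim_train - obs_train)

def unconditional_correct(sim):
    return np.asarray(sim) - mean_bias

# Conditional correction: a simple linear regression of observed on simulated flow,
# so the size of the adjustment changes with the magnitude of the forecast
slope, intercept = np.polyfit(sim_train, obs_train, deg=1)

def conditional_correct(sim):
    return slope * np.asarray(sim) + intercept

new_sim = np.array([30.0, 100.0, 300.0])
print(unconditional_correct(new_sim))  # the same shift at low and high flows
print(conditional_correct(new_sim))    # correction varies with flow magnitude
```

Comparing the two on an independent verification period, with the metrics that matter to the application, is one way to start answering the “value-for-money” question raised above.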
