Challenges of Operational River Forecasting

Contributed by Tom Pagano, a HEPEX guest columnist for 2014

The opinions expressed here are solely the author’s and do not express the views or opinions of his employer or the Australian Government.

I, with ten international colleagues, recently published a paper in the Journal of Hydrometeorology titled “Challenges of Operational River Forecasting” (email me for a copy). Drawing on experiences from dozens of countries, we outlined some of the issues facing forecasters today. This blog post lists some of the main themes of that paper and the research opportunities associated with each.

What do you think? Do you agree or disagree with these observations? Have we missed any themes or research opportunities that are important?

Share your thoughts and perspectives in the comments below.

Challenge 1: Making the most of the data

a) Hydrological data is sensitive and is not freely distributed

b) Data collection is fragmented across many agencies

c) Quality control is a time-consuming manual process

d) Automated data assimilation is underutilized

e) In situ data networks are deteriorating

Both data-rich and data-poor countries struggle with retrieving, quality controlling, infilling, formatting, archiving and redistributing data. Many agencies put significant resources into data management because it is a critically important but extremely difficult task.

Research opportunities: How can we develop comprehensive and robust automated quality control algorithms that synthesize data from different sources to identify outliers and infill missing values? For that matter, how can objective and automated data assimilation routines take advantage of the subjective expertise and situational awareness of the forecaster? How can forecasters make quantitative use of new sources of data whose statistical properties and biases are unknown because of the lack of a long historical record? How can we make optimal use of sparse station networks, uncertain remotely sensed retrievals (radar and satellite) and numerical weather prediction products to provide single-value or probabilistic meteorological inputs to operational hydrologic models? And, critical to the design of forecasting systems and workflows, how do we define the point at which quality control systems are sufficiently skilful for inclusion as an automated component of operational streamflow forecasting?
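As a concrete illustration of the first of these questions, here is a minimal sketch of an automated quality-control step for a discharge series: a robust rolling outlier test followed by infilling of short gaps only. The window length, threshold and gap limit are illustrative assumptions, not recommendations from the paper.

```python
# Minimal QC sketch (illustrative only): flag outliers in an hourly
# discharge series with a rolling median/MAD test, then infill short gaps.
import numpy as np
import pandas as pd

def qc_streamflow(q: pd.Series, window: int = 24, thresh: float = 5.0,
                  max_gap: int = 3) -> pd.Series:
    """q: hourly discharge indexed by time; returns a cleaned copy."""
    med = q.rolling(window, center=True, min_periods=1).median()
    mad = (q - med).abs().rolling(window, center=True, min_periods=1).median()
    # Robust z-score; 1.4826 scales the MAD to a standard-deviation equivalent.
    z = (q - med).abs() / (1.4826 * mad.replace(0, np.nan))
    flagged = q.mask(z > thresh)          # suspect values become missing
    # Infill short gaps only; longer gaps are left for the forecaster.
    return flagged.interpolate(limit=max_gap)
```

Even a routine this simple raises the design question posed above: at what skill level does it graduate from decision support to an automated component of the forecasting chain?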

Challenge 2: Getting the numbers right (modelling and forecasting)

a) Rainfall-runoff models are simple and decades old

b) Model development has not been significant in recent decades

c) Skill of river forecasts depends strongly on adequate precipitation forecasts

d) Many important processes are not modelled or are un-modelable

All models are simplifications of real systems, but the details that are included depend on the modeller’s purpose. The model in the top picture is designed to mimic the physical features of the original. The model in the bottom picture is designed to fly. But which is more useful? Conceptual hydrologic models are popular among operational forecasters, but are these relatively simple models necessarily worse than their alternatives? (Thanks to Vazken Andréassian for the airplane analogy.)

Research opportunities: How can the performance of hydrologic forecasting models be quantified so as to support the production of forecasts that have low bias and are probabilistically reliable? How can we increase the agility of process-based models (e.g., find an intermediate complexity that facilitates parameter calibration where needed), and improve the relevance of hydrologic models for conditions outside the calibration period? How serious are numerical errors and how can we take advantage of well-known computational algorithms to ensure numerical robustness of popular legacy models? How can we best transition from calibration methodologies based on hydrograph mimicry alone (which can give the right answers for the wrong reasons) to parameter estimation methodologies that improve model representation of hydrologic processes? How can hydrologists use Graphical Forecast Editor-style (expert derived, gridded) weather forecasts to force hydrologic models? Can the land-surface component of Numerical Weather Prediction models make hydrologic predictions that are competitive with traditional rainfall-runoff models? How can unknown human interferences in the hydrologic cycle (e.g. farming, urbanization, deforestation) be quantified and predicted?
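To make the “relatively simple models” point tangible, here is a toy single-bucket rainfall-runoff model of the kind conceptual schemes build on. It is a sketch, not any agency’s operational model, and the parameter values are invented; in practice they would come from calibration.

```python
# Toy single-bucket conceptual rainfall-runoff model (illustrative only).
# Storage fills with rain, loses evaporation, spills when full, and drains
# slowly as baseflow. Parameters are made up, not calibrated.
def bucket_model(rain, pet, capacity=150.0, k=0.05, s0=50.0):
    """rain, pet: daily depths (mm); returns daily runoff depths (mm)."""
    s, runoff = s0, []
    for p, e in zip(rain, pet):
        s = max(s + p - e, 0.0)          # wet up, then evaporate
        spill = max(s - capacity, 0.0)   # saturation-excess overflow
        s -= spill
        q = spill + k * s                # quickflow plus slow drainage
        s -= k * s
        runoff.append(q)
    return runoff
```

A handful of parameters is exactly what makes such models easy to calibrate against an observed hydrograph – and exactly what the “right answers for the wrong reasons” concern is about.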

Challenge 3: Turning forecasts into effective warnings

a) In less-developed countries, warning distribution is slow and difficult

b) Relevant warnings require local context and knowledge of community vulnerability

c) Users have diverse needs and technical sophistication

d) Users are unfamiliar with probabilistic and ensemble forecasts

The lack of automated measurements, telemetry, computing resources and communications infrastructure often limits the value of quantitative river forecasts – they would not arrive in time for users to take meaningful action. Instead, communities rely on early warning sirens for floods that are already occurring upstream, but even these approaches are fraught with technical challenges (e.g. how to power sirens when the electricity fails?). Above, a woman in Meghauli, Nepal tests a hand-cranked flood siren. (Credit: Practical Action)

Research opportunities: What are the most effective methods for the communication of probabilistic and ensemble forecasts? How does the effectiveness depend on the audience? Are there efficient and scalable methods for the collection of local flood intelligence (i.e. metadata about structures and communities at risk)? Can point forecasts (e.g. at river gauges) be effectively and efficiently translated into distributed impacts?
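On the communication questions, the most common first step is to collapse an ensemble forecast at a gauge into a probability of exceeding a local flood threshold. A small sketch (the member values and threshold below are invented):

```python
# Fraction of ensemble members exceeding a gauge-specific flood level.
# Member peak flows and the threshold are invented for illustration.
import numpy as np

ensemble = np.array([820., 910., 1040., 1180., 1220., 1350., 1490., 1600.])
flood_threshold = 1200.0  # flood level (m3/s) from local gauge metadata

p_exceed = float(np.mean(ensemble > flood_threshold))
print(f"Chance of exceeding flood level: {p_exceed:.0%}")  # -> 50%
```

Whether a “50% chance of exceeding flood level” is more actionable than a single-value forecast is, of course, precisely the open question about audiences raised above.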

Challenge 4: Administering an operational service (institutional factors)

a) Forecasters are reluctant to take risks for fear of liability

b) Floods can be controversial because rivers are managed by people

c) Less-developed countries face brain drain

d) There is a lack of standards in training hydrologists

e) With increasing automation, the role of the human forecaster is evolving

Research opportunities: How important is it for operational forecasters to be modellers (and/or hydrologists), and to what extent? Which forecasting tasks can/should be automated? How can this automation be designed to create synergies between forecasters and machines? What can researchers contribute to the training programs of forecasters? [And finally, as discussed in a recent HEPEX post:] How can scientists field test experimental techniques under the supervision and on the terms of operational agencies, yet avoid the potential liability associated with forecasts that affect lives and property?

Make your voice heard in the comments below.

6 comments

  1. A very interesting article, in particular for an “outsider” like me. The only thing I reacted to was the repeated mention of communicating “ensembles and probabilities”.

    I have said this before and will say it again: the ensemble technique is just a technique – perhaps the best, but still only one among many others. The same goes for probabilities. One of the slides I will show in a few days at a workshop in Madrid on the communication of probabilities lists seven ways to communicate uncertainty information without using probabilities.

    The ensemble technique and probabilities are essential tools in our trade, but we cannot expect that efficient reception of our warnings depends on everybody understanding them.

  2. As a long-time hydrologist-forecaster I am quite sceptical about a few items of the post.
    Firstly, automatic data quality control algorithms have very limited potential. My experience from the 2002 flood: the majority of data was missing because stations were flooded or disconnected from data transfer. At that time hydrologists needed to work with very basic hydrological methods and use their experience, know-how and creativity to face the problem. The specific issue was the Prague water gauge. The rating curve there was too short for a 500-year flood, and the operational extrapolation was wrong in the first attempts. Nevertheless, at the time of the peak I made an estimate of maximum discharge of 5 250 cumecs (without any information about flow measurements, only using hydrological background and the stage hydrograph). Later the official value was evaluated at 5 160 cumecs. I cannot imagine an automatic procedure doing the same during the crisis. BTW, we tried to use rating curves and travel times from hydraulic models calibrated for the Vltava and Elbe rivers at that time. But both proved wrong; luckily we were able to discard them before using them.
    The situation has since changed, but in the 2013 flood we still needed the hydrologist’s experience because of data imprecision. Keep in mind that a rating curve is never perfect.
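    To illustrate the rating-curve problem: the standard form is a power law, Q = a(h − h0)^b, fitted to measured gaugings, and the trouble starts when you extrapolate far beyond the highest measurement. A rough sketch with made-up gaugings and an assumed h0:

    ```python
    # Power-law rating curve Q = a*(h - h0)**b fitted in log space.
    # Gaugings and h0 are made up; the point is that a 500-year flood
    # stage lies far above anything the curve was ever fitted to.
    import numpy as np

    h0 = 0.3                                          # stage of zero flow (m)
    stages = np.array([0.8, 1.2, 1.9, 2.6, 3.4])      # measured stages (m)
    flows = np.array([35., 110., 340., 720., 1400.])  # gauged flows (m3/s)

    b, log_a = np.polyfit(np.log(stages - h0), np.log(flows), 1)
    a = np.exp(log_a)

    def rating(h):
        return a * (h - h0) ** b

    print(rating(3.4))  # inside the gauged range: trustworthy
    print(rating(7.5))  # far above it: pure extrapolation, as in 2002
    ```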
    The second comment is about the role of the forecaster in model operation – I think the hydrologist is the most important parameter of your model. The perfect model is utopia, and as long as we operate models we have to reflect their imperfection and the possibility of failure (it always happens when least desirable). For this there needs to be a hydrologist who operates and controls the model. To do this he needs to understand the model.
    To conclude, I do not like the idea of a fully automatic system marginalizing the role of the hydrologist. This could leave us without a forecast from the model (and with an inexperienced forecaster) during the flood. Additionally, I think local and regional knowledge is a key issue in flood forecasting – how my basin typically behaves and why (and how my model typically behaves and why) is what makes the difference, and the reason for keeping the following way of working: the model computes the data and the forecaster makes the forecast.

    1. Jan, great comments. The role of human forecasters is worthy of its own discussion (indeed I think Anders may have a future column on this) and you make many good points.

      When a system is automated, the person may have a diminished role as an operator, but gains a role as system designer and interpreter. Clearly in the examples you provide the existing algorithms were insufficient for the situation at hand. But having gone through the experience, could you design a procedure for the next time such a thing happens? Or was the situation so singular and unique that the lessons weren’t reusable?

      Proponents of automation also suggest that the hydrologist should be elevated from model operator to model overseer/interpreter. Rather than doing routine busy-work, the hydrologist should focus on the “big picture”, extreme events, interpretation, and interfacing with customers. In poorly designed automation, however, the person may have trouble regaining control of the process when the automation fails. Also, if the hydrologist is separated from the hydrology, s/he may not know when to question the credibility of the model.

      Take the analogy of the GPS suggesting driving directions… In most situations the GPS comes up with the answer you would have come up with yourself, albeit a bit more quickly. In a few situations the GPS may suggest a better alternative that you wouldn’t have come up with on your own (e.g. it is better at calculating traffic load, or knows about new roads). But what happens when the GPS runs out of batteries – can you still read a map? Or when it suggests bad directions – do you know enough about the routes to doubt it? For that matter, do you have the confidence to doubt the GPS, or are you inclined to think “it must know something I don’t”? In the distant future with driverless cars, will you allow (or be unable to prevent) the car from driving you into the lake?

      1. In the meteorological community I advocate continued progress with NWP and the continued presence of human forecasters. But in the same way that NWP modellers cannot continue to justify their existence with impressive-looking, over-detailed computer output, forecasters cannot continue to justify their existence by referring to some elusive and ill-defined “experience” or “intuition”.

        Forecasters do indeed provide “added forecast value”, or else private companies would not have employed them. But we must find out what this “added value” consists of so it can be strengthened. Defining forecasters as “intuitive statisticians” aims at finding out what constitutes good and bad “experience” and “intuition”. Improvement is achieved by weeding out common “urban forecast legends” and by education in which “experiences” have been analysed and systematized.

      2. Thomas, I like the GPS analogy.

        To your question: there are situations you can learn from, and from them design procedures and algorithms for correcting, or at least identifying, the error. However, those I described in my previous comment were (in 2002) absolutely unique. And my concerns are mostly about new, unexpected situations in the future.

        I have been thinking about the analogy and the differences between NWP and hydrological models. I do not remember an NWP run failing because of a lack of input data in my 15 years of practice. Missing values (a vertical atmospheric profile, a ground observation) are not that critical because of the available interpolation techniques (there will be an impact on outputs, but a limited one).
        Concerning the hydrological model, the same applies to the meteorological inputs (missing raingauge data adds to the flow forecast error, but you can still interpolate from surrounding gauges and mostly get reasonable accuracy); the issue is different when it comes to discharge. If you are missing observations from a water gauge (or have wrong data due to an imprecise rating curve), you cannot interpolate from other stations…
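        For example, a simple inverse-distance weighting of neighbouring raingauges (with made-up distances and observations) can fill a missing rainfall value; there is no analogous trick for a failed water gauge:

        ```python
        # Inverse-distance weighting to infill one missing raingauge value.
        # Distances and observations are invented for illustration.
        import numpy as np

        dists = np.array([4.0, 9.0, 15.0])  # km to neighbouring gauges
        obs = np.array([12.0, 8.0, 20.0])   # their rainfall totals (mm)

        w = 1.0 / dists**2                  # closer gauges weigh more
        estimate = float(np.sum(w * obs) / np.sum(w))
        print(f"Infilled rainfall: {estimate:.1f} mm")
        ```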

        So I agree that an automatic procedure can be used to interpolate a few missing values in the hydrograph, or to identify doubtful input data, but it can hardly be used for automatic correction. That, from my point of view, is the role of the forecaster.
        In addition, I think the forecaster has to understand hydrology as well as the model’s structure and functions; that is why too much automation may open a gap between the model and the forecaster, don’t you think?

        1. You don’t remember an NWP run failing because of a lack of input data in your 15 years of practice? Missing or erroneously interpreted observations are one of the main sources of forecast failures in NWP. And, by the way, the whole rationale behind the ensemble system is to simulate the effects of these missing or erroneous data.
