Challenges of Operational River Forecasting
Contributed by Tom Pagano, a HEPEX guest columnist for 2014
The opinions expressed here are solely the author’s and do not express the views or opinions of his employer or the Australian Government.
Together with ten international colleagues, I recently published a paper in the Journal of Hydrometeorology titled “Challenges of Operational River Forecasting” (email me for a copy). Drawing on experiences from dozens of countries, we outlined some of the issues facing forecasters today. This blog post lists some of the main themes of that paper and the research opportunities associated with each.
What do you think? Do you agree or disagree with these observations? Have we missed any themes or research opportunities that are important?
Share your thoughts and perspectives in the comments below.
Challenge 1: Making the most of the data
a) Hydrological data is sensitive and is not freely distributed
b) Data collection is fragmented across many agencies
c) Quality control is a time-consuming manual process
d) Automated data assimilation is underutilized
e) In situ data networks are deteriorating
Both data-rich and data-poor countries struggle with retrieving, quality controlling, infilling, formatting, archiving and redistributing data. Many agencies put significant resources into data management because it is a critically important but extremely difficult task.
Research opportunities: How can we develop comprehensive and robust automated quality control algorithms that synthesize data from different sources to identify outliers and infill missing values? For that matter, how can objective and automated data assimilation routines take advantage of the subjective expertise and situational awareness of the forecaster? How can forecasters make quantitative use of new sources of data whose statistical properties and biases are unknown because of the lack of a long historical record? How can we make optimal use of sparse station networks, uncertain remotely sensed retrievals (radar and satellite) and numerical weather prediction products to provide single-value or probabilistic meteorological inputs to operational hydrologic models? And, critical to the design of forecasting systems and workflows, how do we define the point at which quality control systems are sufficiently skilful to be included as an automated component of operational streamflow forecasting?
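To make the first of these questions concrete, here is a minimal sketch of one automated QC idea: flag outliers in a streamflow series with a moving-window median/MAD (median absolute deviation) test, then infill flagged and missing values by linear interpolation. The window size and threshold are illustrative assumptions of mine, not recommendations from the paper, and a real operational system would need far more sophistication (rating-curve checks, neighbouring-station comparisons, human review of flags).

```python
def qc_and_infill(series, window=5, n_mads=5.0):
    """Return (cleaned, flags): flagged/missing values replaced by interpolation.

    `series` is a list of floats, with None for missing observations.
    A value is flagged if it lies more than `n_mads` MADs from the
    median of its neighbours within +/- `window` time steps.
    """
    n = len(series)
    flags = [False] * n
    for i, v in enumerate(series):
        if v is None:
            flags[i] = True
            continue
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neigh = [x for j, x in enumerate(series[lo:hi], lo)
                 if x is not None and j != i]
        if len(neigh) < 3:
            continue  # too few neighbours to judge
        med = sorted(neigh)[len(neigh) // 2]
        mad = sorted(abs(x - med) for x in neigh)[len(neigh) // 2]
        if mad > 0 and abs(v - med) > n_mads * mad:
            flags[i] = True
    # Infill flagged/missing values by linear interpolation between
    # the nearest good observations on either side.
    cleaned = list(series)
    good = [i for i in range(n) if not flags[i]]
    for i in range(n):
        if not flags[i]:
            continue
        prev = max((j for j in good if j < i), default=None)
        nxt = min((j for j in good if j > i), default=None)
        if prev is not None and nxt is not None:
            w = (i - prev) / (nxt - prev)
            cleaned[i] = cleaned[prev] * (1 - w) + cleaned[nxt] * w
        elif prev is not None:
            cleaned[i] = cleaned[prev]
        elif nxt is not None:
            cleaned[i] = cleaned[nxt]
    return cleaned, flags
```

Even a toy like this shows why the research question matters: the spike detector is only as good as its thresholds, and a fast-rising flood hydrograph can look exactly like an outlier to a naive statistical test.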
Challenge 2: Getting the numbers right (modelling and forecasting)
a) Rainfall-runoff models are simple and decades old
b) Model development has not been significant in recent decades
c) Skill of river forecasts depends strongly on adequate precipitation forecasts
d) Many important processes are not modelled or are unmodellable
All models are simplifications of real systems, but the details that are included depend on the modeller’s purpose. The model in the top picture is designed to mimic the physical features of the original. The model in the bottom picture is designed to fly. But which is more useful? Conceptual hydrologic models are popular among operational forecasters, but are these relatively simple models necessarily worse than their alternatives? (thanks to Vazken Andréassian for the airplane analogy)
Research opportunities: How can the performance of hydrologic forecasting models be quantified so as to support the production of forecasts that have low bias and are probabilistically reliable? How can we increase the agility of process-based models (e.g., find an intermediate complexity that facilitates parameter calibration where needed), and improve the relevance of hydrologic models for conditions outside the calibration period? How serious are numerical errors and how can we take advantage of well-known computational algorithms to ensure numerical robustness of popular legacy models? How can we best transition from calibration methodologies based on hydrograph mimicry alone (which can give the right answers for the wrong reasons) to parameter estimation methodologies that improve model representation of hydrologic processes? How can hydrologists use Graphical Forecast Editor-style (expert derived, gridded) weather forecasts to force hydrologic models? Can the land-surface component of Numerical Weather Prediction models make hydrologic predictions that are competitive with traditional rainfall-runoff models? How can unknown human interferences in the hydrologic cycle (e.g. farming, urbanization, deforestation) be quantified and predicted?
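To give a feel for just how simple a conceptual model can be, here is a single-store "bucket" sketch of my own: storage gains rainfall, loses evapotranspiration, and drains as runoff proportional to storage, with any saturation excess spilling directly. This is a generic illustration, not any specific operational model, and the parameter values are arbitrary.

```python
def bucket_model(rain, pet, k=0.3, s0=10.0, s_max=150.0):
    """Step a one-store bucket through rainfall/PET series; return runoff.

    k     : linear-reservoir drainage coefficient (per time step)
    s0    : initial storage
    s_max : storage capacity; excess spills directly to runoff
    """
    s, runoff = s0, []
    for p, e in zip(rain, pet):
        s += p               # add rainfall to storage
        s -= min(e, s)       # evapotranspiration, limited by storage
        q = k * s            # linear-reservoir drainage
        s -= q
        if s > s_max:        # saturation excess becomes direct runoff
            q += s - s_max
            s = s_max
        runoff.append(q)
    return runoff
```

A handful of lines like these, calibrated against a good streamflow record, can be surprisingly hard to beat with far more elaborate process-based models, which is exactly the tension the airplane analogy captures.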
Challenge 3: Turning forecasts into effective warnings
a) In less-developed countries, warning distribution is slow and difficult
b) Relevant warnings require local context and knowledge of community vulnerability
c) Users have diverse needs and technical sophistication
d) Users are unfamiliar with probabilistic and ensemble forecasts
The lack of automated measurements, telemetry, computing resources, and communications infrastructure often limits the value of quantitative river forecasts – they would not arrive in time for users to take meaningful action. Instead, communities rely on siren systems that give early warning of floods already occurring upstream, but even these approaches are fraught with technical challenges (e.g. how to power sirens when the electricity fails?). Above, a woman in Meghuali, Nepal tests a hand-cranked flood siren. (credit: Practical Action)
Research opportunities: What are the most effective methods for the communication of probabilistic and ensemble forecasts? How does the effectiveness depend on the audience? Are there efficient and scalable methods for the collection of local flood intelligence (i.e. metadata about structures and communities at risk)? Can point forecasts (e.g. at river gauges) be effectively and efficiently translated into distributed impacts?
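One of the simplest ways to turn an ensemble forecast into a message a non-specialist can act on is the fraction of ensemble members exceeding a flood threshold. The sketch below illustrates the idea; the member values and threshold are invented for illustration, and whether a bare percentage is actually the most effective communication format is precisely the open research question.

```python
def exceedance_probability(members, threshold):
    """Fraction of ensemble members at or above the threshold."""
    return sum(m >= threshold for m in members) / len(members)

# Hypothetical 8-member ensemble of peak flows (m^3/s) and flood stage.
ensemble = [120.0, 95.0, 180.0, 210.0, 145.0, 90.0, 160.0, 130.0]
flood_stage = 150.0

p = exceedance_probability(ensemble, flood_stage)
# 3 of 8 members exceed flood stage, i.e. p = 0.375:
# roughly "a 40% chance of flooding" in plain language.
```

Note the hidden assumptions: that the ensemble spread is reliable (neither over- nor under-dispersed), and that the audience interprets "40% chance" the way the forecaster intends. Both assumptions frequently fail in practice.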
Challenge 4: Administering an operational service (institutional factors)
a) Forecasters are reluctant to take risks for fear of liability
b) Floods can be controversial because rivers are managed by people
c) Less-developed countries face brain drain
d) There is a lack of standards in training hydrologists
e) With increasing automation, the role of the human forecaster is evolving
Research opportunities: How important is it for operational forecasters to be modellers (and/or hydrologists), and to what extent? Which forecasting tasks can/should be automated? How can this automation be designed to create synergies between forecasters and machines? What can researchers contribute to the training programs of forecasters? And finally, as discussed in a recent HEPEX post: how can scientists field test experimental techniques under the supervision and on the terms of operational agencies, yet avoid the potential liability associated with forecasts that affect lives and property?
Make your voice heard in the comments below.