Is a lack of competition affecting innovation in operational river forecasting?

Contributed by Tom Pagano, a HEPEX guest columnist for 2014

The opinions expressed here are solely the author’s and do not express the views or opinions of his employer or the Australian Government.

I (with my co-authors) recently submitted a paper on the “Challenges of Operational River Forecasting”, which discussed institutional conservatism as a result of perceived liability associated with public forecasts. The paper postulated that if forecasters are concerned about liability, they will favor standard operating procedures over innovative – but experimental – techniques that are not “proven”. This conservatism will stymie progress and reduce opportunities to improve accuracy.

We asked “How can scientists field-test experimental techniques under the supervision and on the terms of operational agencies, yet avoid the perception of competition with national services?”

One reviewer countered that competition with a national hydrologic service is to be encouraged, not avoided. The reviewer suggested that the national agencies’ “monopoly” on hydrologic forecasting resulted in less innovation.

Taking the reviewer’s point: is there a monopoly? Why? Is the monopoly having a negative effect on innovation? If so, what can be done?

In meteorology and climatology, there is vibrant competition between agencies, academics and private sector producers. Currently, in developing its El Niño forecasts, the US Climate Prediction Center considers 23 models from countries including Japan, France and South Korea. Model guidance is open and free and there is a strong culture of forecast verification. Weather forecasting agencies closely track their models’ performance against those of their competitors on nearly a daily basis. Meteorologists sometimes complain that there is too much innovation; that the models are updated so frequently that it is difficult to understand the biases of any given version of the model.
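To make that verification culture concrete, here is a minimal sketch (in Python, with invented numbers purely for illustration) of the kind of head-to-head scoring weather centres run routinely: verifying competing models against the same observations with a common metric such as root-mean-square error.

```python
import numpy as np

def rmse(forecast, observed):
    """Root-mean-square error of one model's forecasts against observations."""
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return np.sqrt(np.mean((forecast - observed) ** 2))

# Hypothetical 5-day temperature forecasts (deg C) from two competing models,
# verified against the same observations. All numbers are invented.
obs     = [21.0, 23.5, 19.8, 18.2, 22.1]
model_a = [20.5, 24.0, 20.6, 17.0, 21.5]  # e.g. an in-house model
model_b = [21.2, 23.1, 19.5, 18.9, 22.4]  # e.g. a competitor's model

for name, fc in [("model_a", model_a), ("model_b", model_b)]:
    print(f"{name}: RMSE = {rmse(fc, obs):.2f} deg C")
# Tracking scores like these daily makes it obvious when a rival model pulls ahead.
```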

The man with many watches has a greater chance of having at least one watch with the correct time. In this case, meteorologists use numerical weather model guidance from the National Centers for Environmental Prediction (NCEP), the UK Met Office (UKMO) and others when creating their forecasts. Modified from the original source.

The equivalent for operational hydrology is practically unthinkable at present. The physics of some widely used rainfall-runoff models nearly pre-date the widespread use of computers. Operational agencies typically rely on a single in-house model and the focus is region-specific, not international. Is the lack of competition in hydrologic forecasting in some way inevitable (e.g. a natural monopoly due to the inherent “local-ness” of hydrology or the specialized needs of users)? Or is it a cultural factor that can be overcome?

In economic theory, there are many disadvantages to monopolies, including low incentives to increase productivity or innovate, reduced quality of products and services, and reduced consumer choice. However, monopolies may be economically desirable where it is not practical to have many providers of the same product (e.g. it is a waste of resources to have two lighthouses at a single spot).

In operational hydrology, a single source of information may be desirable to avoid public confusion from multiple, conflicting warnings. Agencies often distinguish guidance (automated model output) from forecasts (general predictions which may have commercial value, such as for optimizing hydropower generation) and from warnings (which indicate imminent danger to lives and property).

While few public agencies would resist the private production of forecasts (and many water managers employ their own hydrologists, either in-house or under contract, e.g. Bonneville Power, Snowy Hydro, Salt River Project), there is agency concern about the liability associated with published warnings and the potential for unofficial forecasts to be confused with official warnings. Social scientists have shown that recipients of warnings often need to confirm the message before they are likely to take action. If users find inconsistent warnings, they may be less likely to act. Therefore, warnings are more effective when a single authority distributes the same message through multiple channels.

Between emergency warnings and closed private forecasts there is a “grey zone” that includes researchers engaged in real-time demonstration projects of new forecasting techniques. For example, the University of Washington makes freely available real-time runoff forecasts for the continental US, and the University of Oklahoma/NASA provide the same globally. In contrast, the European Flood Awareness System (EFAS) and its global equivalent (GloFAS) are trans-national systems but the forecasts are password-protected so as not to interfere with national hydrologic services.

There are benefits to these experimental systems. Monitoring real-time products can inspire new research that improves techniques, and it allows the public to see the results of its investment in science. Demonstrating technologies under realistic conditions can help convince agency forecasters that new techniques are ready for adoption. On the other hand, there is concern that poor-quality unofficial forecasts from outside the agency would tarnish the reputation of forecasts in general, damaging the agency brand and credibility.

What do you think? Is there a hydrologic forecasting monopoly in your country? Have you had experiences with unofficial forecasts interfering with warnings (in hydrology or otherwise, e.g. earthquakes, typhoons)? How does the current culture of competition in hydrology affect your work? How would you improve the current system? Please comment below!

14 comments

  1. Demand for hydrologic forecasts is relatively low and the costs of market entry can be very high, as forecasting infrastructure, data and expertise are expensive. Hence a situation with multiple providers of hydrologic forecasts is not likely to arise. However, competition may occur when a forecasting contract comes up. For example, conceivably a dam operator would buy its forecasts from a third party – and that contract may be renewed periodically. One way for multiple providers to co-exist simultaneously is if they can make use of existing infrastructure and add value to it at relatively low cost.

    1. As Jan states, one of the biggest limiting factors for independent hydrological forecasting is the cost associated with data and infrastructure. However, if organisations such as the Environment Agency begin to release data for free (http://www.theguardian.com/technology/2014/feb/27/flooding-river-levels-environment-agency-open-data-maps-apps), I wonder if this might begin to change.

      1. It’s a very interesting point – the connection between data access and 3rd party forecasters. Easy and free access to streamflow data allowed me to develop many “skunkworks” tools as a forecaster in the US. When our agency also made its climate data easily accessible over the internet, we then connected it with the free streamflow data and built a forecasting application that we could give to key users so they could make their own forecasts. Of course everyone understood that the official forecasts were the final say, but it increased the transparency of our process to let the users look at the same data and run the same (or similar) models themselves. Ultimately they had more confidence in the official forecasts but also a better appreciation of the uncertainty.
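        For anyone wanting to try this today, here is a minimal sketch of pulling free streamflow data in Python. It assumes the public USGS NWIS instantaneous-values web service and the layout of its JSON response; the gauge shown is just an example site.

        ```python
        import requests

        # Fetch the last 7 days of discharge (USGS parameter 00060, cubic feet
        # per second) for one gauge from the public NWIS web service.
        # Site 01646500 (Potomac River near Washington, DC) is just an example.
        url = "https://waterservices.usgs.gov/nwis/iv/"
        params = {
            "format": "json",
            "sites": "01646500",
            "parameterCd": "00060",
            "period": "P7D",
        }
        resp = requests.get(url, params=params, timeout=30)
        resp.raise_for_status()

        # Drill into the WaterML-style JSON to get (timestamp, flow) pairs.
        series = resp.json()["value"]["timeSeries"][0]["values"][0]["value"]
        flows = [(pt["dateTime"], float(pt["value"])) for pt in series]
        print(f"{len(flows)} readings; latest: {flows[-1][1]} cfs at {flows[-1][0]}")
        ```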

  2. As I wrote in a recently submitted project proposal, the public, academic and private sectors have coexisted for many years as providers of real-time hydrological forecasts. While the public sector has focused on flood hazards in large river systems, the private sector has focused on the production of forecasts for hydropower systems. The academic sector has been well placed to bridge knowledge to both the public and private sectors. For instance, my research group and a private company are jointly developing a HEPS for southern Switzerland, combining hydrological and hydraulic modelling. The final product will be implemented in FEWS. This effort is funded by regional and national administrations, which will later on be responsible for issuing warnings.
    So here we can really speak of co-existence and awareness of the different responsibilities among players.

    In meteorology the situation is different. The public sector is responsible for data collection, runs operational NWPs and issues OFFICIAL alerts. Since weather forecasts have more visibility than hydrological predictions (TV, newspapers, Internet and apps), the private sector invaded the market. They buy data from the national service and from other NWP providers and issue warnings. They put THEIR product on the market, but when data are missing they claim the national service is not able to provide “raw” data (a clarification here: they consider high-quality post-processed weather radar images to be raw data). This went so far that the national administration needed to limit the warning activity of private actors. A common platform for natural-hazard warnings has been developed and, starting from “level 4” (out of 5) warnings, only national administrations are allowed to warn, and all media should dispatch their warnings (http://www.naturgefahren.ch/, in German). This has been called the “single official voice principle”. The approach emerged from the needs of meteorology, but it is now already prepared for hydrological hazards and other natural hazards (debris flows, landslides, avalanches and earthquakes). So, competition can also emerge in hydrology. The public sector is ready to avoid double-channel warnings.

    Last point: in Lyss (a town close to Berne, the Swiss capital), some years ago people had to clean out their cellars three times within a few weeks. After each flood, people downloaded another app that issues warnings. So, competition can also be triggered by false alarms and missed events.

    Cheers!

    1. I think Swissrivers.ch is a site that provides an incredible level of river-forecasting detail over Switzerland. It is free and open to the public. There are hydrographs with uncertainty bounds for what seems like every kilometer along the river, even where there are no gauges. Often the results look good (because of error correction?), but sometimes there can be big differences between simulated and observed, such as in the attached example. Does this ever cause a problem for the official forecasters in Switzerland?

      1. The site was launched some years ago with the intent of being a teaser for hydropower companies, people wanting to know if they can go fishing during the weekend, and other private stakeholders. A few regional organizations are now customers with tailored applications. The people maintaining it are the people I am collaborating with for southern Switzerland.
        To date there have been, to my knowledge, no situations where the public administration complained about warning activities stemming from Swissrivers. In my opinion there were no problems because the Swissrivers team avoids making inappropriate a-posteriori statements about how good they would have been compared with the forecasts of the official forecasters. Private weather forecasters, by contrast, often went to the media just after events and claimed their forecast was perfect.
        Some weather forecasters started a small offensive on the hydrology market some years ago, together with a big international player in hydrologic/hydraulic modeling. So far I have not heard of any realization of these products.
        My experience is that users of hydrological forecasts still need to have a personal relationship with the forecasters and the developers of the prediction system they are using.
        Some years ago Michael Bruen was impressed to hear from an end user that he was judging the quality of a forecaster’s opinion by looking into his eyes and estimating whether there was “fear of failing” in his glance. This is to say that, in my opinion, users of hydrological forecasts are not yet ready to trust automatic forecasts from private players.

        Ciao!

        1. Interesting… I like the “fear of failing” measure. Maybe we should include a FOF score in our verification. Apparently there’s a real phobia – atychiphobia (from the Greek for fear of the unfortunate).

  3. Tom Pagano wrote: “Meteorologists sometimes complain that there is too much innovation; that the models are updated so frequently that it is difficult to understand the biases of any given version of the model.”

    At a Nordic meeting in 1976, a representative of the Swedish military meteorologists demanded in his talk a freeze of NWP development, for exactly this reason. After the 1982 “ECMWF Seminar on Interpretation of Numerical Weather Prediction Products, 13–17 September 1982”, the modellers were still very upset. Some of the statisticians had obviously suggested, if not freezing NWP development, at least keeping old versions of the NWP models running to ensure stability of the coefficients in their statistical schemes.

    In the proceedings at http://old.ecmwf.int/publications/library/do/references/list/1616 you will find some very good summaries of traditional statistical post-processing techniques (e.g. Glahn and Wilson).

    1. As an operational flood forecaster in England, I would welcome more competition and innovation. We simply do not have the resources to calibrate, configure and maintain numerous model types. Likewise, we make use of staff from across the organization to carry out some forecasting roles, and the more model types there are in use, the harder it becomes to interpret them and add value.
      I would also like to see improved rainfall-runoff models, but the main restriction on real-time forecasts is the forecast rainfall input rather than the model physics.
      On Met model changes, it would be great to have consistency so we could better assess likely real-time performance.
      Thanks, Mike, for putting me on to this great page.

  4. I believe that there is at least some form of competition, not necessarily in the same domain, but inter-agency. Meaning that systems compete on their functionality in the same way as there is competition between models (e.g. multiple commercial providers of inundation modelling software). Although this may be in its infancy in forecasting, I believe it already exists in parts.

  5. A great topic to discuss. Here are a few observations and experiences from the Czech Republic (which is, by the way, experiencing a minor flood right now).
    Concerning the monopoly, I think the most important thing is the role of responsibility. The law defines the responsibility of the flood committees (authorities) to organize flood prevention and protection. However, they of course need information, forecasts and warnings, so the law assigns the responsibility for flood warnings to CHMI as the national hydrometeorological service.
    By the way, it is quite hard work to keep this monopoly on warnings and flood forecasts, despite the law.
    The main problem is as follows: a new player in the field (e.g. a university) decides to do flood modeling, so they demand the data and want the national hydrological and meteorological service (NMHS) to provide it for free (it is a service for people in crisis). Then they run automatic computations, because they are not able to maintain a 24/7 service, and publish the results. If the forecasts fail, they blame the NMHS for wrong inputs (imprecise QPF, missing discharge data, etc.). The thing is, floods are extremes – times with a higher probability of missing data, transmission failures, wrong extrapolation of the rating curve, etc.

    Now have a look from the point of view of the mayor of a small town who has the responsibility to act during a flood:
    Option 1 – You hired someone to provide you forecasts, a forecast failed and there were consequences (damage) – your fault, you hired a bad service.
    Option 2 – You received three forecasts from various providers and made a decision according to the worst case or the median, but reality was different – your fault, you should have acted according to the best forecast you had (seen from the after-flood perspective).
    Option 3 – You received one official forecast and warning and acted according to it – you did the best you could, and if the forecast was wrong you should push the NMHS (the government) to be better next time.

    Last comment: I don’t remember a case where a university or research team proposed to develop a new forecasting model/tool/system according to the needs, and under the guidance, of the NMHS, to be operated by the NMHS.

    BTW, check hydro.chmi.cz/hpps/

    1. It’s comments like these that make me glad I wrote this post.

      On your last question about research teams developing forecasting systems for operations: Australia has sponsored, for about six years, a major research program in hydrology (called WIRADA). The Bureau of Meteorology (the NMHS) funded the Commonwealth Scientific and Industrial Research Organisation (CSIRO, a set of non-academic government research labs) to develop better river forecasting systems. In some cases these were new systems where no service existed before, such as seasonal and short-term forecasts (i.e. up to 7 days, but not including floods). Aside from some of the data infrastructure to feed the models in real time, the Bureau has mostly adopted CSIRO’s techniques wholesale. A link to the seasonal forecasting system is here:
      Bureau of Meteorology Seasonal Streamflow Forecasting Service.

  6. To discuss whether “.. a lack of competition affects innovation ..”, we need to consider the key stages of the weather/hydrological operational forecasting process, and investigate whether there is room for competition in all of them.
    Let’s first of all agree on why we do operational weather/hydrological forecasting: I would say that we do it to manage weather/hydrological related risks, or more specifically, to avoid casualties and reduce losses. If you agree with me that this is our key objective, I think that we can break down the process into four key stages: (a) ‘forecast production’, (b) ‘forecast formulation’, (c) ‘communication’ and (d) ‘decision making’. With ‘forecast production’ I mean generating the forecast data; with ‘forecast formulation’ I mean extracting from the data the forecast message(s) and alert(s); with ‘communication’ I mean broadcasting them to the public; and finally with ‘decision making’ I mean taking actions to avoid potential losses (e.g. evacuating an area).
    I would argue that there is room for competition in stages (a) and (c), but that most likely we might not want competition in stages (b) and (d).
    Stage (a) – Competition in this stage could be valuable, and could lead to better, possibly cheaper forecast products. It could also foster innovation. I do not see any problem with competition in this stage, and indeed it is happening at all forecast scales (from the short to the seasonal time scale, in meteorology and hydrology). Governmental (NMHS) and intergovernmental (e.g. ECMWF) institutions have a key role to play, to provide the public (the taxpayers who indirectly fund them) relevant forecasts, and to make sure that investments in other sectors (e.g. in satellite missions) deliver valuable and relevant products. Competition in this area should be encouraged!
    Stage (b) – The key aspect of this stage is that a high level of service must be guaranteed. In other words, there is a clear need that skilful forecasts are issued, while poor forecasts are not disseminated, to avoid ‘noise’ covering the ‘signals’. Who can ensure that a high-quality service is achieved? There must be someone, let’s call it an Authority, who guarantees the high service level. The Authority must include the most talented experts, and should guarantee that only the best forecasts are used. The Authority can use forecasts issued by different entities who compete in the forecast production stage, but I doubt that competition would be beneficial in this stage itself. Lack of competition here might be a requisite to guarantee a high-quality service.
    Stage (c) – Once the Authority has extracted the signal from all available sources, when the forecast is ready, it has to be communicated to the public and the relevant bodies (protection agencies, ..). Here again there could be competition. The Authority would have to specify the type of communication that has to be guaranteed, and public and private entities could compete to deliver it. Provided that the Authority is clear on what message has to be communicated, to whom it should be communicated, and by when, competition could play a role. Competition could trigger innovation, develop new avenues for communicating the key messages/alerts identified in stage (b), and should be encouraged.
    Stage (d) – This stage requires that a pool of experts decides which actions must be taken to reduce casualties and losses. It requires a very high level of coordination and synergy, to maximize the utilisation of the available resources. This stage is again, very likely, better fulfilled by one body, possibly the same Authority that guarantees a high-quality service. Lack of competition in this stage might again be a requisite for a superior service. (A toy sketch of this pipeline follows below.)
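    A toy sketch of this pipeline (in Python; the names, numbers and alert rule are all invented, purely to illustrate where competition enters and where the single Authority sits):

    ```python
    # Stages (a) and (c) admit many competing providers; stages (b) and (d)
    # are funnelled through a single Authority. All values are invented.

    def producer_a():  # stage (a): one competing forecast producer
        return {"peak_flow_m3s": 850}

    def producer_b():  # stage (a): another competing producer
        return {"peak_flow_m3s": 920}

    def formulate(guidances, alert_threshold=900):
        """Stage (b): the single Authority distils one message from many inputs."""
        worst = max(g["peak_flow_m3s"] for g in guidances)
        return {"message": f"Peak flow up to {worst} m3/s",
                "alert": worst >= alert_threshold}

    def communicate_tv(msg):   # stage (c): one competing channel
        print("TV:", msg["message"])

    def communicate_app(msg):  # stage (c): another competing channel
        print("App:", msg["message"])

    def decide(msg):
        """Stage (d): one body decides on protective action."""
        return "evacuate low-lying areas" if msg["alert"] else "keep monitoring"

    message = formulate([producer_a(), producer_b()])  # single official voice
    for channel in (communicate_tv, communicate_app):  # many messengers, one message
        channel(message)
    print("Action:", decide(message))
    ```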

    1. Thanks Roberto for your analysis and insights.

      To clarify, would you say your 4 stages are like 1. “Creating the forecast numbers” 2. “Translating the numbers into relevant and understandable terms for the user” (this might include preventing users from seeing likely bad forecasts) 3. “Transmitting the message to the correct user” and 4. “Making a decision based on the forecast”. And you’re recommending competition in 1 and 3 but not 2 and 4?

      I wonder what you would think of a government forecasting body that outsources stage 1 but maintains control over stage 2? Imagine a National Hydrologic Service that has a contract with a private company to run models and make forecasts. Those are then sent to the government (but not to the public), where a government hydrologist interprets the information and possibly adjusts it to make the official forecasts. The official forecasts are then free to the public. Have you seen that, and/or do you think it would work? Or is it a bad idea to separate the modelers from the “formulators”?
