Because it’s Friday… comments invited!

[Image: “A model run is not a forecast”]

13 comments

  1. A deterministic forecast used to be a forecast.

    1. Bart could disagree with that too – in the sense that then, as now, a forecaster was required to transform the model run into a forecast, maybe?

  2. At least it seems like Bart is advocating ensemble forecasts, which is a step forward.

    1. 🙂 yes but his variability seems underdispersive

  3. Ok I’ll bite.

    If I came across an automated plot of automated model run output on the web showing the behavior of some variable in the future, I’d view it as a forecast. A roll of the dice, a pattern of tea leaves, or a shake of the 8-ball (“signs point to yes!”) can also be forecasts – ‘forecastiness’ depends less on the nature of the information than on the significance assigned by the user. Nowadays, if I’m a user, I want not only your forecaster forecasts but also the raw model run forecasts they were based on.

    1. I would probably turn it around: if I’m a user, I want not only the raw model run, but (rather) my forecaster forecasts. Surely the forecaster adds to the raw output by correcting for any realities that are not, or not accurately, represented in the model that produced the raw output.

      1. You hydrologists run the risk of falling into the same trap, or entering the same cul-de-sac, as the meteorologists did 30-40 years ago: seeing your job as a contest between the human forecaster and the “machine”.

        In those days (around 1980) the computer output had all kinds of systematic errors (apart from the non-systematic errors, which increase with time) and could easily be ridiculed. But the models slowly got better, and their deficiencies could, temporarily, be cured by more or less advanced statistical methods, until model upgrades made those statistical methods obsolete.

        The result is that today’s meteorological forecasters are marginalized, and non-meteorologists are more influential in weather forecasting because they are not hindered by old traditions and group loyalty.

        1. Interesting line of thinking. I never saw/see it as a contest. My job as a forecaster is to produce the best possible forecast, using whatever information I have available. A model constitutes an important chunk of that information but by no means the *only* information – that’s what I like to think anyway 😉
          I do see your point, though, and am wondering to what extent it applies to “us hydrologists”. I think the model improvements you refer to require considerable means (money, time, effort) that can only be afforded by large enterprises (not necessarily in the commercial sense) such as national and supranational meteorological institutions. The institutional playing field of hydrologic forecasting agencies is far different, probably also due to matters of scale and focus (far fewer applications; basins that are of little relevance to other agencies). Any thoughts on that?

      2. True — and there are also likely times when the forecaster, despite good intentions, degrades the accuracy of the model output. Because consistent track records of ‘forecaster forecasts’ are often difficult to maintain or access, it can be hard to benchmark such forecaster tendencies, particularly in hydrology.

        Recent verifications of met. forecasts (automated blends versus human forecasters) have shown that raw or post-processed models do better than is commonly assumed, rivaling forecasters even for ‘difficult’ events. This realization has already changed practice in the US, with forecasters now doing less grid-editing at longer lead times than they did in the past. Anders has it right in his post below that models are improving, and in addition the number of NWP-based outputs is exploding – which I think should change how a forecaster interacts with them. In any case, a data-driven assessment of how best to design this interaction seems necessary, to avoid moving ahead solely on traditional assumptions (from either side) about the relative strengths and weaknesses of models versus forecasters.

        Absent verification or other external information, I look at both model output and forecaster forecasts as ‘forecasts’ — both valid but different in nature.

        1. When I left SMHI in 1991 to go to ECMWF, I recommended that my forecaster colleagues leave much of the routine work, even writing popular weather forecasts for the tabloids, to the met assistants, and lift themselves up to more qualified jobs.

          At about the same time, one of my Swedish NWP colleagues had said in a newspaper interview that “the meteorological forecasters are over-qualified for what they are doing and under-qualified for what they should be doing”.

          He wanted some of them to join the NWP team. My suggestion was that they should use their skill and experience and go into verification, meteorological post-processing or method development. Those who remained in the forecast office should be more like “commanders on the bridge”: supervisors and the ultimate decision makers.

          When I came back ten years later, the exact opposite had happened: the meteorological forecasters had taken over the routine jobs of the met assistants! These had been forced out of the forecast office and, having no other option, made careers inside and outside SMHI; some even rose to become section heads.

  4. In one of my overhead or PowerPoint images, I have put the Forecast Model in the lower left and, in the upper right, the Ultimate Target (the Truth, the Verification, etc.).

    In an upper trajectory from Model to Target, two obstacles, two non-systematic limitations of the Model, are illustrated: the predicted rain arrives too early or too late, and/or is too much or too little. This is where the Ensemble will help you out by indicating the confidence. The next obstacle on this road is Representativeness: the prediction is for a specific grid area and over a time interval, which do not necessarily agree with the Target. However fine we make the Model’s mesh, there will always be small scales we miss.

    In a lower trajectory from Model to Target, two other obstacles are illustrated. Whereas the Model in the upper trajectory was assumed to be “perfect”, this is rarely the case. The Model may over- or under-predict the rainfall or other parameters. While waiting for model-improving upgrades, some sort of statistical post-processing, preferably of the adaptive sort, can help. But the Target might be an “odd” or “unrepresentative” one, which will make it appear as if there *were* a systematic error, although here it is rather Nature that is in “error”. Again, statistical post-processing, preferably of the adaptive sort, will help.
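
    For illustration, a minimal sketch of what such an adaptive correction might look like, in Python: a running, exponentially weighted estimate of the systematic error is subtracted from the raw output. The decay factor and all numbers here are illustrative assumptions, not any operational scheme.

    ```python
    # Minimal sketch of an adaptive bias correction for raw model output.
    # The decay factor (alpha) and the numbers are illustrative assumptions.

    def update_bias(bias_estimate: float, forecast: float, observed: float,
                    alpha: float = 0.1) -> float:
        """Exponentially weighted running estimate of the systematic error."""
        latest_error = forecast - observed
        return (1.0 - alpha) * bias_estimate + alpha * latest_error

    def corrected_forecast(raw_forecast: float, bias_estimate: float) -> float:
        """Subtract the current bias estimate from the raw model output."""
        return raw_forecast - bias_estimate

    # Example: the model has persistently over-predicted rainfall by ~2 mm.
    bias = 0.0
    for fc, obs in [(10.0, 8.1), (6.0, 4.2), (12.0, 9.8)]:
        bias = update_bias(bias, fc, obs)

    print(corrected_forecast(11.0, bias))  # next raw forecast, bias-corrected
    ```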

    So the Model outputs are NOT forecasts, just as grapes are not the wine.

    1. And what happens when the only “forecast” available is a model output?
      When you have the grapes, but do not have a team and the $$ to make wine?

      1. Fernando: That doesn’t change what I wrote above. The model output is just the grapes (or the juice from the grapes). No harm in drinking it, but it is better to try to refine it, to make it better. This can be done without much $$$. I lectured about this at the ECMWF Training Courses as early as the 1980s, even before we had ensembles:

        1. Keep an eye on possible systematic errors in the NWP and try to correct for them. But watch out for “red herrings” such as the “regression to the mean” effect.

        2. Do not overinterpret details in the NWP which you know are not normally predictable at a certain forecast range. Make a smooth interpretation.

        3. Last but not least: the previous two or three NWP runs are by no means “outdated”. With increasing forecast range, their skill approaches that of the latest forecast. A weighted mean of these last runs provides a more accurate categorical forecast than using only the latest one, and their spread yields a rough, but useful, risk indication (see the sketch below).

        This was based on my own forecast experience doing medium range forecasting at SMHI. It stood me in good stead when the ensemble systems arrived in the early 1990s.
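
        As a minimal Python sketch of point 3, a “lagged” weighted mean and spread of the last runs might look like this; the weights and rainfall values are illustrative assumptions, not an operational configuration.

        ```python
        # Minimal sketch of a lagged-ensemble weighted mean of recent NWP runs.
        # Weights and rainfall values are illustrative assumptions only.
        import statistics

        def lagged_ensemble(runs, weights):
            """Return (weighted mean, spread) of recent runs for one valid time.

            runs    -- forecasts of the same quantity, newest first
            weights -- non-negative weights, newest first (heaviest on newest)
            """
            mean = sum(w * r for w, r in zip(weights, runs)) / sum(weights)
            spread = statistics.pstdev(runs)  # rough, but useful, risk indication
            return mean, spread

        # Example: the last three runs' 24 h rainfall forecasts (mm).
        mean, spread = lagged_ensemble([14.0, 9.0, 11.0], [0.5, 0.3, 0.2])
        print(f"categorical forecast: {mean:.1f} mm, spread: {spread:.1f} mm")
        ```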
