A user-oriented forecast verification metric competition
Forecast performance is one of the most central themes not only in day-to-day weather forecasting, but also in HEPEX.
It is so important that we have devoted an entire chapter of our science and implementation plan to it (see here). In particular, I often forward the link to these blog posts when I am explaining (or trying to explain) forecast properties to a forecast user.
Nevertheless, many of the scores remain abstract. Whilst a forecast bias may still be easy to communicate, getting across what a root mean squared error is proves far more challenging – and I haven’t even started with probabilistic scores.
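To make the contrast concrete, here is a minimal sketch of the two deterministic scores mentioned above: bias is just the average error (easy to explain as "on average, the forecast is X units too high or too low"), while RMSE squares the errors before averaging, which is exactly the step that is hard to convey to a non-specialist. The numbers are invented for illustration and are not taken from any real forecast.

```python
import math

def bias(forecasts, observations):
    """Mean error: the average of forecast minus observation."""
    return sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)

def rmse(forecasts, observations):
    """Root mean squared error: squares each error before averaging,
    so large misses are penalised disproportionately."""
    n = len(forecasts)
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / n)

# Illustrative forecast/observation pairs (e.g. streamflow in m3/s)
fcst = [2.0, 3.5, 5.0, 4.0]
obs  = [2.5, 3.0, 6.0, 4.0]

print(bias(fcst, obs))   # -0.25: on average slightly too low -- easy to explain
print(rmse(fcst, obs))   # ~0.61: dominated by the single large miss -- harder to explain
```

Note how the bias can be read out loud in one sentence, whereas the RMSE value only makes sense once you have explained squaring and square roots – which is the communication problem the post describes.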
There is no question that we need these scores to optimize and develop our forecasting systems; however, they are a “communication nightmare”. In HEPEX, we have developed games to try to ease this communication (remember the peak box game from the HEPEX meetings in Maryland and, recently, in Quebec?).
Therefore, it is great that this communication nightmare has now also been recognized by the verification component of the World Weather Research Programme. They have issued a challenge to develop and demonstrate the best New User-Oriented Forecast Verification Metric.
The challenge is cross-cutting and cross-disciplinary. It considers all applications of meteorological and hydrological forecasts that are relevant to user sectors such as agriculture, energy, emergency management, transport, etc. The metrics can be quantitative scores or diagnostics (e.g., diagrams), but they must be new to be considered for the prize.
- Do you have an idea to propose?
- Do you already use a score which would be ideal for the WWRP challenge?
- Do you have a very specific user who would benefit from a very specific score?