Topic 1: Introduction

Historical background:

References (starred references are particularly important):

Finley, J. P., 1884: Tornado predictions. Amer. Meteorol. J., 1, 85-88.

Gilbert ("G."), 1884: Tornado predictions. Science, 40, 126-127. (Published under the initial "G." only; the author is Gilbert.)

Peirce, C. S., 1884: The numerical measure of the success of predictions. Science, 4, 453-454.

Murphy, A. H., 1996: The Finley Affair: A signal event in the history of forecast verification. Wea. Forecasting, 11, 3-20.

Murphy, A. H., and H. Daan, 1985: Forecast evaluation. In Probability, Statistics, and Decision Making in the Atmospheric Sciences, A. H. Murphy and R. W. Katz, Eds., Westview Press, 379-437.

First verified forecasts:

Finley's forecasts for tornadoes: Yes/No for 18 areas, 10 March-31 May 1884.

A total of 2803 forecasts were made: 28 Yes forecasts with a tornado observed, 72 Yes forecasts with no tornado, 23 No forecasts with a tornado observed, and 2680 No forecasts with no tornado. Finley reported 96.6% of the forecasts correct.

Responses appeared almost immediately, starting with Gilbert, who noted that Finley would have been 98.2% correct if he had simply forecast "No" every time. Most of the important issues associated with verification were raised in those responses, for example (the arithmetic behind both percentages is reproduced in the sketch after this list):

  1. Baselines of no skill (how many forecasts should you get right by pure guesswork)
  2. Quality of observations
  3. Appropriate measures
  4. What do you do with correct forecasts of non-events?
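
To make the arithmetic concrete, here is a minimal sketch in Python (the counts are Finley's 2x2 table from above; the variable names are illustrative) that reproduces Finley's 96.6% percent correct and Gilbert's 98.2% "always No" baseline. It also computes the measure proposed by Peirce (1884), written here in its commonly used modern form, hit rate minus false-alarm rate.

  # Finley's 1884 tornado forecasts as a 2x2 contingency table
  # (counts taken from the notes above).
  hits          = 28    # "Yes" forecast, tornado observed
  false_alarms  = 72    # "Yes" forecast, no tornado
  misses        = 23    # "No" forecast, tornado observed
  correct_nos   = 2680  # "No" forecast, no tornado

  total = hits + false_alarms + misses + correct_nos         # 2803 forecasts

  # Finley's score: fraction of all forecasts that were correct.
  percent_correct = (hits + correct_nos) / total              # 0.966 -> 96.6%

  # Gilbert's point: a constant "No" forecast is correct whenever no
  # tornado occurs, i.e. in false_alarms + correct_nos of the cases.
  always_no_correct = (false_alarms + correct_nos) / total    # 0.982 -> 98.2%

  # Peirce (1884): hit rate minus false-alarm rate, which discounts
  # such no-skill baselines.
  peirce_score = (hits / (hits + misses)
                  - false_alarms / (false_alarms + correct_nos))  # ~0.523

  print(f"Percent correct (Finley):    {percent_correct:.1%}")
  print(f"Percent correct (always No): {always_no_correct:.1%}")
  print(f"Peirce score:                {peirce_score:.3f}")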

 

Difference between verification and evaluation:

Verification typically involves comparison of "forecasts" and "observations" or "events."

Evaluation also includes the response of users to the forecast.

As a result, evaluation is intimately tied to the decision-making processes of users.

Why do verification or evaluation?

 

Other fields where forecast verification is (or could be) used:

  1. Economics
  2. Power generation
  3. Sports