The Possible Future Role of Humans in Weather Forecasting

(The following is a personal opinion and does not necessarily represent the views of the National Severe Storms Laboratory, Environmental Research Laboratories, National Oceanic and Atmospheric Administration, Department of Commerce, or the United States government. Discussions with a large number of people over the past 7 years have influenced the viewpoints here. Since some of those people may not agree with the viewpoints expressed, I won't name them in this forum. I welcome comments from others.

Harold Brooks [brooks@nssl.noaa.gov])

There are a number of points that must be addressed in discussing this issue. The fundamental one is:

How much value do people add to weather forecast products?

The proper baseline from which to approach the question of "value" is the guidance available from numerical weather prediction (NWP) products. If the answer is "none" or "very little", then it seems obvious that the costs associated with maintaining a staff of highly-paid individuals to do little more than filter the NWP guidance cannot be justified. While I believe that humans add value to forecasts, a basic problem at the moment is that we do not know the answer to the question; in fact, how to measure value is an important question in and of itself, with many pitfalls, at least one of which I'll illustrate below. For simplicity, I won't address any of the questions associated with significant improvements in the philosophy of NWP, such as the use of ensemble techniques in the short range, which might necessitate forecaster-driven specification of the initial conditions of the forecast model.

As a simple example, consider a forecast of minimum and maximum temperatures. (The arguments can be made for any other meteorological parameter, but temperature illustrates most of the points and is a "simple" thing to forecast.) Setting aside the view that a "pick-a-number" approach to temperature forecasting (e.g., today's high will be 61 and tonight's low will be 35) gives very little information to the user of the forecast, the basic yardstick for comparison is the model output statistics (MOS) forecast of temperature. For the US in 1993, the nested-grid model (NGM) MOS 24-hour maximum (from the 0000 UTC run) and minimum (from the 1200 UTC run) temperature forecasts were within 5 F 85% of the time and within 10 F over 98% of the time. Given the accuracy of temperature measurements, the mesoscale variability of temperature, and human perception, there is probably very little that can be done to add value to a temperature forecast that is off by 5 F or less, with the possible exception of the rare cases where the temperature is in the vicinity of the freezing point. Thus, on average for the US, the roughly 15% of forecasts that miss by more than 5 F leave at most about one maximum and one minimum temperature forecast per week to which value can be added, and the roughly 2% that miss by more than 10 F leave perhaps a total of 14 of the 730 annual temperature forecasts for which there is an opportunity to make a big difference.
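For those who want to check the arithmetic, the short Python sketch below reproduces those counts from the 1993 verification percentages quoted above. It is back-of-envelope only; the rounding comes out at 15 rather than the "perhaps 14" in the text.

    # Back-of-envelope count of value-adding opportunities, using the
    # 1993 NGM MOS verification figures quoted in the text.
    FORECASTS_PER_YEAR = 2 * 365   # one max and one min forecast per day
    P_WITHIN_5F = 0.85             # fraction of MOS forecasts within 5 F
    P_WITHIN_10F = 0.98            # fraction of MOS forecasts within 10 F

    # Misses of more than 5 F: room to add some value.
    addable = (1 - P_WITHIN_5F) * FORECASTS_PER_YEAR
    # Misses of more than 10 F: room to make a big difference.
    big_difference = (1 - P_WITHIN_10F) * FORECASTS_PER_YEAR

    print(f"> 5 F misses:  about {addable:.0f} of 730 per year "
          f"(roughly one per week each for max and min)")
    print(f"> 10 F misses: about {big_difference:.0f} of 730 per year")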

MOS is clearly an excellent forecast almost all the time, and departing from its guidance most of the time is a poor forecasting strategy in a practical sense. "Shading" MOS by a degree or two is a waste of time. Let's consider two forecasters, both of whom have some "perfect" knowledge. Forecaster A knows when MOS is going to be absolutely right in its temperature forecast, so he or she passes MOS along that day. On the other days, A shades MOS by 1 F in the right direction. Forecaster B, on the other hand, knows when MOS is going to be correct to within 10 F and knows exactly what the temperature will be on all the other days. On the "good" MOS days, B issues MOS as the forecast and, on the other days, issues a forecast that verifies exactly. How much value did these forecasters add to the numerical product? Using the 1993 US NGM MOS forecasts, the RMS error improvement associated with A's strategy is 17.4%. For B, it is 11.1%. However, A has added no practical value, while B has eliminated the most egregious errors. The vast majority of cases, in which value essentially cannot be added, overwhelms simple statistical measures, as does a strategy that is nothing more than shading MOS. I would argue that it is the cases in which the guidance errors are large that are the most important. Missing a high temperature forecast by 30 F is clearly a bad thing, and users will make poor decisions based on that forecast. Except for extremely sensitive uses, an error of 5 F will have little impact.
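The paradox is easy to reproduce in a toy simulation. The sketch below does not use the real 1993 data; it draws synthetic whole-degree MOS errors from an assumed fat-tailed distribution tuned to the 85%-within-5-F and 98%-within-10-F figures, so the percentages it prints will differ from the 17.4% and 11.1% quoted above. The qualitative result is the same, though: A's cosmetic shading scores as well as, or better than, B's rescue of the worst misses.

    import math
    import random

    random.seed(42)

    # Synthetic stand-in for the 1993 NGM MOS temperature errors: an
    # assumed fat-tailed mixture tuned so that roughly 85% of errors
    # fall within 5 F and roughly 98% within 10 F, rounded to whole
    # degrees to mimic integer temperature forecasts.
    def mos_error():
        sigma = 3.5 if random.random() < 0.98 else 15.0
        return round(random.gauss(0.0, sigma))

    errors = [mos_error() for _ in range(100000)]

    def rmse(es):
        return math.sqrt(sum(e * e for e in es) / len(es))

    # Forecaster A: passes MOS along when it is exactly right, and
    # otherwise shades it 1 F toward the verifying temperature.
    a = [e - int(math.copysign(1, e)) if e != 0 else 0 for e in errors]

    # Forecaster B: passes MOS along when it is within 10 F, and
    # otherwise issues a forecast that verifies exactly.
    b = [e if abs(e) <= 10 else 0 for e in errors]

    base = rmse(errors)
    print(f"A's RMS error improvement: {100 * (1 - rmse(a) / base):.1f}%")
    print(f"B's RMS error improvement: {100 * (1 - rmse(b) / base):.1f}%")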

There is a Catch-22 here. NWP models work best when the atmosphere is nearly quasi-geostrophic, which, in some sense, is the simplest forecast situation; they do worst when the atmosphere departs strongly from that state, which is when forecasting is hardest. Thus, it is precisely on the hardest forecasts that MOS will fail, and the forecaster will get no help from the models in those cases. The forecaster will have to fall back on his or her knowledge of the atmosphere in order to make a good forecast.

Note that there is an important initial step that must be taken in any forecast: the forecaster must be able to identify the days on which value can be added. To do so, the forecaster must understand the model, what it is doing and how it works, and how the atmosphere works. That, in turn, will require a much larger (and better-honed) assortment of tools than forecasters have at present.

Fundamentally, as the accuracy of NWP models has improved over the years, the opportunities to add value to the forecast have decreased. As technology and the science of numerical modelling improve, the human forecaster is going to be left, in this vision of the future, with two basic areas of forecast responsibility:

1. "Rescuing" the NWP forecast from huge errors. These errors represent the hardest, but yet, the most important, forecasts. As a simple example, missing frontal passage by 6 hours and missing the temperature at which precipitation occurs during rush hour in a major city by 10-20 F can, in the right circumstances, cost the public hundreds of thousands of dollars in car repairs.

2. Rare, severe event forecasting. Hazardous weather of all sorts will continue to be difficult for NWP to handle in detail, since it will depend, in many cases, on quantities the model cannot forecast to the accuracy to which the atmosphere is sensitive. For instance, cloud modelling suggests that errors on the order of 1 C at 700 hPa are sufficient to cause incorrect forecasts of convection. The human forecaster should continue to play a critical role in this arena.
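To make that sensitivity concrete, here is a deliberately crude illustration in parcel-theory terms. The numbers are invented and this is nothing like a real cloud model; it simply shows how a 1 C error in the 700 hPa environmental temperature can flip a capped sounding into one that supports deep convection.

    # Toy parcel-theory check: does a lifted parcel break the cap at 700 hPa?
    # All numbers are invented for illustration.
    PARCEL_T_700 = 8.0     # lifted parcel temperature at 700 hPa (C)
    MODEL_ENV_T_700 = 8.5  # model's 700 hPa environmental temperature (C)
    TRUE_ENV_T_700 = 7.5   # actual environment, 1 C cooler than the model

    def convection_expected(parcel_t, env_t):
        # A parcel warmer than its environment is positively buoyant.
        return parcel_t > env_t

    print(convection_expected(PARCEL_T_700, MODEL_ENV_T_700))  # False: model says capped
    print(convection_expected(PARCEL_T_700, TRUE_ENV_T_700))   # True: storms form anyway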

In order to add value in these areas, future forecasters will have to use the science of meteorology in forecasting. Forecasting convection, for example, which would be critical in both areas, is extremely difficult. Doing it well requires knowledge of a vast range of scales of motion and behavior in the atmosphere. Human skills of pattern recognition and information processing should continue to be critical. It will require better-educated, better-trained forecasters. The lack of training given to entering forecasters in the NWS is shameful. The pro forma dismantling of the excellent training program of the Atmospheric Environment Service in Canada, by the expedient of not hiring any new meteorologists, is, I predict, going to be an extremely costly decision in the long run. That program involved a year of intensive training which connected theoretical understanding and practical application, at the end of which approximately one-third of the applicants were failed. As a result, the quality of the AES forecasters with whom I have had contact over the years has been uniformly high.

There is a significant probability that humans will be out of the forecasting loop, except at the presentation end, within 20 years. Part of this is due to a self-fulfilling-prophecy aspect of the evaluation of forecasters by management. Management may be looking, for budgetary or other reasons, to show that there is no value added by humans, and, as the Forecaster A/B example illustrates, many seemingly reasonable measures of added value will mask the true value of human forecasters. As a result, the perceived value may be low even if humans are helping in critical forecast areas. If humans are not viewed as adding value, they will not be a part of the process.

The second reason for taking humans out of the loop is that they may, in fact, not be adding important value often enough to justify their existence. If only a small number of forecasters are capable of adding such value, their contribution will be masked in the overall scheme of things, and they won't be kept around either.

In order for humans to remain an important part of the weather forecasting process, I believe that a fundamental change in our approach to the problem is necessary. It will take several steps:

1. better education and training,

2. an intelligent approach to the design and use of forecast verification (I would hypothesize that a possibly useful scheme would involve determining, first, how well forecasters can distinguish between days on which value can be added and days on which it cannot and, second, how well they do on the days where value can be added; a sketch of such a scheme follows this list), and

3. a comprehensive effort to understand how good forecasters actually operate.
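As a concrete, and entirely hypothetical, version of the verification scheme suggested in point 2, one might score a forecaster in two stages: first on discriminating the value-addable days, then on performance on those days. All of the data and the 5 F threshold below are invented for illustration.

    import math

    # Hypothetical daily verification records:
    # (MOS error in F, forecaster error in F, forecaster flagged the day
    #  as one where value could be added)
    days = [
        (2, 2, False), (-1, -1, False), (12, 4, True), (0, 0, False),
        (-3, -3, False), (8, 9, True), (1, 2, True), (-15, -5, True),
    ]

    THRESHOLD = 5  # |MOS error| above this = value could have been added

    # Stage 1: how well does the forecaster discriminate the two kinds of day?
    hits = sum(1 for m, _, f in days if abs(m) > THRESHOLD and f)
    misses = sum(1 for m, _, f in days if abs(m) > THRESHOLD and not f)
    false_alarms = sum(1 for m, _, f in days if abs(m) <= THRESHOLD and f)
    print(f"hit rate {hits / (hits + misses):.2f}, false alarms {false_alarms}")

    # Stage 2: on the value-addable days, was value actually added?
    def rmse(es):
        return math.sqrt(sum(e * e for e in es) / len(es))

    mos = [m for m, _, _ in days if abs(m) > THRESHOLD]
    fcst = [p for m, p, _ in days if abs(m) > THRESHOLD]
    print(f"value-addable days: MOS RMSE {rmse(mos):.1f} F, "
          f"forecaster RMSE {rmse(fcst):.1f} F")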

The last point is woefully overdue. I know many forecasters whom I believe to be excellent; I know many who are not. Except in the grossest way, I do not know how any of them go about making their forecasts. It seems obvious that understanding how good forecasters make their forecasts, and how that differs from what others do, should be an important step in improving the skills of the rest. One would surmise that a great deal of subconscious information processing takes place and that experts in the field of cognitive psychology might be useful in this work. Such an effort would also help in hiring new forecasters; in general, we do not know what skills to look for in identifying entry-level personnel with great promise.

If humans are going to have a role in the future of weather forecasting, we are going to have to hire good people with a scientific understanding of meteorology, train them well, and provide them with the appropriate technology to do their jobs. At the moment, I submit, we do not know how to identify good people, we do not train them well, and we do not give them the appropriate technology. Left on its present course, the future of human beings in operational weather forecasting is bleak.