New technology and numerical weather prediction - a wasted opportunity?

Harold E. Brooks and Charles A. Doswell III

NOAA/National Severe Storms Laboratory

Norman, Oklahoma, USA

[Article appeared in Weather, 48, 173-177. Copyright Royal Meteorological Society. It is reproduced here with the permission of the Royal Meteorological Society.]

Recently, this journal has seen an ongoing discussion about the future of operational weather forecasting (e.g. Tennekes 1988, 1992; McIntyre 1988). We would like to add our thoughts to that discussion, pointing out an alternative approach to the future use of numerical models.

Technological developments and theoretical advances in this century have brought about revolutions in the way weather forecasting is done, although scientific advances have not always been accepted immediately (Ashford 1992). The implementation of an upper-air observation network changed the way we see the atmosphere. Coupled with the development of the basics of a theory of large-scale motion (baroclinic instability and quasi-geostrophic theory), that observational breakthrough set the stage for taking advantage of the powerful technology of computers. The creation, after World War II, of the first numerical weather prediction (NWP) models (e.g. Phillips 1951) transformed the practice of weather forecasting, to the point that the role of human forecasters is expected by some to shrink to insignificance in the future. That issue cannot, however, be divorced from the philosophical issues surrounding NWP, including how we, as meteorologists, use numerical models and extract information from them. McIntyre (1988) has suggested that the information-processing ability of humans actually warrants an increasingly important role in forecasting in the future. However, this is not the direction being taken by national weather services. It has been suggested that the role of human forecasters will be limited to forecasts for less than 36 hours (Friday 1988), despite evidence that forecasters can add significant value to NWP products even in 'routine' weather situations out to 48 hours and beyond (Ricketts, personal communication).

Increases in the speed of computers over the next decade provide the opportunity for another revolution in the world of weather forecasting. Currently, supercomputers used for NWP run at speeds on the order of 1-10 gigaflops (1-10 x 10^9 floating-point operations per second, or flops). A speed of 10 gigaflops is sufficient for the National Meteorological Center, the operational NWP facility in the USA, to plan to run a 30 km horizontal-resolution version of the 'Eta' model over a domain covering the contiguous 48 states beginning in 1993. By early in the next century, however, there may be machines capable of running at 1 petaflop (10^15 flops), 10^5-10^6 times the speed of current machines.

A fundamental issue concerns what should be done with such an increase in computer speed. The mere existence of such power does not guarantee that we, as a community, will be able to take full advantage of the speed-up. For a wide variety of reasons, some parts of the NWP process may not be amenable to that much acceleration. To be conservative, let us assume an effective speed-up (including the effects on model performance of more complicated parametrizations of physical processes and of 'slow' tasks such as model initialisation and dissemination) of 10^3. What, then, should be done with a computer a thousand times more powerful than the ones we have now?

It seems that the answer universally given by operational agencies planning the next generation of NWP models is to increase the resolution (e.g. McPherson 1991; World Meteorological Organization 1992). Assuming that the time-step of the model must be decreased proportionally with decreasing grid spacing, a 10^3 increase in computing power will allow a ten-fold increase in horizontal resolution (if the vertical resolution is kept constant). For example, the US 30 km Eta model could become a 3 km horizontal-resolution model. In fact, planning is under way for a 4 km operational version (McPherson 1991). Research over the next decade will focus on the formidable challenges of developing new parametrizations of physical processes, such as convection, appropriate to small scales. Current research models have not been successful in dealing with convection using a horizontal grid spacing of 4-5 km (Weisman et al. 1991; Cram et al. 1992).
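
The underlying arithmetic can be sketched in a few lines (a hypothetical illustration; the function and names are ours, not drawn from any planning document):

```python
# A rough sketch of the resolution arithmetic. With the vertical resolution
# fixed, refining the horizontal grid by a factor r multiplies the work by r
# in each of two horizontal dimensions and, because the time-step shrinks
# proportionally, by r again in time: cost ~ r^3.

def affordable_refinement(speedup):
    """Horizontal resolution increase affordable for a given speed-up."""
    return speedup ** (1.0 / 3.0)

speedup = 1.0e3                                  # conservative effective speed-up
r = affordable_refinement(speedup)
print(f"{speedup:.0e} speed-up -> {r:.0f}x finer grid")      # 10x
print(f"30 km Eta grid -> {30.0 / r:.0f} km grid spacing")   # 3 km
```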

We believe this approach to exploiting enhanced computer power is, in some ways, a product of historical inertia. Philosophically, our use of numerical models has not changed since the models became an important part of forecasting perhaps three decades ago. We take the 'best' model which we believe will run reliably and run it at the highest resolution that fits into the current computational environment. Improvements in computer power are almost invariably devoted to increasing model resolution. Along the way, physical parametrizations are changed and, occasionally, a new basic model replaces the old one. We might run two or three models as part of the forecast cycle, but they are designed generally to fit different slots, e.g., a local high-resolution model of a single country or a small region and a hemispheric or global model at coarser resolution. A single set of products showing fields derived from those models is then delivered to the forecaster for his or her use as 'guidance'.

Another, quite different, conceptual approach to the utilisation of increased computing power is to carry out large numbers of forecasts at the same resolution used currently, generating an ensemble of evolutions of the atmosphere. The utility of running multiple forecasts beginning at the same time on the global scale has been discussed by Mullen and Baumhefner (1991), but any operational use at higher resolution has been extremely limited, to our knowledge. [Some experimental ensemble forecasting in the 6-10 day range is under way at the National Meteorological Center, but there are no plans of which we are aware to extend this experiment to shorter ranges (e.g. 12-48 hours)].

The major advantage of the single high-resolution mode of operation is that it creates a detailed picture of the atmosphere. When that picture is correct, very accurate forecasts with high temporal and spatial detail can be produced (Droegemeier 1990). The disadvantage arises when the model forecast is wrong. Since both current operational NWP models and research mesoscale models typically perform best in situations in which the atmosphere is dominated by large-scale, quasi-geostrophic forcing (Antolik and Doswell 1989; Stensrud 1992), the guidance tends to be at its best when the forecasting situation is, in some sense, easiest. Thus, when guidance is most needed, it is most likely to be wrong. Since a single product is delivered, the forecaster is left with a purely binary decision: the model guidance can be either accepted or rejected. Of course, the forecaster can attempt some kind of subjective compensation for perceived errors in the forecast, but there are relatively few tools devoted to that task and forecasters often are discouraged from changing model guidance. In effect, when the model forecast is poor, there is no guidance. As an example of such an occurrence, we point to the problems associated with the storm of 15-16 October 1987 over southern England (Morris and Gadd 1988). For an extensive discussion of the problems of high-resolution NWP, see Brooks et al. (1992a).

The primary advantage of the ensemble approach is that it can provide a notion of the probable evolution of events. Everyone recognizes that there is some error in the observations taken of the atmosphere. Beyond that, initialisation schemes for numerical models must fill in the gaps between the observations. Hence, the initial state from which the numerical integration begins is uncertain. High-resolution models of thunderstorms have been shown to be very sensitive to small changes in the initial conditions (Brooks 1992; Brooks et al. 1992b). As a result, the inherent uncertainty in the observations could result in major errors in a forecast from models on that scale. The ensemble approach explicitly recognizes the uncertainty in the initial conditions and attempts to take advantage of it, rather than gambling that the initialisation is correct.
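
A minimal sketch of the idea, using the familiar Lorenz three-variable chaotic system as a stand-in for a forecast model (every name and number here is illustrative, not an operational scheme):

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one forward-Euler step."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(0)
control = np.array([1.0, 1.0, 1.0])      # 'best guess' initial state
n_members, ic_error = 20, 1.0e-3         # perturbations ~ observational uncertainty
ensemble = control + ic_error * rng.standard_normal((n_members, 3))

for _ in range(2000):                    # integrate every member forward in time
    ensemble = np.array([lorenz_step(m) for m in ensemble])

# Wide spread among members signals an unpredictable situation; narrow
# spread signals that the initial uncertainty matters little today.
print("ensemble spread in x:", ensemble[:, 0].std())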

Further, Tennekes et al. (1987) have pointed out that "no forecast is complete without a forecast of forecast skill". An ensemble provides an idea of the skill of the forecast by indicating the probability of each scenario, which is a measure of confidence. It also provides some notion of the low-probability events, as well as the high-probability ones. It has been our experience that operational forecasters have their biggest problems when surprised by the atmosphere. A feeling for low-probability, but still possible, weather should reduce the number of times forecasters are caught by surprise. Finally, ensemble NWP would recognise that weather forecasting is inherently probabilistic and that probability is the appropriate language of forecasting (Sanders 1963).
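
Schematically (the rainfall values and threshold below are invented solely for illustration), an ensemble converts directly into event probabilities and a confidence measure:

```python
import numpy as np

# 'forecasts' might be 24-hour rainfall (mm) at one place from each member.
forecasts = np.array([2.0, 3.5, 0.0, 41.0, 5.0, 1.5, 38.0, 4.0, 2.5, 3.0])

heavy_rain = 25.0                             # illustrative event threshold
p_event = (forecasts >= heavy_rain).mean()    # fraction of members with the event
spread = forecasts.std()                      # large spread -> low confidence

print(f"P(heavy rain) = {p_event:.0%}")       # 20%: low-probability but possible
print(f"ensemble spread = {spread:.1f} mm")
```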

The problems with a system of ensemble forecasting are primarily logistical and are found in the initialisation and dissemination stages. It seems obvious that a formal Monte Carlo approach would not be a wise use of resources. The 'space' of initial conditions is infinite but, clearly, some regions of it would not be of interest on a particular day. McIntyre (1988) suggests that a basic set of initial conditions always be run, with provision for additional model runs determined by human forecasters, who would specify the most sensitive geographic locations to modify for that forecast. The result would be, in some sense, a 'directed' Monte Carlo approach.
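
A sketch of what such a directed approach might look like in code, with a purely hypothetical grid and a sensitive region chosen by the forecaster:

```python
import numpy as np

# Only a forecaster-chosen sensitive region is perturbed, rather than the
# whole analysis. The grid, field, and region below are hypothetical.
rng = np.random.default_rng(1)
analysis = np.zeros((50, 50))              # stand-in for an analysed field

mask = np.zeros_like(analysis, dtype=bool) # forecaster marks the sensitive area
mask[10:20, 30:45] = True                  # e.g. a poorly observed region upstream

members = []
for _ in range(10):                        # basic set of perturbed initial states
    member = analysis.copy()
    member[mask] += rng.standard_normal(mask.sum())  # vary only inside the mask
    members.append(member)
```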

Beyond simply focusing on geographic sensitivity, a forecaster may also wish to consider other components of the model. For example, forecasters might vary the surface moisture flux in the poorly sampled air above the oceans, vary the parametrization of physical processes within the model, or modify the data assimilation scheme that determines the initial conditions. Importantly, the nature of the variation could change from forecast to forecast, depending on the situation. One aspect of human forecast skill would be the ability to make good decisions about the ensemble to be used on a given day. The basic goal is to use the model to ascertain which forecasts are most probable, as well as to determine the range of possibilities on a given day. We see this as a task requiring skill at meteorological judgment. The best forecasters would make good decisions about how to craft the model runs to provide themselves with the best guidance possible for a given situation. In such a procedure, we can take advantage of the unique human skills in visual data-processing, and in the interpretation of and interaction with those data, to which McIntyre (1988) refers.
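
One might imagine the forecaster's ensemble 'design' as something like the following sketch, in which the parameter names and values are entirely hypothetical placeholders:

```python
from itertools import product

# An ensemble 'design' need not vary initial conditions alone.
ic_amplitudes = [0.5, 1.0, 2.0]                # relative IC perturbation sizes
convection_schemes = ["scheme_A", "scheme_B"]  # alternative parametrizations
moisture_flux_scales = [0.8, 1.0, 1.2]         # oceanic surface-flux uncertainty

members = [{"ic_amp": a, "convection": c, "flux_scale": f}
           for a, c, f in product(ic_amplitudes, convection_schemes,
                                  moisture_flux_scales)]
print(len(members), "member configurations")   # 18 in this sketch
```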

Dissemination of the model guidance provides another significant challenge. It is easy to define a mean forecast from an ensemble, but the more important task of preserving information about the range of possible alternatives is much more difficult. Some way of expressing and displaying the variability of the forecasts will need to be developed. Any solution must also work over a range of scales, since a forecaster in a given location may need different information from a forecaster hundreds of kilometres away, and the distribution of ensemble forecast sensitivity may be different in those places. The problem of maintaining information from a large number of forecasts may itself limit the size of the ensemble, even on petaflop machines.
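
One simple possibility, sketched below with illustrative shapes and values, is to reduce the ensemble at each gridpoint to a mean, a spread, and a percentile envelope, so that the range of alternatives is preserved alongside the single 'best' field:

```python
import numpy as np

# Summarising an ensemble for dissemination. Shapes are illustrative.
rng = np.random.default_rng(2)
ensemble = rng.standard_normal((20, 50, 50))          # (member, y, x) fields

mean = ensemble.mean(axis=0)                          # the conventional product
spread = ensemble.std(axis=0)                         # where members disagree
p10, p90 = np.percentile(ensemble, [10, 90], axis=0)  # envelope of outcomes
```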

The monitoring of observations by the forecaster is critical to the success of our concept. The forecaster must remain involved with the data throughout the forecast shift. Computer resources will need to be used to compare the model forecasts and the observations rapidly. Flexible methods of interacting with both the data and the models are necessary in this process (Doswell 1992). The comparison of forecasts and observations would identify the data most useful for early recognition of which forecast scenario is evolving, allowing the forecaster to concentrate on those data. While the forecaster watches the atmosphere, the computer can watch the model forecasts and, together, they can narrow the range of probable forecasts. As the forecast cycle goes on, this process of narrowing the range of likely outcomes and identifying critical observations would continue. If appropriate, the forecaster could focus on the 'best' numerical forecast and treat it as an individual product, much as the current single-model forecast cycle allows.
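
A crude sketch of such monitoring follows; the data and the weighting scheme are placeholders, not a proposal for an operational algorithm:

```python
import numpy as np

# Score each member against the observations received so far and
# re-weight the ensemble accordingly.
rng = np.random.default_rng(3)
member_forecasts = rng.standard_normal((20, 30))   # (member, observing site)
observations = member_forecasts[7] + 0.1 * rng.standard_normal(30)  # near member 7

rmse = np.sqrt(((member_forecasts - observations) ** 2).mean(axis=1))
weights = np.exp(-rmse / rmse.min())               # down-weight poor fits
weights /= weights.sum()

print("scenario that currently fits the data best: member", rmse.argmin())
```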

Clearly, the challenges of designing an efficient ensemble forecasting system for synoptic and sub-synoptic scales to take advantage of the computer resources of a decade from now are formidable. However, it is not clear that they are any more difficult than those facing the single high-resolution run approach (Brooks et al. 1992a). Since we feel that, all else being equal, an implemented ensemble-model strategy is preferable to the single-run strategy in an operational setting, we are disappointed to see what appears to us to be an unstoppable march towards higher- and higher-resolution models as the only approach to NWP. If ensemble forecasting is being pursued in the world's operational centres on anything other than a purely experimental basis, it has not yet been mentioned as a viable alternative in their published visions of the future. We are concerned that failure to explore other options will lead to further diminution of the forecasting skills of humans, as the forecaster becomes a slave to the numerical guidance. We echo the alarm of Tennekes (1988) about the lack of vision concerning the role of humans in forecasting. Beyond that, however, we are alarmed about an equivalent lack of vision about the role of technology in forecasting. We have the opportunity to rethink fundamentally our approach to NWP. The time it will take to have models ready to take advantage of petaflop machines may well be of the same order as the time before such machines are available, perhaps a decade. If we do not start now to explore alternative uses of NWP models, we may miss, or at the very least delay by years, the possibility of a true revolution and improvement in weather forecasting. Taking less than full advantage of both the human and the machine side of the forecasting problem would be a crucial mistake.

Acknowledgments

We thank Professor Hendrik Tennekes for his correspondence on this matter and for pointing out several references relevant to the future of forecasting and predictability. Mr Steve Ricketts of the Atmospheric Environment Service of Environment Canada provided us with the information about the ability of human forecasters to add value to NWP products.

References

Antolik, M. S. and Doswell, C. A., III (1989) On the contribution to model-forecast vertical motion from quasi-geostrophic processes. In: Preprints, 12th Conference on Weather Analysis and Forecasting, Monterey, California, USA, American Meteorological Society, pp. 312-318

Ashford, O. M. (1992) Development of weather forecasting in Britain, 1900-40: The vision of L. F. Richardson. Weather, 47, 394-402

Brooks, H. E. (1992) Operational implications of the sensitivity of modelled thunderstorms to thermal perturbations. In: Preprints, Fourth Workshop on Operational Meteorology, Whistler, British Columbia, Canada, Atmospheric Environment Service/Canadian Meteorological and Oceanographic Society, pp. 398-407

Brooks, H. E., Doswell, C. A., III and Maddox, R. A. (1992a) On the use of mesoscale and cloud-scale models in operational forecasting. Wea. Forecasting, 7, 120-132

Brooks, H. E., Doswell, C. A., III and Wicker, L. J. (1992b) STORMTIPE: A forecasting experiment using a three-dimensional cloud model. In: Preprints, Fourth Workshop on Operational Meteorology, Whistler, British Columbia, Canada, Atmospheric Environment Service/Canadian Meteorological and Oceanographic Society, pp. 253-261

Cram, J. M., Pielke, R. A. and Cotton, W. R. (1992) Numerical simulations and analysis of a prefrontal squall line. Part II: Propagation of the squall line as an internal gravity wave. J. Atmos. Sci., 49, 209-225

Doswell, C. A., III (1992) Forecaster workstation design: Concepts and issues. Wea. Forecasting, 7, 398-407

Droegemeier, K. K. (1990) Toward a science of storm-scale prediction. In: Preprints, 16th Conference on Severe Local Storms, Kananaskis Park, Alberta, Canada, American Meteorological Society, pp. 256-262

Friday, E. W., Jr. (1988) The National Weather Service Severe Storms Program - Year 2000. In: Preprints, 15th Conference on Severe Local Storms, Baltimore, Maryland, USA, American Meteorological Society, pp. J1-J8

McIntyre, M. E. (1988) Numerical weather prediction: A vision of the future. Weather, 43, 294-298

McPherson, R. D. (1991) 2001 - An NMC odyssey. In: Preprints, 9th Conference on Numerical Weather Prediction, Denver, Colorado, USA, American Meteorological Society, pp. 1-4

Morris, R. M. and Gadd, A. J. (1988) Forecasting the storm of 15-16 October 1987. Weather, 43, 70-90

Mullen, S. L. and Baumhefner, D. P. (1991) Monte Carlo simulations of explosive cyclogenesis using a low-resolution, global spectral model. In: Preprints, 9th Conference on Numerical Weather Prediction, Denver, Colorado, USA, American Meteorological Society, pp. 750-751

Phillips, N. A. (1951) A simple three-dimensional model for the study of large-scale extratropical flow patterns. J. Meteorol., 8, 381-394

Sanders, F. (1963) On subjective probability forecasting. J. Appl. Meteorol., 2, 191-201

Stensrud, D. J. (1992) Southward burst mesoscale convective systems: An observational and modeling study. PhD dissertation, Pennsylvania State University, 184 pp. (Available from Department of Meteorology, Pennsylvania State University, University Park, Pennsylvania 16802)

Tennekes, H. (1988) Numerical weather prediction: Illusions of security, tales of imperfection. Weather, 43, 165-170

Tennekes, H. (1992) Karl Popper and the accountability of numerical weather forecasting. Weather, 47, 343-346

Tennekes, H., Baede, A. P. M. and Opsteegh, J. D. (1987) Forecasting forecast skill. In: Proceedings ECMWF Workshop on Predictability, Reading, April 1986

Weisman, M. L., Klemp, J. B. and Skamarock, W. C. (1991) The resolution-dependence of explicitly-modeled convection. In: Preprints, 9th Conference on Numerical Weather Prediction, Denver, Colorado, USA, American Meteorological Society, pp. 38-41

World Meteorological Organization (1992) Current trends and achievements in limited area models for numerical weather prediction research. Programme on Weather Prediction Research Report Series No. 3, WMO/TD No. 510

