What is it about?
Forecasters use computer models to help predict the weather. One important and simple way to see how good these computer forecasts are is to check how well they predict where the high and low pressure systems are at a level about 5.5 km (3.5 miles) above the surface of the Earth. But this doesn’t tell the whole story, so we have come up with a new method, called “Summary Assessment Metrics,” or SAMs.

The first step in creating a SAM is to collect the forecasts for different times (24 h, 48 h, etc. into the future), different levels (near the surface, jet stream level, etc.), different regions (the Northern Hemisphere, the Tropics, etc.), and different variables (wind, temperature, etc.). Each forecast is given a grade, or ‘score,’ depending on how accurate it is. Then the scores are rescaled (what we call “normalized”) so that the best forecast gets a one and the worst forecast gets a zero. All the normalized scores are averaged to give the SAM. We looked at SAMs for three of the world’s leading computer models: the Global Forecast System (GFS) run by the National Oceanic and Atmospheric Administration in the U.S., the Unified Model (UM) run by the United Kingdom Met Office, and the Integrated Forecasting System (IFS) run by the European Centre for Medium-Range Weather Forecasts.
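To make the recipe concrete, here is a minimal sketch in Python of how a SAM could be put together. It assumes each component score is normalized across the models being compared, so the best value maps to one and the worst to zero, and that scores where “lower is better” (like error measures) are flipped. The model names are real, but the numbers, the component metrics, and this simplified per-component normalization are invented purely for illustration; the paper describes the actual procedure.

```python
import numpy as np

# Hypothetical raw scores: rows are models, columns are individual
# component metrics (one per variable/level/region/lead-time combination).
# The values are invented purely for illustration.
models = ["GFS", "UM", "IFS"]
raw = np.array([
    [0.87, 14.2, 0.78],   # GFS
    [0.90, 13.1, 0.81],   # UM
    [0.92, 12.5, 0.84],   # IFS
])
# Whether a higher value of each component means a better forecast
# (e.g., anomaly correlation: True; RMS error: False).
higher_is_better = [True, False, True]

def normalize_column(col, better_high):
    """Rescale one component score so the best forecast gets 1, the worst 0."""
    lo, hi = col.min(), col.max()
    if hi == lo:
        return np.ones_like(col)       # all forecasts tied on this component
    norm = (col - lo) / (hi - lo)
    return norm if better_high else 1.0 - norm

normalized = np.column_stack([
    normalize_column(raw[:, j], higher_is_better[j]) for j in range(raw.shape[1])
])

# The SAM for each model is the average of its normalized component scores.
sam = normalized.mean(axis=1)
for name, value in zip(models, sam):
    print(f"{name}: SAM = {value:.2f}")
```

In this toy example the averaging is over three component scores; in practice a SAM averages over many combinations of variable, level, region, and forecast lead time.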
Why is it important?
We present a new way to summarize the many facets of forecast skill.
Read the Original
This page is a summary of: Progress in Forecast Skill at Three Leading Global Operational NWP Centers during 2015–17 as Seen in Summary Assessment Metrics (SAMs), Weather and Forecasting, December 2018, American Meteorological Society, DOI: 10.1175/waf-d-18-0117.1.