Seasonal forecasts for the Atlantic 2008 hurricane season


The 2008 Atlantic hurricane season will be modestly more active than average, according to the December seasonal forecast issued by Dr. Bill Gray and Phil Klotzbach of Colorado State University (CSU). The CSU team is calling for 13 named storms, 7 hurricanes, 3 intense hurricanes, and an ACE index 20% above average (Accumulated Cyclone Energy (ACE) is a measure of the total destructive power of a hurricane season, based on the combined strength and duration of each storm's winds). An average season has 10 named storms, 6 hurricanes, and 2 intense hurricanes. The CSU forecast calls for a 15% above average chance of a major hurricane hitting the U.S. The odds for a major East Coast hurricane are put at 37% (a 31% chance is average), and the odds for the Gulf Coast are 36% (a 30% chance is average). The CSU team predicts that the current moderate La Nina event will weaken by the 2008 hurricane season, but still contribute to lower than average values of wind shear. In addition, warm sea surface temperatures are likely to continue in the tropical and North Atlantic during 2008, because we are in a positive phase of the Atlantic Multidecadal Oscillation (AMO), which began in 1995.
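For readers curious how ACE is actually tallied, the standard calculation (not CSU's code, and the storm record below is hypothetical) sums the squares of each storm's maximum sustained wind, sampled every six hours while the storm is at tropical-storm strength:

```python
# Sketch of the standard ACE calculation: sum the squares of the
# 6-hourly maximum sustained winds (in knots) while the storm is at
# tropical-storm strength (>= 35 kt), scaled by 1e-4.

def ace(six_hourly_max_winds_kt):
    """ACE contribution from one storm's 6-hourly max-wind record."""
    return 1e-4 * sum(v**2 for v in six_hourly_max_winds_kt if v >= 35)

# Hypothetical storm: two days as a 40-kt tropical storm,
# then one day as a 70-kt hurricane
storm = [40] * 8 + [70] * 4
print(round(ace(storm), 2))  # -> 3.24
```

A season's ACE is simply this quantity summed over every storm, which is why long-lived, intense storms dominate the index.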

The forecasters examined the observed atmospheric conditions and ocean temperatures in October-November 2007, and came up with a list of five past years that had a similar combination of a moderate La Nina event, near average tropical Atlantic sea surface temperatures (SSTs), and warm far North Atlantic SSTs. They expect 2008 to be similar to the average of these five analogue years. The five years were 2000 (14 named storms, 8 hurricanes, 3 intense hurricanes), 1999 (12, 8, and 5), 1989 (11, 7, and 2), 1956 (8, 4, and 2), and 1953 (14, 6, and 4). Hurricane Hugo of 1989 (Category 4) was the strongest hurricane to hit the U.S. in these five analogue years.


Figure 1. Accuracy of long-range forecasts of Atlantic hurricane season activity performed by Bill Gray and Phil Klotzbach of Colorado State University (colored squares) and TSR (colored lines). The skill is measured by the Mean Square Skill Score (MSSS), which looks at the error and squares it, then compares the percent improvement the forecast has over a climatological forecast of 10 named storms, 6 hurricanes, and 2 intense hurricanes. TS=Tropical Storms, H=Hurricanes, IH=Intense Hurricanes, ACE=Accumulated Cyclone Energy, NTC=Net Tropical Cyclone Activity. Image credit: TSR.

How good are these December hurricane season forecasts?
For the first time, the CSU team presents detailed information on the accuracy of their December forecasts. Past December forecasts by CSU have had no skill, and I've criticized them for not clearly stating this. I applaud their efforts in today's forecast, where it says in the 2nd paragraph of the abstract, "These real-time operational early December forecasts have not shown forecast skill over climatology during the period 1992-2007". Later in the report, they show that the squared correlation coefficient (r²), a standard mathematical measure of skill, is near zero for their December forecasts. As an example of this lack of skill, consider the figures presented in the November 2007 verification report. This report stated that 65% of their December forecasts between 1999 and 2007 correctly predicted whether the coming hurricane season would be above or below normal, for forecasts of number of named storms, hurricanes, intense hurricanes, and number of days these storms were present. That 65% figure sounds pretty good, but is it skillful? To answer that question, I tallied up how an almost zero-skill forecast would have done over the same period. My almost zero-skill forecast simply assumed that since we are in an active hurricane period that began in 1995, every hurricane season will have an above normal number of named storms, hurricanes, intense hurricanes, and number of days storms are present. The result? My almost zero-skill forecast got it right 65% of the time, exactly the same as the CSU December forecast.
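The tally described above amounts to computing a hit rate for a forecast that always says "above normal." A minimal sketch, using a made-up sequence of season outcomes purely for illustration (not the actual 1999-2007 verification data):

```python
# Hypothetical outcomes: True where a season verified above normal.
# This sequence is illustrative only, not the real 1999-2007 record.
above_normal = [True, True, False, True, True, False, True, True, True]

# The "almost zero-skill" forecast: always predict above normal,
# on the grounds that we are in an active hurricane era.
no_skill_forecast = [True] * len(above_normal)

hits = sum(f == o for f, o in zip(no_skill_forecast, above_normal))
hit_rate = hits / len(above_normal)
print(f"{100 * hit_rate:.0f}%")  # -> 78%
```

In an active era where most seasons verify above normal, this constant forecast scores well by construction, which is exactly why a raw hit rate alone says little about real skill.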

Another way to measure skill is the Mean Square Skill Score (MSSS), which looks at the forecast error and squares it, then compares the percent improvement the forecast has over a climatological forecast of 10 named storms, 6 hurricanes, and 2 intense hurricanes (Figure 1). The skill of the December forecasts issued by both CSU and Tropical Storm Risk, Inc. (TSR) has averaged near zero since 1992. Not surprisingly, the forecasts get better the closer they get to hurricane season. The TSR forecasts show more skill than the CSU forecasts, but it is unclear how much of this superiority is due to the fact that TSR issues forecasts of fractional storms (for example, TSR may forecast 14.7 named storms, while CSU uses only whole numbers like 14 or 15). TSR does an excellent job communicating their seasonal forecast skill. Each forecast is accompanied by a "Forecast Skill at this Lead" number, and they clearly define this quantity as "Percentage Improvement in Mean Square Error over Running 10-year Prior Climate Norm from Replicated Real Time Forecasts 1987-2006."
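The MSSS described above can be sketched in a few lines. This is an assumed form based on the definition in the text, not TSR's or CSU's actual verification code, and the forecast/observation numbers are hypothetical:

```python
# Mean Square Skill Score: percent improvement of the forecast's mean
# squared error over a fixed climatological forecast.

def msss(forecasts, observations, climatology):
    def mse(preds):
        return sum((p - o) ** 2 for p, o in zip(preds, observations)) / len(observations)
    return 100.0 * (1.0 - mse(forecasts) / mse([climatology] * len(observations)))

# Hypothetical named-storm forecasts vs. observed counts,
# scored against the climatological forecast of 10 named storms
fcst = [13, 11, 14, 9]
obs = [15, 12, 16, 9]
print(round(msss(fcst, obs, 10), 1))  # -> 86.4
```

A score of 0 means the forecast did no better than always guessing climatology, 100 means a perfect forecast, and negative values mean climatology would have been the better bet, which is roughly where the December forecasts have landed.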

The June and August forecasts from CSU, TSR, and NOAA show some modest skill, and are valuable tools for insurance companies and emergency planners to help estimate their risks. The key problem with earlier forecasts is that the El Nino/La Nina atmospheric cycle that can dominate the activity of an Atlantic hurricane season is generally not predictable more than 3-6 months in advance. For example, none of the El Nino forecast models foresaw the September 2006 El Nino event until April or May of 2006. Until we can forecast the evolution of El Nino more than six months in advance, December forecasts of Atlantic hurricane activity are merely interesting mental exercises that don't deserve the media attention they get. There is hope for the December forecasts, since Klotzbach and Gray (2004) showed that their statistical scheme could make a skillful forecast in December, when applied to 50 years of historical data. However, these "hindcasts" are much easier to make than a real-time forecast. For example, before 1995, it was observed that high rainfall in the Sahel region of Africa was correlated with increased Atlantic hurricane activity. This correlation was used as part of the CSU forecast scheme. However, when the current active hurricane period began in 1995, the correlation stopped working. Drought conditions occurred in the Sahel, but Atlantic hurricane activity showed a major increase. The CSU team was forced to drop African rainfall as a predictor of Atlantic hurricane activity.

Hotel owner threatens to sue Bill Gray for bad forecasts
Central Florida's most famous hotel owner, Harris Rosen, has threatened to sue Bill Gray because his bad forecasts have cost Florida billions of dollars in tourist revenue, according to a story published in November 2007 by WKMG Orlando. I think the record-breaking hurricane seasons of 2004 and 2005 had more to do with lost tourist revenue than any forecast by Bill Gray, so this is a rather ridiculous threat. However, these sorts of ugly accusations are the inevitable result of a culture where seasonal hurricane forecasts, which are not very good, are excessively hyped by both the forecasters and the media. The forecasters have set themselves up for such shrill condemnations by putting out these very public forecasts, complete with press conferences, but not properly emphasizing the uncertainties and low skill of their forecasts. By clearly stating their lack of forecast skill, the CSU team's December 2007 forecast is a great step towards improving this situation. The public needs to know that these December forecasts as yet have no skill, and are unworthy of the media attention they get.

References
Klotzbach, P.J., and W.M. Gray, "Updated 6-11 Month Prediction of Atlantic Basin Seasonal Hurricane Activity," Weather and Forecasting 19, Issue 5, October 2004, pp 917-934.

Dr. Jeff Masters