NHC Analysts’ Predictive Models Improve Hurricane Season Forecasts

By Stephanie Overby  |  May 31, 2013

Satellite image taken May 31, 2013 of the Atlantic Ocean and coastal regions that are the focus of the National Hurricane Center’s forecasts. Image via National Oceanic and Atmospheric Administration.

With the Atlantic hurricane season beginning June 1, chances are that the storm-tracking forecasts from the National Hurricane Center (NHC) will be more accurate than ever. The Miami-based division of the U.S. National Weather Service responsible for predicting tropical weather systems has reduced its tracking errors by 60 percent since 1990. “We wonder whether we’ll hit limits of predictability because errors have been coming down so rapidly,” says James Franklin, branch chief of NHC’s hurricane specialist unit.

The improved tracking is a result of computing advances that have enabled NHC’s predictive models to become faster and capable of resolving increasingly smaller features in the atmosphere. Also, “the number of data sources keeps increasing,” explains Franklin. But, perhaps more importantly, the NHC has gotten smarter about how it incorporates that data.


In the 1970s, the center relied on information from ships and weather balloon radiosondes that measure atmospheric conditions like temperature, humidity, and wind. “We had very little information over the open ocean to describe the actual environment that hurricanes were moving through,” says Franklin, who has studied hurricanes for more than 30 years.

In the 1980s, there was an explosion of new information from satellites that continues to proliferate today. But for a decade, the center tried to manipulate that new data—raw radiance measurements of wavelength bands or frequencies that can be used to determine the strength of tropical systems or locate weather fronts—for use in existing models.

“We didn’t know what to do with radiance measurements, but we knew what to do with balloon information. So we took that radiance data and tried to make it look like balloon data, because our models knew how to run on that,” says Franklin. Those models weren’t very successful. “We really started to make progress when we changed the way the data was assimilated,” Franklin says. “We allowed the data to be what it was and built models that were smarter.”
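
As a rough illustration of the shift Franklin describes, a variational assimilation scheme lets the model carry an operator that simulates what the satellite would see, so a radiance can be compared on its own terms instead of being inverted into pseudo-balloon data. The toy observation operator and every number below are invented for illustration; this is a minimal sketch, not NHC code:

```python
from scipy.optimize import minimize_scalar

# Toy 1-D variational assimilation: assimilate a satellite radiance directly
# through an observation operator h(x) that maps the model state to what the
# instrument would see, rather than converting the radiance into fake
# balloon data first. All numbers here are invented.

def h(x):
    """Hypothetical observation operator: temperature (K) -> simulated radiance."""
    return 0.8 * x + 12.0  # stand-in for a real radiative-transfer model

x_b = 285.0     # background (model first-guess) temperature, K
sigma_b = 1.5   # background error standard deviation
y = 241.0       # observed radiance (arbitrary units)
sigma_o = 2.0   # observation error standard deviation

# 1D-Var cost: weigh disagreement with the background against disagreement
# with the observation, each scaled by its error variance.
def J(x):
    return ((x - x_b) / sigma_b) ** 2 + ((y - h(x)) / sigma_o) ** 2

x_a = minimize_scalar(J, bracket=(270.0, 300.0)).x
print(f"analysis temperature: {x_a:.2f} K")
```

The point of the design is the one Franklin makes: the data stays what it is, and the model does the work of meeting it halfway.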

Testing Models Before Going Public with Forecasts
Given the importance of its storm track, intensity, and landfall predictions, the NHC is extremely cautious about introducing new forms of forecasting. “When we introduce a new product, it’s only after we’ve done extensive in-house testing to make sure accuracy is good enough to be useful,” says Franklin, who spent 17 years researching storms aboard the National Oceanic and Atmospheric Administration’s hurricane hunter aircraft. The center worked on its five-day hurricane track forecasts internally for two years before taking them public. For the previous 30 years, the standard had been the three-day forecast.

James Franklin of the National Hurricane Center

To this day, the center will only offer storm size forecasts 36 hours out “because we don’t have any skill beyond that,” says Franklin. “Over time, if you’re careful about what you put out there and have a quality product, you will build trust with customers and partners.” The center has been testing a five-day “genesis” product—which forecasts hurricane formation—for three years and may release it this year. Early trials of six- and seven-day hurricane forecasts are going well. However, testing of a product for intensity forecasting of tropical disturbances is not going as well. “That turns out to be hard,” says Franklin. “But we’re not surprised that it’s hard.”

“Everyone wants us to do more and do more and do more, and sometimes the best answer is to say, ‘No, we’re not capable of meeting that need,’” says Franklin. “A lot of meteorologists hate to say, ‘I don’t know.’ I don’t mind saying that. And, as an institution, we don’t mind saying that.”

That approach has earned the NWS and NHC fans not only in meteorology but also among predictive modeling experts. In his book The Signal and the Noise, New York Times data-crunching blogger Nate Silver gave both agencies high marks, not only for their use of data but for their judgment in analyzing it, contrasting them with commercial forecasters whose judgment can be clouded by other motivations.

“Part of why we have a good reputation is because we are very careful not to go beyond the science,” says Franklin. “We’re not interested in ratings. We have no bias for intensity. If we want people to respond when we tell them something is coming, they have to be confident in what we’re telling them.”

The Human Factor and the Role of El Niño
The NHC’s official track forecasts are generally a bit better than the best model alone and about the same as the average of the best group of models, says Franklin. That’s because the physics of hurricane tracking is well understood. When it comes to intensity forecasting, the human forecaster brings much more to the table; the models are not yet fine-grained enough to replicate all the physical processes that go into determining intensity. “We can beat, fairly often, the quality of the intensity guidance that we’re given,” says Franklin.
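
As a rough sketch of why a multi-model consensus is hard to beat, averaging several track forecasts tends to cancel the individual models’ errors. The model names and positions below are invented; operational consensus aids are more careful (for example, they select members and handle longitude wraparound):

```python
import numpy as np

# Minimal consensus track forecast: average the positions several models
# predict for the same lead time. All names and positions are invented.
tracks = {  # hypothetical 48-hour position forecasts (lat, lon)
    "model_a": (25.1, -79.8),
    "model_b": (25.6, -80.4),
    "model_c": (24.9, -80.1),
}

consensus = np.array(list(tracks.values())).mean(axis=0)
print(f"consensus 48-h position: {consensus[0]:.1f}N, {abs(consensus[1]):.1f}W")
```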

NHC’s official guidance might beat the purely computational models more often were it not for the center’s concern for delivering a consistent message. “The last thing we want is to have an 11 a.m. forecast that says a hurricane is going to Miami, then at 5 p.m. have it going to Jacksonville, and at 11 p.m. have it going to Miami again. Or go from a Cat 5 to a Cat 1 to a Cat 3,” says Franklin. Any adjustments to the forecast are made incrementally. “Even if, deep down, we think a larger adjustment is necessary, we know in six hours we may feel differently about it. Models have no constraints. They give a blind analysis,” says Franklin. “That can hurt us in our scores. And when things are rapidly changing, we will not catch those as well. But in the long run, it enhances our credibility.”
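
A toy version of that incremental approach might look like the following: nudge the official number only part of the way toward new guidance at each advisory instead of jumping outright. The 10-knot cap per advisory is an invented parameter, not anything the NHC publishes:

```python
# Sketch of the consistency constraint Franklin describes. The step cap is
# an invented illustration, not an NHC rule.

def updated_forecast(previous, guidance, max_step):
    """Step toward new guidance, but by no more than max_step per advisory."""
    delta = guidance - previous
    return previous + max(-max_step, min(max_step, delta))

official = 100.0  # current official intensity forecast, kt
for guidance in (130.0, 95.0, 125.0):  # volatile raw model guidance
    official = updated_forecast(official, guidance, max_step=10.0)
    print(f"guidance {guidance:.0f} kt -> official {official:.0f} kt")
```

Damping like this costs accuracy when a storm really is changing fast, which is exactly the trade-off Franklin concedes: worse scores in the short run, more credibility in the long run.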

Forecasting the seasonal outlook—like the NHC’s prediction of an above-average 2013 Atlantic hurricane season with seven to 11 hurricanes—is a bit trickier.

“Each year, those forecasts get a little bit better, but there are still misses,” says Franklin. “It’s better than guessing.” Predicting the entire season depends in large part on predicting what will happen with El Niño, the periodic warming of waters in the equatorial Pacific that can affect the Atlantic hurricane season. “We have failures in the seasonal forecast if we don’t anticipate El Niño properly,” says Franklin. “That’s happened.”

Data Applied to Solving an Important Puzzle
Parsing big data to predict the impact of big storms has lessons for any industry seeking to extract accurate information from disparate sources.

“Each piece of data tells you a little bit about the problem, but no one piece of data directly tells you how strong a storm is or where it’s going,” says Franklin. “You have to try to figure out how much of that is instrument error, or what is real information but not important to you, and put the right pieces together to create a picture.”

Franklin likens it to a jigsaw puzzle. “There are people who are good at it and there are people who aren’t good at it. There are those who treat data as gospel: If I observe ‘a,’ then ‘b’ must be so. But ‘a’ may only tell you that ‘b’ is likely and you have to evaluate that likelihood based on ‘c’ and ‘d’ and ‘e,’” he says. “It’s puzzle solving. I find it fun. I’d like to think I’m good at evaluating what the data is telling me. But you have to have the big picture in your mind.”
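
One simple formalization of that weighing of evidence is inverse-variance weighting, which lets noisier pieces of the puzzle count for less. The data sources and error figures below are invented for illustration:

```python
import numpy as np

# Blend disparate, imperfect observations by giving each one influence in
# inverse proportion to its error variance. Sources and errors are invented.
estimates = {  # (estimated max wind, kt; error std dev, kt)
    "satellite_pattern": (105.0, 10.0),
    "aircraft_recon":    (112.0, 5.0),
    "scatterometer":     (95.0, 15.0),
}

values = np.array([v for v, _ in estimates.values()])
weights = np.array([1.0 / s ** 2 for _, s in estimates.values()])
blended = (weights * values).sum() / weights.sum()
print(f"blended intensity estimate: {blended:.0f} kt")
```

In Franklin’s terms, the weights encode how much of each observation is “instrument error” versus real information, so no single piece of data is treated as gospel.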

Stephanie Overby, a contributing editor at Data Informed, is a Boston-based freelance writer. Follow her on Twitter: @stephanieoverby.
