With all the messages recently regarding the models, I thought this might help clear things up a bit. People often ask how a model can sometimes make such drastic changes from one run to the next. Or why they aren't more accurate given today's technology.
It starts back in the early 1960s at MIT, when Edward Lorenz, a research meteorologist, was running his own program to model the weather. It was extremely simple compared to what we have today - just a small number of variables such as wind, temperature, and pressure, programmed to follow the basic laws of physics. Like the models today, Lorenz's model was deterministic - in other words, if you start with the same initial conditions you will get the exact same result every time you run the model. In 1961, Lorenz wanted to take a closer look at one of his model runs. Instead of starting the run over, he typed in the numbers from a printout of the middle part of the run. Entering those same numbers should have given the exact same output. But it didn't. The new run started diverging until, before long, it didn't even resemble the original run at all.
Why? The numbers Lorenz had entered from the printout were only carried out to 3 decimal places, while the computer worked with 6. Lorenz assumed the difference - roughly one part in a thousand - wouldn't matter, since weather instruments weren't even that accurate. But it did matter, and in a big way. This is what's called sensitive dependence on initial conditions.
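To see this effect in action, here's a minimal Python sketch. It's not Lorenz's actual 1961 program - it just uses the well-known Lorenz '63 equations with a simple Euler step - but it runs the same deterministic system twice, once from a "full precision" starting point and once from the same numbers rounded to 3 decimal places, the way the printout was:

```python
# A toy demonstration of sensitive dependence on initial conditions,
# using the Lorenz '63 equations (not Lorenz's original weather model).
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

full = np.array([1.000001, 1.000001, 1.000001])   # the "6 decimal place" start
rounded = np.round(full, 3)                        # the "3 decimal place" printout

for step in range(3001):
    if step % 1000 == 0:
        print(f"step {step:5d}  separation = {np.linalg.norm(full - rounded):.6f}")
    full = lorenz_step(full)
    rounded = lorenz_step(rounded)
```

The separation starts out around a millionth and keeps growing until the two runs are about as far apart as the system allows - roughly what Lorenz saw on his printouts.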
This is the same problem we have today... small differences in initial conditions, smaller than our instruments are capable of measuring, can make a huge difference in the outcome. One way to help compensate for this is to look at ensembles. In one type of ensemble, the same model is run several times with slightly different initial conditions (and sometimes with slightly different physical parameters). If most of the runs end up basically the same, we can have more confidence in the forecast. Another type of ensemble is to simply look at all (or a selection of) the available models, each of which is initialized a little differently and uses its own set of physics. If those models start 'coming together,' we can have much higher confidence in the outcome they predict. Think of a spaghetti plot that's all over the place compared to one that tightens up with the models overlapping.
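Here's a rough sketch of that first kind of ensemble - perturbed initial conditions - again using the toy Lorenz '63 equations as a stand-in for a real forecast model. The member count and the perturbation size are arbitrary choices for illustration:

```python
# A toy initial-condition ensemble: run the same simple model many times
# from slightly perturbed starting points and watch how the members spread
# apart with lead time. Everything here (the Lorenz '63 stand-in model,
# 20 members, 0.001 perturbations) is illustrative, not operational.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

rng = np.random.default_rng(42)
n_members = 20

# Every member starts from the same "analysis" plus a tiny random tweak.
members = np.array([1.0, 1.0, 1.0]) + rng.normal(scale=0.001, size=(n_members, 3))

for step in range(1, 1501):
    members = np.array([lorenz_step(m) for m in members])
    if step % 500 == 0:
        # Tight spread -> more confidence; spread "all over the place" -> less.
        print(f"step {step:4d}  spread = {members.std(axis=0).round(3)}")
```

A tight cluster at a given lead time is the code equivalent of a spaghetti plot where the lines overlap; a wide cluster is the plot that's all over the place.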
Basically, very small errors in a model's input (1008.240 mb instead of 1008.239 mb) are multiplied and become larger and larger the further out the model is run. What this tells us is that we shouldn't rely on a single model, and definitely not on a single run. Look for trends across successive runs and for agreement between the models.
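To put a crude number on that multiplication, here's a back-of-the-envelope sketch. The error-doubling time used here is an assumed, illustrative figure, and real error growth eventually saturates and also picks up model error, so treat this only as a picture of exponential growth:

```python
# Back-of-the-envelope error growth: if the error roughly doubles every
# couple of days (an assumed, illustrative doubling time, not a property
# of any particular model), a tiny initial error keeps multiplying.
initial_error_mb = 0.001      # e.g. 1008.240 mb entered instead of 1008.239 mb
doubling_time_days = 2.0      # assumption for illustration only

for lead_days in (1, 2, 4, 6, 8, 10, 14):
    growth = 2 ** (lead_days / doubling_time_days)
    print(f"day {lead_days:2d}: error x{growth:7.1f}  -> ~{initial_error_mb * growth:.3f} mb")
```

By two weeks out the initial error has multiplied more than a hundredfold, and that's before counting all the other sources of error a real model has to deal with.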
Obviously I didn't even get into the biases that different models have, but I hope this gives a little more insight into the models and their limitations.
Jeff
http://orionweather.com