mgpetre wrote:I guess, in the vein of my question, I was just saying that by the point solar flares matter, we have definitely gotten the models down to a science. I appreciate the very thorough and informative answer. Are you saying that a certain amount of storm history is put into the model at initialization? I think that would definitely be a key to an accurate forecast. What about the question of how far-reaching the parameters are? I can see a global model outperforming a localized one for obvious reasons. Again, thank you Wthrman13.
Ah, good questions. To answer the first one, if I understand you correctly: yes, a certain amount of atmospheric history is inherent in the initialization, since the "first guess" field comes from a prior prediction by the same model. For example, in the case of the 0Z GFS (it's actually a bit more complicated than this, but it will suffice by way of illustration), the first-guess field is a 6-hour prediction from the previous GFS run (18Z, in this case). The real-world observations valid at or near 0Z are then blended with this first-guess state to produce the 0Z initial fields. The model is then stepped forward in time from this initialization, and so on. There are other, even more sophisticated techniques out there for initializing models, but this is the basic idea.
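The cycling described above can be sketched in a few lines. Everything here is a toy stand-in: the "model" just shifts values along a tiny grid, the blending uses a fixed 50/50 weight (real assimilation schemes such as 3D-Var or an EnKF weight by estimated error statistics), and all numbers are made up for illustration.

```python
def step_model(state, hours=6):
    # Toy "forecast model": shift the state one grid point per 6 hours.
    # Stands in for the full model dynamics (illustration only).
    shift = hours // 6
    return state[-shift:] + state[:-shift]

def analyze(first_guess, obs, obs_weight=0.5):
    # Blend the first-guess (background) field with observations.
    # Real systems weight by error covariances; a fixed scalar stands in here.
    return [(1 - obs_weight) * g + obs_weight * o
            for g, o in zip(first_guess, obs)]

# 18Z analysis from the previous cycle (made-up values)
state_18z = [1.0, 2.0, 3.0, 4.0]

# First guess for 0Z: the 6-hour forecast launched from the 18Z analysis
first_guess_0z = step_model(state_18z, hours=6)   # [4.0, 1.0, 2.0, 3.0]

# Observations valid at or near 0Z (made up for the example)
obs_0z = [2.1, 1.1, 1.9, 3.2]

# 0Z initial field: observations blended with the first guess
analysis_0z = analyze(first_guess_0z, obs_0z)

# The model then steps forward from this 0Z analysis, and the cycle repeats
forecast_6z = step_model(analysis_0z, hours=6)
print(analysis_0z)
```

The key point the sketch shows: the 0Z analysis is never built from observations alone; it always carries the memory of the previous forecast through the first-guess field.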
To answer your second question: limited-area models, such as the NAM, are generally "nested" within a global model, so that their boundaries are continually forced from outside by the global model's solution. In the case of the NAM, the GFS provides the boundary conditions, but the interior prediction is entirely the NAM's. Typically, a local or regional model is designed to run at a higher resolution (more detail) than the global model, so it will usually perform better than the global model in its particular region, though not always. Otherwise, there would be little point in running regional or local models.
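One-way nesting like this can also be sketched with toy numbers. Here a single diffusion pass stands in for the real dynamics of both models, the "global" grid has 7 made-up points, and the "regional" grid covers global points 2 through 4 at double resolution; the only real content is the control flow: each step, the regional interior evolves on its own, while its edge points are overwritten from the global solution.

```python
def smooth(field):
    # Toy "dynamics": one diffusion pass stands in for the real equations.
    out = field[:]
    for i in range(1, len(field) - 1):
        out[i] = 0.25 * field[i - 1] + 0.5 * field[i] + 0.25 * field[i + 1]
    return out

# Coarse "global" state on 7 grid points (made-up values)
global_state = [0.0, 0.0, 4.0, 8.0, 4.0, 0.0, 0.0]

# "Regional" model over global points 2..4 at double resolution (5 fine
# points), initialized by linear interpolation from the global grid
regional = [4.0, 6.0, 8.0, 6.0, 4.0]

for _ in range(3):
    global_state = smooth(global_state)   # global model advances
    regional = smooth(regional)           # regional interior advances on its own
    # One-way nesting: boundary points are forced by the global solution,
    # the way the GFS forces the NAM's lateral boundaries
    regional[0] = global_state[2]
    regional[-1] = global_state[4]

print(regional)
```

Note the coupling is one-way in this sketch: the regional solution never feeds back into the global one, which matches how the GFS drives the NAM's boundaries.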