GCANE wrote: Absolutely.
Intensity forecasting happens on a much finer spatial scale than what the global models are set up to resolve.
Hot towers firing off, dry-line convection, outflow boundaries, tilted vorts, tropopause heightening, etc. are just too fine-scale for the capability of global models.
I think they should create a sub-program in the globals that switches on much finer spatial and temporal resolution around a potential TC once it is identified. I am sure this would require more processing power, but it could be achieved with parallel processing.
This certainly makes sense and, from what I understand, such "rezoning" is an established practice in numerical modeling. I'd wonder, though, about the density of input data. Would there be enough data (not calculated values, but real data) to initialize the finer grid with relevant and timely information? Increasing the resolution by a factor of two requires four times as many grid points in a two-dimensional model. In 3D models like these, that would mean eight times as many points. (Hence, no doubt, your comment about the processing power, GCANE.) I guess that would be my main question: would there be enough data to justify or warrant the increase in computational complexity? My guess would be that extrapolating a coarse data field to finer scales wouldn't be good enough, but I certainly could be wrong.
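Just to put rough numbers on that processing-power point, here's a quick back-of-the-envelope sketch in Python (purely illustrative, not the specs of any operational model). The grid-point arithmetic is the same 4x/8x scaling mentioned above; the extra "compute" factor assumes an explicit scheme where the CFL condition forces the time step to shrink along with the grid spacing.

```python
# Rough scaling for refining a nested domain's resolution.
# Illustrative assumptions only, not any real model's configuration.

def point_multiplier(refinement: int, dims: int) -> int:
    """Grid points grow as refinement**dims (2x finer -> 4x in 2D, 8x in 3D)."""
    return refinement ** dims

def cost_multiplier(refinement: int, dims: int) -> int:
    """Rough compute cost: more points times a proportionally shorter
    time step (CFL-limited explicit scheme) ~ refinement**(dims + 1)."""
    return refinement ** (dims + 1)

for r in (2, 3):
    print(f"refine x{r}: {point_multiplier(r, 3)}x points (3D), "
          f"~{cost_multiplier(r, 3)}x compute with the shorter time step")
```

So doubling the resolution of a 3D nest is roughly 8x the points and more like 16x the work, which is why confining the fine grid to a small box around the storm (rather than refining the whole globe) is the only way this stays affordable.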