Sunday, June 08, 2008

I Have a Weather Prediction Theory

This isn't as pretentious or self-promoting as Lezak's Recurring Cycle (or LRC) theory.

I don't even have a cool name for it yet, or any cool graphics, and I can't stay up late enough to come up with any.

Maybe my readers can submit some possible names and graphics.

Here is my thesis.

Up until about 18 months to 2 years ago, weather forecasts in Kansas City were pretty accurate. In particular, KSHB's Lezak forecasts were the Gold Standard.

As far back as 1993, I could plan an entire outdoor wedding around Gary Lezak's forecasts without incident or disappointment.

But lately, they seem to get it wrong a LOT!

Here is my hypothesis in plain English.

I think that the detection tools that show meteorologists what is happening in the atmosphere have outpaced the modeling tools that tell them what that data means.

Way awesome 3D Doppler radar graphics and shit. But the computer models that try to analyze the proverbial flap of a butterfly's wings in Asia and project it into a hurricane half a planet away in the Gulf of Mexico seem to be a few generations behind the ability to display cool storm shit in HDTV.

They can SEE what is going on in more detail than ever before...they just don't know what the fuck it means anymore.

So what does that mathematical formula look like?
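The closest thing I can point at is Edward Lorenz's little three-equation convection model from 1963, which is where the butterfly effect idea came from in the first place. Here's a crude Python sketch of it. To be clear, this is a toy, not anybody's actual forecast model, and the lazy one-step integration is my own shortcut. But it shows the core problem: start two runs a hundred-thousandth apart and they end up in completely different "weather."

def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude Euler step of Lorenz's 1963 system (his classic parameters)."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x, y, z, steps=2000):
    """March the system forward and return where it ends up."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

a = run(1.0, 1.0, 1.0)        # the actual atmosphere
b = run(1.00001, 1.0, 1.0)    # the model's almost-perfect measurement of it

print("run a ends at:", a)
print("run b ends at:", b)
# The two end states bear no resemblance to each other, even though the
# starting error was one part in a hundred thousand.

Scale that up to a whole atmosphere and you see my point: sharper radar doesn't save you when a tiny error in the starting conditions blows up like that, and a tiny error is all they ever have.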

Sader? Spaulding? Bueller?

Help me out here.

5 comments:

Keith Sader said...

That could be part of the reason - more data/less information. I'm not sure what weather models they're using or how they're implemented.

It might be the simpler case of their being lied to by their software vs. actually running their forecast numbers through the old systems that were more accurate.

Parsimony is an evil mistress.

SmedRock said...

I think they may have relied too much on the software. They have two models they usually refer to on air, the European model and some other one. People, being lazy, have sat down and let the computer do the work. No longer do you need any knowledge of the theory behind the job they do.

Computers are only as smart as the programmers that made the software. Too much data? Nah. Not enough energy to actually verify the readings they have.

Just my $.02.

Satyavati devi dasi said...

This is just an idea, but could part of the issue be that yes, they have all this data, but the software they're using isn't taking into account all the new influences and changes in patterns since it was designed? I mean, with the ozone thing and global warming affecting everything from currents to jet streams, maybe the software is misinterpreting the data because it can't account for these things.

Is this basically what you just said?

I can't even come up with enough of a mathematical formula to balance a checkbook, but you would think with all this weather control shit I keep hearing misty rumours about that someone's got something more accurate somewhere.

But if 2+2 doesn't equal 4 anymore in terms of weather, it stands to reason that a program designed for when it did equal 4 isn't going to be accurate anymore.

Which I guess is a wordy way of saying I'm with you on this one.

Anonymous said...

I've been thinking about this all day. The best I can come up with is the "holy fuck, didn't see that coming" weather event, which really isn't all-purpose and doesn't break down into a smart little acronym. I will say it happens in behavioral data collection too.

Nightmare said...

Well, to quote my good friend at Shawshank:

"XO I do believe you are talking out of your ass"

I'm just saying...