Science! Strikes Again: Saving the Theory from the Data

An amusing headline: For 35 years, the Pacific Ocean has largely spared West’s mountain snow from effects of global warming. A “study” by “scientists” is used to explain what, in a saner world, might simply be stated as follows: “Western mountain snowpack shows no evidence of global warming over the last 35 years.”

In the article, we learn that models predict that snowpack in Washington’s portion of the Cascade Range, for example, should have fallen by 2% – 44% over the last 35 years, but in fact it has shown no significant decline. Now a crass, narrow-minded person, clearly not in the cool kids club, might leap to the conclusion that the data here contradicts the model, and therefore – you’re sitting down, right? – the model is wrong. The whole purpose and entire source of validation for a model is its predictions. You build a model hoping to capture some aspect of the real world. You use this model to make concrete, measurable predictions that can be checked against the real world, to see if your model is useful. If the facts don’t match the predictions, you throw out the model and start over. This is called science.
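That test is not complicated. Here is a toy sketch of it in Python, using only the numbers quoted above – the 2%–44% projected decline and the roughly flat observed trend come from the article; everything else is illustrative:

```python
# Toy version of "check the prediction against the data."
# Per the article: models projected a 2% to 44% decline in Cascade snowpack
# over 35 years; the observed trend was roughly flat (no significant decline).
predicted_decline_low = 0.02    # fractional decline, low end of model projections
predicted_decline_high = 0.44   # fractional decline, high end of model projections
observed_decline = 0.0          # "no significant decline" per the study

if predicted_decline_low <= observed_decline <= predicted_decline_high:
    print("Observation falls inside the predicted range: the model survives this test.")
else:
    print("Observation falls outside the predicted range: the model fails this test.")
```

The snowpack numbers land in the else branch, which is the whole point.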

Here, instead, the study invokes a cause not in the model. We know this cause was not in the model because, had it been included, the model would presumably have produced useful predictions.

“There were a lot of discussions within the department of the surprising stability of the western U.S. snowpack, because it went against the predictions,” said co-author Cristian Proistosescu, a postdoctoral researcher at the UW’s Joint Institute for the Study of the Atmosphere and Ocean.

The discussion did not, evidently, include the obvious conclusion required by basic science: our model is wrong. Nope, this inescapable conclusion is masked behind an appeal to additional causes. Natural variations in the Pacific Ocean kept the snowpack stable, it is asserted.

Stop right here: if your model needs to appeal to factors outside itself, factors not built into the model, that means your model is wrong. Call it incomplete if you want, but the short English word for that state where the model does not provide useful, validated predictions is ‘wrong’. Throw it out. Build a new model that includes the newly-discovered (!) causes, if you want, make some more predictions, and see what happens. But clinging to a model that’s been proven wrong by real-world data is pathetic, and patently anti-science.

It’s not just the Western U.S. mountains that fail to validate those models. It’s not like the hundreds of different climate models floating around have some sort of sterling track record otherwise, so that we’d lose predictive power if we just tossed them all. No, they all predict that the earth would be much warmer now than it actually is. The Arctic would be ice-free by 2000, then 2013, then 2016, then 2050. (Pro tip: always make your predictions take place out beyond your funding cycle, to mitigate the slim chance people will remember you made them by the time the next grant proposal needs filing.)

A slightly – very slightly – more subtle point: we all know there’s such a thing as ‘natural variations’ in all sorts of areas. In practice, especially when building models, natural variations are nothing more than a collective name for causes we don’t understand well enough to build into the model. Even to admit the existence of natural variations that affect the thing being modeled but are not included in the model is to admit the model is at best incomplete.

One might leave out potential causes on the assumption that, while they might theoretically affect predictions, in practice they are not material. When we say acceleration under gravity at the earth’s surface is 32 ft/sec^2, we leave out air resistance (and air pressure variations, and humidity, and no doubt a bunch of other things) because that formula has proven to be useful quite a bit of the time. Only in very fussy situations do we need something else, as long as we’re testing near the earth’s surface.
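A back-of-the-envelope sketch makes the point. The drag-to-mass ratio below is a rough, assumed figure for a dense, fist-sized object, not a measured value; the only claim is that for a short drop the simple formula and the fancier one barely disagree:

```python
# How much does ignoring air resistance matter for a two-second drop
# near the earth's surface?
g = 32.174      # ft/sec^2, standard gravity
t_total = 2.0   # seconds of free fall
dt = 0.001      # integration step, seconds

# Simple model: d = (1/2) * g * t^2, no air resistance at all.
d_simple = 0.5 * g * t_total**2

# Slightly fancier model: quadratic drag, integrated step by step.
k = 0.001       # assumed drag-to-mass ratio, 1/ft -- rough guess, illustrative only
v = d_drag = t = 0.0
while t < t_total:
    a = g - k * v * v    # net downward acceleration with drag
    v += a * dt
    d_drag += v * dt
    t += dt

print(f"no drag: {d_simple:.1f} ft, with drag: {d_drag:.1f} ft")
# The two answers differ by a couple of percent -- the "incomplete" model
# is plenty good enough here, and we know that because it has been tested.
```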

We know we can ignore some complexities only because we used our model to predict outcomes, measured those outcomes and found them good enough. To admit there are natural variations that render a model’s predictions useless is to admit that the phenomenon being modeled is beyond our skill as modelers. No amount of statistical sleight of hand can make this go away.

Another issue is the baseline question: this study considers 35 years of data. With few exceptions, the mountains of the Western U.S. have been there, experiencing snowpack and natural variations, for at least several hundred thousand times that long. This data covers well under a thousandth of one percent of the potential dataset.
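The arithmetic is a one-liner, for anyone who wants to check it; the ten-million-year figure below is simply “several hundred thousand times 35 years,” a conservative stand-in rather than a measured age:

```python
# Rough scale check: a 35-year record against mountains that have been
# accumulating and losing snow for roughly ten million years (assumed figure).
record_years = 35
mountain_age_years = 300_000 * record_years   # = 10.5 million years

fraction = record_years / mountain_age_years
print(f"{fraction * 100:.5f}% of the potential record")   # ~0.00033%
```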

Well? Is that enough? Can we justify any conclusions drawn from such a tiny sample? Can we say with any confidence what ‘average’ or ‘normal’ conditions are for snowpack in these mountains? The natural variations we know about include ice ages, glaciers and glacial lakes. Precipitation levels almost certainly vary wildly over thousands, let alone millions, of years. On what basis should we conclude that the snowpack should stay the same, grow, shrink or do anything in particular over a given 35-year period?

Enough. The monotonous similarity of these sorts of “studies” in their steadfast resistance to applying even a little basic science or common sense to their analyses tires me.

Author: Joseph Moore

Enough with the smarty-pants Dante quote. Just some opinionated blogger dude.

4 thoughts on “Science! Strikes Again: Saving the Theory from the Data”

    1. Thanks.

      That model-fitting-the-data thing only runs one way: a while back, I wrote on an item appearing in SciAm, where the presumed return of farmland to forest following the deaths of American Indians from European murder and disease, and the enslavement of Africans, was said to have produced a 7 ppm reduction in atmospheric CO2, which in turn caused the Little Ice Age. No mention that the subsequent increase in CO2 of 100 ppm seems not to have done much of anything. CO2 sensitivity is a big variable in all models; it’s mostly what you would be testing for – making predictions and measuring results – in order to get a good value. What happens instead is that minor changes that seem to map to high sensitivity are used, while major changes that don’t do much of anything are explained away: any day now, that huge upward swing in temperatures will happen! Just wait!
