Well, not really. But it’s a little like it, as follows:
Yesterday, Agellius put a link in a comment to this article, which is about the same long-term (as in 11,000+ year) temperature study I’ve posted about here and here. (Still have not come across a link to the study itself.)
Quick summary: A team compiled various temperature proxies, such as ice core gas concentrations and tree-ring and seashell growth, in order to estimate temperatures worldwide since the current interglacial period within the current ice age began about 11,300 years ago. This effort represents the best guess at how warm the climate was at various times since the ice sheets melted.(1)
This is fascinating stuff. You’d think, at least, you’d think if you were an educated amateur like me, that the results of such a study would become the base input of any climate model, that alleluias would ring in the climate modeling world, and everybody would drop everything to see how well their models *back predict* what *did happen*. Right? Because if your theory is that atmospheric CO2 is a major driver of earth’s temperature, and you build a model that predicts increasing temperatures as atmospheric CO2 increases, then – Voila! Here is a data set against which to test your theory! Atmospheric CO2 levels can be figured out from the CO2 trapped in ice cores. So – did lower levels of CO2 correspond historically to lower temperatures? Higher levels to higher temps? If not, then we’d presume your theory is *wrong* or, at the very least, in need of the kind of significant tweaking that tends to make the Baby Occam cry.
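Here’s a minimal sketch, in Python, of the kind of back-prediction check I have in mind – the numbers are placeholders I made up, not values from the study (which, again, I haven’t seen):

```python
# Sketch of a back-prediction ("hindcast") check: if CO2 drives temperature,
# reconstructed CO2 and reconstructed temperature should rise and fall
# together over the Holocene. All numbers below are placeholders, NOT
# values from the study.
import numpy as np

co2_proxy = np.array([260.0, 262.0, 264.0, 266.0, 270.0, 278.0])  # ppm, hypothetical
temp_proxy = np.array([0.30, 0.25, 0.20, 0.10, 0.00, -0.10])      # deg C anomaly, hypothetical

r = np.corrcoef(co2_proxy, temp_proxy)[0, 1]
print(f"correlation between CO2 and temperature proxies: r = {r:+.2f}")
# A near-zero or negative r over a stretch where CO2 rose is exactly the
# sort of result that should send the modelers back to the drawing board.
```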
Science works from what has happened to predict what will happen. When I say: iron melts at 1,538°C, what I’m saying is that, back in the day, we melted some iron and, according to our fancy thermometers, it melted pretty consistently right around 1,538°C, and therefore, according to our theory of uniformitarianism, it should melt right around there if you try it again today. Or if I say: anything reasonably heavy and not too funny-shaped that I drop from a reasonable height here on earth will accelerate at 32 ft/sec^2, all I mean is that that’s what has happened in the past when I dropped something heavy, like a metal ball, from a reasonable height, like the top of the Tower of Pisa.(2)
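To put a number on it – a back-of-the-envelope sketch, where the 180-foot drop is a round figure rather than a surveyed height:

```python
# Using the observed acceleration to predict a new observation:
# constant acceleration g gives fall time t = sqrt(2 * d / g).
import math

g = 32.0   # ft/sec^2, the observed acceleration near the earth's surface
d = 180.0  # ft, a round figure for the drop from the Tower of Pisa
t = math.sqrt(2 * d / g)
print(f"predicted fall time: {t:.1f} seconds")  # ~3.4 s; drop the ball and check
```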
The steps involved in fancier science are fancier as well, but it boils down to the same thing: as Feynman put it, first, we take a guess – we propose a theory that explains what we already see (the easy part). Then, we see if, using the theory, we can predict what we will see in some unusual or previously overlooked thing. The theory is proved, in the ancient sense of tested, by the match between the new observations and the predictions – the theory can be proved ‘false’, which in this sense means, effectively, unhelpful in predicting future observations. Or it can be proved ‘true’, which likewise means useful (or ‘skillful’) in making predictions.
It is absolutely fundamental to keep the stuff we know – the data or facts – separate from the things we’re guessing – the predictions or forecasts. They are not only not the same thing, but stand to each other as knowledge and hope, such that knowledge does not change, but the hope can stand or fall.
With that in mind, look at this graph from the article suggested by Agellius and linked above:
There are a whole bunch of objections to be made here,(3) but we’ll focus on one: this graph not only combines data with projections as if they are the same thing, but even uses the exact same weird color scheme and scale, so that, even if you think to look for where they pull the old switcheroo from facts to projections, you’d need a magnifying glass and psychic powers to figure it out. That scary-looking uptick at the end? That’s almost entirely *projections*. As the essay (no doubt sheepishly) admits:
It suggests that we are not quite out of the natural range of temperature variation yet, but will be by the end of the century.
Sooo – The actual data is within the range of ‘natural’ variation, but, if we kludge on the projections, *then* we can panic. It is as if someone took the body of an ape and sewed on a fish’s head – and then claimed to have found Aquaman.
Perhaps we should update the graph by projecting the next 85 years of this century based on what has happened over the first 15 years? You know, use the data instead of the projections, then make future projections based on the newly available data? If we did that, that whole scary-looking spike at the end magically vanishes! Only if the theory that has so far failed to predict anything correctly is, contrary to reason and experience, true, do we have that spike. Hmmm.
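For what it’s worth, here’s a sketch of that alternative – fit a trend to the observed years and extend it, instead of splicing on model output. The anomaly numbers are invented placeholders, not the actual record:

```python
# Fit a trend to the "observed" early years of the century and extend it,
# rather than splicing on model projections. Anomalies are invented
# placeholders, not the actual instrumental record.
import numpy as np

years = np.arange(2000, 2015)  # the first 15 years
rng = np.random.default_rng(0)
anoms = 0.30 + 0.0005 * (years - 2000) + rng.normal(0.0, 0.03, years.size)

slope, intercept = np.polyfit(years, anoms, 1)  # linear trend
proj_2100 = slope * 2100 + intercept            # extend to 2100
print(f"trend: {slope * 100:+.2f} deg C/century; "
      f"extrapolated 2100 anomaly: {proj_2100:.2f} deg C")
# A flat-ish observed record extrapolates to a flat-ish century -- no spike.
```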
Finally, note the range of the anomalies.(4) Even if the model were, despite all evidence, true, the projected change from somewhere in the 19th century to the end of the 21st century is slightly more than half a degree centigrade. Even the worst case, granting a theory thoroughly discredited by that little thing we like to call ‘Reality’, would be way milder than the 6 to 9 degrees that gets tossed around by panic-mongers. And, just a little tidbit: if thousands of years of warmer temperatures didn’t melt the icecaps starting back 9,500 years ago, it would seem we could stop worrying about that happening until some number of centuries sometime after 2100.
1. Not that they have completely melted yet – there are still a few glaciers and the Greenland and Antarctic ice sheets. 3.5 million years ago, those didn’t exist – that they do still exist is why we are still in an Ice Age. As far as we can tell, our current climate is an unstable ‘hiatus’ within a 3.5 million year (and counting!) age of ice sheets and glaciers and all-around unpleasant living conditions for civilized people. The ice sheets will most likely come back sometime between any minute now and a few tens of thousands of years from now – blink of an eye, geologically speaking. Astrophysicists think the current ice age will eventually end as the sun warms up, its hydrogen mix gradually getting leaner, until, in about 1 billion years, the sun gets hot enough – red giant hot! – to at least boil off the oceans and the atmosphere and maybe evaporate the planet entirely. Best case, the earth is a toasted cinder.
2. The science part in this is making careful, controlled observations – the iron used is pure enough, the temperature and thermometer are controllable enough, to get consistent results. Science is also a part of what we mean by ‘consistent results’ and ‘observation’ and so on. There are always definitions, theories and tools in science – applying them is how one makes ‘facts’. Facts, in turn, require an understanding of those definitions, theories and tools. As Mike Flynn points out, facts do not speak for themselves, but are rather part of a Greek chorus of science.
3. I especially like how we graph ‘anomalies’ rather than temperatures – this highlights how difficult it is to even imagine *a* number that represents the temperature *of the world* at any point. Already, we’re deep in theory: the ‘fact’ of the ‘anomaly’ is created by application of definitions, theories and tools we can’t see. There is nothing obvious about this graph. It’s also good to note that the entire range of the anomalies is about a degree and a half Celsius, while the thickness of the scribbles – which traditionally represents some measure of the uncertainty of the guesses – is almost a full degree itself. The uncertainty is as large as or larger than the claimed change in the anomaly. Finally,
Thermometer measurements only exist back to around 1860, so when climatologists reconstruct historical temperatures, they must use proxies. Tree rings, for instance, are useful because they are thicker during warm years when trees can grow faster.
Did they switch to thermometer data starting in 1860? If so, why? Why not continue to use the proxies, so we have a consistent measure? A scientist would want to be very clear on this point, not leave it to the reader to guess. Also, it’s curious to note that, once satellite measurements became available for the whole planet in the mid-’90s, the temperature stopped going up. Inquiring minds want to know: how does the proxy data look over the periods where we included or switched to thermometers and satellite measurements?
4. Perhaps the anomalies don’t map nicely to temperature? They don’t seem to, somehow, at least not obviously. What good would they be, then? This graph is less than helpful.
Continuing the tree ring proxies would have resulted in declining temperatures, contrary to the thermometer record. Hence, “hide the decline” by blending the thermometer record onto the tail end of the proxy record. Recall, too, that while warmer/colder temperatures may affect tree ring width, so do rainfall, shade, and whether a deer took a dump near the roots.
Thanks. And that’s the other thing: is that proxy study really claiming to determine global temperature (however defined) 10,000 years ago to within a half degree or so – from the likes of tree rings? When you mentally map the 1.3-degrees-over-a-century rise that is the supposed source of all this panic against the slop inherent in such proxies, it vanishes in the noise – there could have been any number of century-long spikes up or down of the magnitude of a degree or two – they would be like minnows swimming through our cod nets, to keep the sea life analogy limping along.
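That attenuation is easy to simulate – a sketch with invented numbers, assuming the reconstruction behaves roughly like a multi-century moving average:

```python
# If the reconstruction effectively averages over ~400-year windows, a
# one-century spike of 1.5 deg C is flattened to a fraction of itself.
# All numbers are invented for illustration.
import numpy as np

temps = np.zeros(2000)   # 2000 years of perfectly flat "true" temperature
temps[1000:1100] = 1.5   # a hidden one-century spike of +1.5 deg C

window = 400             # assumed effective proxy resolution, in years
smoothed = np.convolve(temps, np.ones(window) / window, mode="same")
print(f"true spike: {temps.max():.2f} deg C; "
      f"after smoothing: {smoothed.max():.2f} deg C")
# Prints 1.50 vs roughly 0.38 -- the minnow slips right through the net.
```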
This reminds me of the ice core graphs I’ve seen, where CO2 and dust concentrations are graphed out over time – the thing that jumps out is that, the older the date, the smoother the graph. They are quite jagged over the last few decades, but hill-like once you get back a few centuries. This suggests to the casual observer that the data is less certain the farther back you go – there are more holes, more conflicts, more fuzzy edges. Nothing wrong with that, that’s the real world for you. Just own it.
Those precisions are usually in reference to the parameter estimates, not to the temperatures themselves. Confidence intervals are much narrower than prediction intervals. Also, they take no account of accuracy, only precision. You can have a very precise estimate of a totally wrong value, much as an archer may group his arrows tightly but far from the bull’s eye.
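The archer analogy can be simulated directly – a sketch with invented numbers:

```python
# A precisely wrong estimator: arrows grouped tightly, far from the
# bull's-eye. Numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
true_value = 0.0                                   # the bull's-eye
shots = rng.normal(loc=2.0, scale=0.05, size=100)  # biased by +2, very precise

estimate = shots.mean()
half_width = 1.96 * shots.std(ddof=1) / np.sqrt(shots.size)  # ~95% CI
print(f"estimate: {estimate:.2f} +/- {half_width:.2f} (true value: {true_value})")
# The interval is tiny (high precision) yet nowhere near the true value
# (low accuracy) -- and it says nothing about the bias at all.
```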
Thanks. What is the relationship between an ‘anomaly’ and a ‘temperature’ as used in such studies? I’m assuming it can’t be very direct – in my dimly lit way, I assume the anomaly arises out of the statistical analysis of the data, and so is not data (temperatures) but rather a mathematical property of the data under a certain regime of analysis. I want – and I know I can’t have it – some straightforward way to relate the anomaly to temperatures. What I’m missing in my head (well, at least some of what’s missing in my head) is an understanding of what, precisely, such an analysis is telling us – which I’d need to understand what the graph is telling us.
In 5 years, all but one of the kids will be through college – I like to imagine I’ll have more time for math then. Yeah, right.
The whole process of moving from data that’s too messy to get much out of by inspection through the various steps required by mathematical analysis, and the nature and degree of the resulting uncertainty – I’m fuzzy, to put it kindly. But I can at least note the logical leaps from one set or type of thing to another set or type.
Suppose you have two locations, say Anchorage AK and New Orleans LA. Each has a different characteristic temperature. In order to distinguish trends, one calculates the residuals. (Why climate dudes call them ‘anomalies’ I don’t know.) The residual of X is (X-mean). So if the temperature is the average value, the residual/anomaly is “0”. If it is one degree above average, the residual is +1, and so on. This enables us to compare the trend at Anchorage to the trend at New Orleans independently of the actual temperatures.
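For instance – a sketch of that arithmetic, with made-up temperatures for the two cities:

```python
# Residuals ("anomalies") put stations with very different characteristic
# temperatures on a common footing. Temperatures below are invented.
import numpy as np

anchorage   = np.array([-8.0, -7.5, -7.0, -6.0])  # deg C, hypothetical
new_orleans = np.array([20.0, 20.5, 21.0, 22.0])  # deg C, hypothetical

anch_resid = anchorage - anchorage.mean()
nola_resid = new_orleans - new_orleans.mean()
print("Anchorage residuals:  ", anch_resid)
print("New Orleans residuals:", nola_resid)
# Both print [-0.875 -0.375  0.125  1.125]: identical warming trends,
# even though the raw temperatures differ by nearly 30 degrees.
```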
Cool, thanks. That’s what I thought was going on, on some level – I just wasn’t sure that’s what they were charting (although in retrospect, what else could they have been charting? The way people talk, it is as if there’s this magic single number called Global Temperature, which is nonsensical).
But this raises the question of weighting – until the advent of global satellites, you’d have been determining residuals according to actual physical thermometers, which cannot be assumed to be evenly or representatively distributed…
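One common (if crude) fix is to weight each station’s residual by the area it represents, e.g. by the cosine of its latitude – a sketch, with invented stations and anomalies:

```python
# A plain average over-counts regions crowded with thermometers. One
# crude fix: weight each station's anomaly by cos(latitude), a stand-in
# for the area it represents. Stations and anomalies are invented.
import numpy as np

lats_deg = np.array([61.2, 30.0, 51.5])  # Anchorage, New Orleans, London
anoms    = np.array([1.2, 0.3, 0.5])     # deg C anomalies, hypothetical

weights = np.cos(np.radians(lats_deg))
weighted = np.average(anoms, weights=weights)
print(f"unweighted mean: {anoms.mean():.2f} deg C; "
      f"area-weighted: {weighted:.2f} deg C")
# The high-latitude station counts for less, pulling the mean down.
```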
http://judithcurry.com/2015/02/22/understanding-time-of-observation-bias/ is useful.