# On Method

Following up on the post On Data, with another point. Again, I’m mostly thinking out loud here, trying to give some vaguish ideas more solid form.

Sometimes, using science, we want to know something that can be addressed directly: how tall somebody is, what is the melting point of water ice under normal conditions, how much volume a pound of pure iron occupies at room temperature, and so on.* In such cases, we pull out the tape measures, the thermometers and the displacement tanks, and have at it.**

But this is often not the case, for practical and moral reasons. How far away are the Pleiades? We can’t just take out our 100 light year tape measure; we can’t even take out our surveying equipment and triangulate (at least not very simply, although in the end, that is pretty much what we do). How long can an unprotected man survive in the vacuum of space? We don’t just throw a few dozen naked guys out the ISS airlocks and grab a stopwatch. Apart from the logistics, it isn’t morally permissible (although the speed at which the moral basis for such restrictions is being cast aside is truly breathtaking).

In these more complex cases, we need to get fancier in our methods – to work around the difficulties of a direct approach, to avoid doing anything immoral, or both. Two points, from my ‘educated layman’ approach to assessing the claims of science:

1. Complex approaches almost invariably add uncertainty beyond that of the simple, direct approach – the results of a multi-step method are unlikely to be as certain as the results of a more direct method.
2. No matter how convoluted the approach to getting an answer may become, it is essential to keep in mind what the direct approach to the question would look like, so that the logic the complexity is standing in for doesn’t get lost.

Let’s take measuring the distance of stars. The simplest, most direct way to determine the distance to something you can see far off is by measuring parallax – the apparent shift of a star against the more distant background when viewed from two different locations, such as opposite sides of Earth’s orbit.

Turns out the most direct practical approach to determining distances to stars is not very easy. It was only in fairly modern times (1838, by Friedrich Bessel) that it was first done successfully. And it only works for comparatively close stars – out to roughly 1,000 light years – beyond which the tiny parallax falls below the accuracy of the instruments. But the Milky Way is about 100,000 light years across, meaning distances to only the comparatively microscopic number of stars very ‘near’ us can be measured with parallax. The vast majority of stars in the Milky Way are tremendously out of range for parallax, not to mention any objects outside our galaxy, which are way, way, WAY farther (technically speaking) than stars in our own galaxy.
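To make the geometry concrete, here’s a minimal sketch of the parallax arithmetic (the function name and sample numbers are mine, for illustration): a star’s distance in parsecs is simply the reciprocal of its annual parallax in arcseconds – which is where the parsec comes from in the first place.

```python
def parallax_distance_pc(parallax_arcsec):
    """Distance in parsecs from annual parallax in arcseconds: d = 1/p."""
    return 1.0 / parallax_arcsec

LY_PER_PARSEC = 3.2616  # light years per parsec

# Proxima Centauri's parallax is about 0.768 arcseconds:
d = parallax_distance_pc(0.768)
print(d, d * LY_PER_PARSEC)  # ~1.30 pc, ~4.25 ly

# A star 1,000 light years out shows a parallax of only ~3 milliarcseconds,
# which is why the method runs out of steam so quickly:
p = 1.0 / (1000 / LY_PER_PARSEC)
print(p * 1000)  # ~3.26 milliarcseconds
```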

So, how do we (there’s that ‘we’ again!) figure out how far away something like, say, Andromeda, is? We start with what we know pretty well: the distances of nearby stars determined by parallax. Then it starts getting fun. To finish this post in my lifetime, we’ll just talk about the method used for the next closest objects: standard candles.

Turns out if you know the absolute brightness of something, say, a certain type of star, then you can tell how far away one of them is simply by observing its apparent brightness and applying the inverse square law. If a star of a certain class has an absolute brightness of 1 at 1 parsec distance, and a star of that class is observed to have an apparent brightness of 0.25, then, by the magic of math, we know it is 2 parsecs distant.
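The arithmetic in that example can be sketched in a few lines (a toy calculation, not any astronomer’s actual pipeline – the function name is mine):

```python
import math

def standard_candle_distance(apparent, absolute=1.0, reference_pc=1.0):
    """Inverse square law: apparent brightness falls off as 1/d^2,
    so d = reference * sqrt(absolute / apparent)."""
    return reference_pc * math.sqrt(absolute / apparent)

# The example from the text: absolute brightness 1 at 1 parsec,
# observed apparent brightness 0.25 -> 2 parsecs.
print(standard_candle_distance(0.25))  # 2.0
```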

At least we strongly suspect it is, so strongly we take it for granted, unless somebody can come up with a good reason not to. What is important to note here is that the usefulness of standard candles is the result of 1) getting accurate distances to some of them that lie within 1,000 light years of us using parallax; 2) measuring their luminosity correctly; and, most of all, 3) making sure that the class to which they belong is clearly defined so that we can say for sure whether an observed star is or isn’t in the class.

To sum up: we try to measure the distances of stars as directly as we can. Then we devise a surrogate method using standard candles that allows us to estimate the distances to stars and other objects we cannot measure directly. (And from there, we develop a bucket of other surrogate measures for even trickier/farther away objects.)

The point here: I think it is critically important to keep in mind what it is we are trying to measure, and be aware of when we are measuring directly and when we are using surrogates. Some surrogate methods are logically very tight, such as standard candles, where the assumptions and steps stand up to intense scrutiny very well (although there is this – eternal vigilance!).  Others, such as asserting we are measuring something about the human population at large by studying a group of WEIRDs, couldn’t stand up to the lightest breeze.

Finally, I like to state the method of approaching a science question assuming the most direct possible measurements can be made – that way, I can more easily see when we’re not using them. It is ubiquitous in the soft sciences to pull a switcheroo – to claim to be measuring one thing (bias, racism, and so on) using surrogate measures that cannot be made to yield such information. But even in the firmer sciences, the temptation to try to get statistical blood from a data set of turnips is constant and ill-resisted.

* in a comment on the last post, TOF points out that even such seemingly straight ahead cases tend to get messy – instruments and methods in the real world refuse to behave as cleanly as theory wishes they would. Our level of certainty in any given measurement is never 100% – truth and accuracy are not really the same thing.

** and, after a few dozen or thousand measurements and maybe some fancy ‘sum of least-squares’ analysis I may have understood years ago, we come up with a value we can live with – everything in science is done to a ‘close enough’ standard – because there isn’t any other in a contingent world.
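For the simplest case – repeated measurements of one quantity – the least-squares answer is nothing fancier than the arithmetic mean (a toy illustration; the readings are invented for the example):

```python
# Toy illustration: the least-squares estimate of a single true value from
# repeated noisy measurements is just their arithmetic mean.
# (Invented readings: deviations of a melting-point measurement from 0 deg C.)
readings = [0.02, -0.01, 0.03, 0.00, -0.02, 0.01]
estimate = sum(readings) / len(readings)
print(round(estimate, 3))  # 0.005 - a value we can live with
```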

## Author: Joseph Moore

Enough with the smarty-pants Dante quote. Just some opinionated blogger dude.

## 6 thoughts on “On Method”

1. I’m flattered, but I’m just an amateur futzing around the edges – you actually know what you’re talking about.

I’ve got to read your stuff more often – I’ve only read a couple of your guest posts on Dr. Briggs’s blog up to now (missed this – was it there?)

1. theofloinn says:

This is a frequent problem. You cannot measure the tensile strength of a steel ingot without destroying it; but you can measure its Rockwell hardness, and the two measures are related. Ditto, measuring viscosity for degree of polymerization, radiation backscatter for density of coal in a bunker, et al.
If the surrogate is related to the desired measure by Y = f(X),
then the variability of the surrogate Y is approximately
σ(Y) ≈ f′(μ(X))·σ(X), where f′ is the first derivative of f evaluated at the mean of X.
With each step away from the original metric, the statistical uncertainty of the model will increase.
An example is here: http://tofspot.blogspot.com/2012/04/lets-be-precise.html
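A quick numerical check of that formula (with an invented f and made-up numbers), comparing the delta-method estimate to a brute-force simulation:

```python
import math
import random

# Delta-method sketch: if Y = f(X), then sigma(Y) ~ f'(mu(X)) * sigma(X).
# f(x) = x**2 is a hypothetical stand-in for a surrogate relationship.
mu_x, sigma_x = 10.0, 0.5
f = lambda x: x * x
f_prime = lambda x: 2 * x

sigma_y_formula = f_prime(mu_x) * sigma_x  # 2 * 10 * 0.5 = 10.0

# Check against a simple Monte Carlo simulation:
random.seed(42)
ys = [f(random.gauss(mu_x, sigma_x)) for _ in range(100_000)]
mean_y = sum(ys) / len(ys)
sigma_y_sim = math.sqrt(sum((y - mean_y) ** 2 for y in ys) / len(ys))
print(sigma_y_formula, round(sigma_y_sim, 1))  # both near 10
```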

1. There you go with the math! (Hey, at least I recognize all the symbols…) This quantifies the uncertainty added to the results by the use of a surrogate method, where the surrogate’s relationship to the target metric is expressed as a function of the target metric? That’s beautiful, and says what I was afraid to say: that *every* methodological step away from direct measurement adds to the uncertainty. I was hesitating because I was imagining a case where the difficulty of a particular direct measurement seemed to allow for the possibility of greater accuracy through a surrogate – now, I notice the logical problem: it’s not logically possible for a surrogate method to yield better results than the best results you can obtain directly, because how could ‘better’ be determined without direct measurement?

I’ve often thought that astronomers and astrophysicists had a cushy job in that, if they’re off an order of magnitude or so, nobody dies. Industrial engineers screw up, and people die.

You might appreciate this: I worked at my dad’s sheet metal fabrication shop in my youth. I could eyeball 2 or 3 hundredths of an inch using a tape measure; the guys who were good could get a lot closer than that – under a hundredth of an inch. We verified using Vernier calipers, which were supposedly accurate beyond human eye resolution. We were almost certainly more confident than the accuracy of our equipment would realistically allow – but our customers loved us, because they almost never needed anything that close, yet by habitually shooting for it, we were *way* within tolerances for a semi-precision shop.

Formative experience.