Science, Medicine & Me (or You)

One of the problems, or challenges, if you prefer, with medicine is that few people who become doctors do it because they love science. They become doctors typically because they want to help people, a very fine reason. But this means that when the situation calls for a more scientific examination of the evidence before them, or of the value or pertinence of this or that finding or practice, your doctor is likely operating at a disadvantage.

Ultimately, tradition, law, and their insurance companies all but force them to stick to conventional, approved approaches to everything. In general, this is not a bad thing – you certainly don’t want your doctor freestyling it when it is your health on the line. If doctors everywhere treat a given constellation of symptoms in a particular way, that would probably be the place to start. But it’s not in itself science.

I recall a couple decades ago that every time we took a kid to the pediatrician – and we loved our pediatrician – she would feel compelled to advise us not to let the little ones sleep with us, family bed style. I can just hear her teachers, or her professional bulletins or her insurance company telling her it was best practice, that kids die every year smothered in a bed with an adult, and just don’t do it. Now, of course, pediatricians in the US (it was never policy in the rest of the world) have come to realize that the benefits to both the child and the parents of having the child in the parent’s bed far outweigh the minuscule risks. Such risks effectively disappear if the child and parents are healthy and the parents sober, while the sense of comfort and attachment gained by the child and the extra sleep gained by the parents are a serious win-win.

Now a scientifically inclined person might ask about the data and methodology behind the claim that hundreds of thousands of years of natural selection had somehow gotten the whole baby sleeps with parents thing wrong.  Maybe it has – but the issue should have at least been addressed. But it wasn’t, in America, at least up until a few years ago. So every American pediatrician was expected to toe the official line, and our doctors did. For, what if you hadn’t advised parents against ‘co-sleeping’ and a baby died? You’d be asking for a lawsuit.

Another example I’ve mentioned before is salt intake. This one is a little different, in that for some people, there seems to be a fairly strong correlation between salt intake and blood pressure, so at least being concerned about it isn’t crazy.

For most people, however, there is little or no correlation between salt intake and blood pressure, at least within realistic levels of consumption. In one of the earliest studies, rats were given the human equivalent of 500g of salt a day, and their blood pressure shot right up! But humans tend to consume around 8.5g of salt a day. Um, further study would be indicated?

The science would seem to support some degree of caution regarding salt intake for people with high blood pressure. Instead, what we get are blanket recommendations that everybody – everybody! – reduce salt intake. It will save lives! Medicine cries wolf. People learn to ignore medical advice. Further, Medical Science! fails to consider what it is asking for – a complete overhaul of people’s diets. Few real people are going to do this without serious motivation. Wasting ammo on a battle not worth winning.

Again, if doctors were essentially scientists attracted to medicine for all the opportunities for scientific discovery human health presents, such errors and poor judgement might be more limited. But doctors became doctors to help people, not to debate scientific findings with them. They want to DO something. Thus, conventional medical practice is full of stuff to do for every occasion. Whether or not there’s really any science behind it is not as important, it seems, as developing practices to address issues so that medicine itself can be practiced. Clearly, for the average doctor, having *something* to do is better than having nothing to do, even when that something isn’t all that well supported or even understood.

These thoughts are on my mind because of all the trouble I’m having with blood pressure medicines at the moment. Since there’s an obvious trade-off here – somewhat higher blood pressure with a higher quality of life versus acceptably lower blood pressure but with lower quality of life – I decided I needed to do a little basic research. Here’s where I’m at after a very preliminary web search:

What I’m looking for, and have so far failed to find, is a simple population level chart, showing the correlation between blood pressure and mortality/morbidity. Of course, any usefully meaningful data would be presented in a largish set of charts or tables, broken out by such variables as age, sex, and body mass index.  But I would settle at this point for any sort of data at all, showing how much risk is added by an additional 10 points or 10 percent, or however you want to measure it, above ‘normal’ blood pressure.

For example, I’m a 60 yr old man. Each year in America, some number of 60 year old men drop dead from high-blood-pressure-related illnesses. OK, so, the base question is: at what rate do 60 year old men drop dead from high blood pressure related diseases? Let’s say it’s .1% (just making up numbers for now) or 1 out of every 1,000 60 year old men. That’s heart attack and stroke victims, with maybe a few kidney failures in there, severe enough to kill you, a 60 year old man.

Now we ask: what effect does blood pressure have on these results? Perhaps those 60 year old men running a 120/80 BP die at only a .05% rate – one out of every 2,000 drops dead from heart attacks, strokes or other stray high BP related diseases. Perhaps those with 130/90 (these results and possible ages would be banded in real life most likely, but bear with me) die at a .09% rate, while those with 150/100 die at a .2% rate, and those above 150/100 die at a horrifying 1% rate, or 10 times the rate of the old dudes with healthy blood pressure. These numbers would all need to average out to the .1% across the population, but a high degree of variability within the population would not be unexpected.
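The constraint in that last sentence – the per-band rates have to weight-average back to the overall .1% – is easy to check. Here is a toy sketch in Python; the death rates are the made-up figures above, and the band sizes (what share of 60 year old men fall in each band) are also invented, reverse-engineered so the blend lands near .1%:

```python
# Each entry: (BP band, hypothetical annual death rate, hypothetical share of population)
# All numbers are illustrative, not real epidemiology.
bands = [
    ("120/80 or below", 0.0005, 0.76),
    ("130/90",          0.0009, 0.09),
    ("150/100",         0.0020, 0.12),
    ("above 150/100",   0.0100, 0.03),
]

# The population-wide rate is the share-weighted average of the band rates.
overall = sum(rate * share for _, rate, share in bands)
print(f"blended rate: {overall:.4%}")  # comes out near the assumed 0.1%
```

The point of the exercise: once you fix the overall rate and the band sizes, the band rates can vary a lot, but they can’t vary freely.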

Or maybe something else entirely. But I have yet to find such charts. I’ve found interesting tidbits, like FDR’s BP in 1944, when his doctor examined the by-then very ill president: it “was 186/108. Very high and in the range where one could anticipate damage to ‘end-organs’ such as the heart, the kidneys and the brain.” So I gather BP in that range is very bad for you, or indicates that something else very bad for you is going on (FDR had a lot of medical issues and smoked like a chimney).

Then there’s this abstract, suggesting in the conclusion that my quest is going to be frustrated:



Quantitative associations between prehypertension or its two separate blood pressure (BP) ranges and cardiovascular disease (CVD) or all-cause mortality have not been reliably documented. In this study, we performed a comprehensive systematic review and meta-analysis to assess these relationships from prospective cohort studies.


We conducted a comprehensive search of PubMed (1966-June 2012) and the Cochrane Library (1988-June 2012) without language restrictions. This was supplemented by review of the references in the included studies and relevant reviews identified in the search. Prospective studies were included if they reported multivariate-adjusted relative risks (RRs) and corresponding 95% confidence intervals (CIs) of CVD or all-cause mortality with respect to prehypertension or its two BP ranges (low range: 120–129/80–84 mmHg; high range: 130–139/85–89 mmHg) at baseline. Pooled RRs were estimated using a random-effects model or a fixed-effects model depending on the between-study heterogeneity.


Thirteen studies met our inclusion criteria, with 870,678 participants. Prehypertension was not associated with an increased risk of all-cause mortality either in the whole prehypertension group (RR: 1.03; 95% CI: 0.91 to 1.15, P = 0.667) or in its two separate BP ranges (low-range: RR: 0.91; 95% CI: 0.81 to 1.02, P = 0.107; high range: RR: 1.00; 95% CI: 0.95 to 1.06, P = 0.951). Prehypertension was significantly associated with a greater risk of CVD mortality (RR: 1.32; 95% CI: 1.16 to 1.50, P<0.001). When analyzed separately by two BP ranges, only high range prehypertension was related to an increased risk of CVD mortality (low-range: RR: 1.10; 95% CI: 0.92 to 1.30, P = 0.287; high range: RR: 1.26; 95% CI: 1.13 to 1.41, P<0.001).


From the best available prospective data, prehypertension was not associated with all-cause mortality. More high quality cohort studies stratified by BP range are needed.

Ok, so here is some information. Let’s chart it out as best we can. Here is the diagnostic banding used by the medical profession here in the US. I note it is unadjusted for age or anything else, which is fine, got to start somewhere:

  • Normal blood pressure – below 120 / 80 mm Hg.
  • Prehypertension – 120-139 / 80-89 mm Hg.
  • Stage 1 hypertension – 140-159 / 90-99 mm Hg.
  • Stage 2 hypertension – 160 / 100 mm Hg or higher.

The meta-study above further divides the prehypertension range into a high and low as follows:

  • Low Prehypertension – 120–129/80–84 mmHg
  • High Prehypertension – 130–139/85–89 mmHg

This particular study does nothing with Stage 1 and 2 hypertension – too bad. But it’s mostly those prehypertension numbers I’m worried about personally. Anyway, here’s what we’ve got so far.

BP graph 1

We will here ignore what looks like a bit of statistical hoodoo – we’re blending different studies, calculating p-values and confidence intervals for the combined results – um, maybe? Perhaps if Mr. Briggs or Mr. Flynn drops by, they can give a professional opinion. Me, I’m just – cautious. So, what’s this telling us?

If I’m reading it correctly – not a given by any stretch – we’ve determined the total relative risk or RR (a term of art, but it sorta means what it seems to mean) at the base state and the three partially overlapping prehypertension states based on both systolic and diastolic BP ranges, on both an ‘All Causes’ and a cardiovascular diseases basis. What this appears to say is that a meta-analysis of 13 studies of nearly a million people over several decades shows that your risk of illness from any cause increases .03 RR points, or 3% over the base value, if your BP runs a little high, but that your risk of cardiovascular disease increases 32%. Which doesn’t exactly make sense if one assumes cardiovascular diseases are part of ‘All Causes’ – and why wouldn’t they be? – unless slightly high BP somehow reduces the sum of all other risks. Also, the analyses run over the two sub-ranges of low and high prehypertension do not look like they could possibly add up to the values over the entire prehypertension range – which could well be an artifact of the statistical analysis used. If that is the case, does not logic indicate that the results are quite a bit less certain than the p-values and confidence intervals would suggest? Again, I am very much an amateur, so I could be a million miles off, but these are the questions that occur to me.
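For what it’s worth, the standard fixed-effects pooling the abstract mentions works roughly like this: each study’s relative risk is converted to the log scale, weighted by the inverse of its variance (recovered from the width of its 95% confidence interval), and the weighted average is exponentiated back. A sketch, not the paper’s actual computation, with invented per-study numbers (NOT the 13 real studies):

```python
import math

def pooled_rr(estimates):
    """Fixed-effects inverse-variance pooling of relative risks.

    estimates: list of (rr, ci_low, ci_high) tuples.
    The standard error is recovered from the 95% CI width on the log scale.
    """
    num = den = 0.0
    for rr, lo, hi in estimates:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # 95% CI spans ±1.96 SE
        w = 1.0 / se**2                                   # inverse-variance weight
        num += w * math.log(rr)
        den += w
    return math.exp(num / den)

# Three invented studies, purely to show the mechanics:
studies = [(0.95, 0.80, 1.13), (1.10, 0.92, 1.31), (1.02, 0.90, 1.16)]
print(round(pooled_rr(studies), 3))
```

Note that the pooled figure is a weighted average, so there is no reason to expect the two sub-range results to “add up” to the whole-range result – different studies contribute different weights to each pool.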

The critical piece missing for my purposes: what scale of risk does the RR here represent? A 32% increase in a .01% chance of Bad Things Happening is hardly worth thinking about; a 32% increase in a 20% risk of Bad Things is a whole ‘nuther kettle of fish.
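To make that concrete: the 1.32 RR is from the abstract above, and the two baselines are the made-up figures from this paragraph. The arithmetic is trivial, which is rather the point:

```python
rr = 1.32  # 32% increase in CVD mortality, per the meta-analysis

# The same relative risk on two very different (hypothetical) baseline risks:
for baseline in (0.0001, 0.20):  # a 0.01% baseline vs a 20% baseline
    absolute_increase = baseline * (rr - 1)
    print(f"baseline {baseline:.2%}: extra absolute risk {absolute_increase:.4%}")
```

On the tiny baseline the extra absolute risk is a few thousandths of a percent; on the large one it is several percentage points. Same RR, wildly different stakes.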

I’m about researched out for the moment, will continue to google around for more information when I get a moment.


Is it obvious enough that I’m a LITTLE COMPULSIVE? Just found this, at the Wiley Online Library: 

BP chart 2

Don’t know if these are annualized (per year) numbers or a total across the entire age range, and that ‘all significant’ part worries me a little, but: this seems to be saying that I, a 60 year old man, could expect a 1.8% chance of cardiovascular disease (however that’s defined) if my systolic blood pressure falls between 120 and 139 – or, more important to my purposes, only a minuscule amount more risk than if my BP were the more desirable 120.

This is in line with what one would expect from the data in the previous chart.

Still a lot more work to do here.


Friday Flotsam

1. Zuckerberg. Ah, Zuckerberg. Not a big fan of armchair psychology unless it’s me that’s doing it. So, grain of salt and all that.

Over the years, I’ve run into a number of people in my position: working with techies without being a techie. People in sales, PR, management, even a retired corporate psychologist. It’s remarkable how the discussion will eventually, usually pretty quickly, get around to the same issue: the blindness of successful techies to how normal people think and react. Stereotypes get that way because they’re so often accurate.

If I have a big Theory of Life, it might be described as Filter Theory: with greater or lesser intent, people are sorted and assigned roles according to filters. Nobody becomes a cop, for example, unless he can tolerate lots of rules and bureaucracy and doesn’t shy away from the threat of violence. The vast majority of people, it seems to me, would not make very good cops, at least according to the current job description. We find common denominators across all sorts of otherwise different people if they share a profession. (1) Nothing too profound here, just an observation to keep in mind.

Zuckerberg photo
Our once and future robot overlord. 

Nobody can become successful in computer technology unless he can tolerate sitting in front of a screen for hours every day and stay focused on increasingly arcane minutia. People with a high need for human interaction need not apply. In fact, finding human interaction baffling or unpleasant would tend to drive people toward careers where they can be successful without having to deal too much with other human beings.

Further, there are kinds of insanity that result in sleeping in a cardboard box or padded cell; there are also kinds that result in becoming CEO or sales leader. In the case of tech, there are many, many really good guys who are aware on some level that they’re not very good at picking up what other people are feeling or thinking. These folks tend to be that sort of shy geek that is easy to love – and who rarely rises much in the hierarchy.

Then there are those who, if not out and out sociopaths, are at least blissfully unaware of how other people think and react. They just assume other people are stupid or ignorant. They are confident that things would go so much better if only they were in charge. In a tech environment, these people tend to become management. Sometimes – woe to us! – they even come up with a good enough idea that they found a company or 3.

Thus, we get the spectacle of Zuckerberg. I believe he really, truly does not get how hopelessly arrogant and frankly stupid he looks to normal people. The most terrifying aspect: he’s rich enough to have gotten away with it so far. His ego is probably utterly impenetrable. He is absolutely sure the only problem here is that everybody else is stupid.

I passionately hope somebody finds a way to put him in jail for a year or two. That’s about the only hope we have of getting through to these fools. It’s a slim hope, but it’s about all we’ve got.

2. A discussion of this article took place on this blog. Here we have Science! in all its glory: some sample of people in nations around the world are asked, using a variety of ‘instruments’ no doubt, about how ‘religious’ they are and how ‘happy’ they are. Then, tossing all this ‘data’ in a blender, we are called to conclude that the more religious the people in an area are, the more unhappy the people in that area will be.

Where to even start? Note first of all that it’s not claimed that it’s the same people – in other words, one set of people might be very religious and happy, while another set, let’s say a bigger set, is mildly irreligious and very miserable. The average – whatever that might mean! Average of what, exactly? – might show relatively high religiosity on average and relatively high misery on average, but miss entirely *who* was happy and who was miserable.

Really, too much stupidity to be sorted through. Let’s laundry-list this thing, at least the high points:

  • Reification. To plot the graphs shown, you would need *numbers*. Happiness, sadness, religiosity are NOT in ANY WAY numerical. Nobody is 0.7 happy, nor 28.334 sad, nor 87% religious. Do not pass go until you understand this. It is simply nonsense to assign numbers to responses on a poll and act like you can then add them up and perform math on them. Simple and complete nonsense. Cooking up an ‘instrument’ that forces people to give numerical answers doesn’t magically make the thing numerical.
  • Polls. Undefined terms. So some undergrad needing extra credit shoves a poll into somebody somewhere who has time to answer polling questions, and asks something like: on a scale of 1 to 10, how happy are you? Somebody says 8. Somebody else says 6. Yet another person says 3. Well? Who is happier? WE DON’T KNOW!!!! Happiness is not numerical, and, even if it were, 3 people will each have his own unique and possibly mutually exclusive ideas of what happiness means.
  • Self reporting. In America, one routinely asks ‘how you doing?’ and routinely gets a reply such as ‘fine’. In Italy, nobody asks how you are doing, because the answer will be a litany of ills. Yet we assume without any objective check that the American who says 8 is really twice as happy as the Italian who says 4?
  • Cultural differences. See above. Even apart from individual differences, some cultures consider themselves happy, others consider it bad form to tout one’s happiness. Yet all answers are treated as the same.
  • Religion. The poll assumes that Calvinism is a religion in the same way Islam is, or Hinduism, Buddhism or every flavor of Animism is. Just no. The concept of a devout Animist is absurd. Calling Buddhism a religion in the same way Lutheranism is a religion is absurd. Within each subset, similar problems are revealed by a moment’s reflection: Catholics – a group I know fairly well – consist both of those who were last in a church when baptised and will next be in a church for their funeral, who couldn’t give an account of what the Church believes, who nonetheless see themselves as devout, and those who attend daily Mass and study the catechism, who nonetheless feel themselves but meager Catholics. We count them all the same?
Happy baby photo
This baby is EXACTLY 9.7365 happy. EXACTLY! It’s SCIENCE!

And so on, across problems with language – do the terms mean the same things across all languages? – sampling questions, consistency, methodology – none of which matters in the least because HAPPINESS AND RELIGIOSITY ARE NOT NUMBERS.

If you call yourself a scientist or even a supporter of science, and fell for this, you are an ignorant fool. Not to put too fine a point on it.

3. Looks like we’re done with the rainy season here in Contra Costa County and perhaps the state as a whole. Last storms are petering out in the eastern mountains, and nothing else is forecast. We typically get very little rain after March.

I got a weighted average of 72.26% (speaking of ridiculous claims of accuracy – but hey, it’s math!) of average rainfall over the 30 rainfall gauges of the Contra Costa Flood Control District. Last year, we had 178% even over 29 gauges. Over the last 2 years, according to my highly suspect but probably about right methodology, we got 125% of average rainfall.
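One reasonable way to produce such a weighted average (I make no claim this is exactly how I did it in my spreadsheet): weight each gauge by its own seasonal average, which reduces to total actual rainfall divided by total average rainfall. The gauge figures below are invented for illustration; the real district has 30 gauges:

```python
# Hypothetical per-gauge figures (inches); the real calculation used 30 gauges.
actual  = [10.2, 18.7, 25.1, 14.3]   # recorded this season
average = [15.0, 26.0, 33.0, 20.0]   # long-run seasonal average

# Weighting each gauge's percent-of-average by its average rainfall
# collapses to a simple ratio of totals.
pct_of_average = sum(actual) / sum(average) * 100
print(f"{pct_of_average:.2f}% of average")
```

The alternative – a plain unweighted mean of each gauge’s percent-of-average – would let a tiny gauge in a dry corner swing the district-wide number, which is why the weighting matters.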

So? I don’t know, but it seems to me we should probably not have to worry about water supply now, except the long-term worry about how we capture, distribute and use it. How about a 50 year project to improve water capture, reduce transportation system losses, examine whether we’re using water wisely, and return a large chunk of the Delta to wetlands? Instead of shrill panic? A man’s gotta dream.

  1. A favorite example from childhood: I read an article, probably in Sports Illustrated, where a guy claimed to be able to tell whether a professional American football player played offense or defense just by looking at his locker: offensive players would have all their stuff neatly hung up and organized; defensive players would just stuff their gear in or pile it on the floor. Why? Because offensive players who reach the professional level have to be able to execute a very specific and detailed plan for each play, while defensive players are filtered by their ability to disrupt those detailed plans. In the article, an exception was pointed out: there was an offensive lineman in this particular locker room whose gear was piled on the floor. A moment of interrogation revealed he’d been a defensive lineman until switched to offense in the pros.

Links. Science! The Usual.

Forbidden Kingdom still
It is said, master and student, walk their path side by side… to share their destiny, until their paths go separate ways.

Don’t want to start out too critical of what very well might be legitimate efforts to understand the brain and how people make decisions, but The Brains Behind Behavioral Science article from a mag called Behavioral Scientist seems to offer observations about as profound as Lu Yan’s comments to Jason Tripidikas in Forbidden Kingdom referenced above, but without the intention of making a joke. For example:

Crucially, by predicting—instead of passively registering—our environment, predictive coding allows our brain to conserve cognitive resources and guide our perception and action in a fast and efficient way. But this also means that what our brain notices and attends to is heavily determined by what we already know.

Ooooh-kay. In English: we tend to look for and notice familiar things in familiar environments. Since that would be what makes a familiar environment familiar, I’m not sure we got anywhere here.

The major contention, OK as a basis of scientific exploration as long as accompanied by awareness of the limits of such a view, is that the mind (human behavior standing in, in this case) is the way it is because the brain is the way it is. As a working hypothesis, such a notion might allow something to be discovered about the relationship of thought and volition to the physical state and capacities of the brain. Not bloody likely, but maybe. Such a view does not allow one to pass metaphysical go, nor collect 200 Kantian thalers, real or otherwise.

The essay continues:

From this perspective, it is easy to see how predictive coding explains our tendency to spot confirming evidence more readily than disconfirming evidence. And because most of these predictions are performed unconsciously, we are unaware of how our prior beliefs blend with new information from the real world. When it comes to explaining cognitive quirks like the confirmation bias, the brain is basically an engine of prediction.

That word – easy – I don’t think it means what you think it means. Also, the mind and perhaps the brain boggles at the notion of demonstrating the brain’s nature as a predictive engine. Basically, thoughts as an expression of brain activity is a tricky concept, to say the least. That materialists want it to be so doesn’t make it any less tricky.

By using neuroscience to prune behavioral concepts to relevant brain substrates (! – ed.), we can rationalize the zoo of biases. The outcome would be a simpler framework, with a map of behaviors observed in different situations linked to core cognitive functions. Such simplification has already begun and could both help communication among behavioral scientists and lead fundamental and applied research in new directions.

Our suspicions are confirmed. “Rationalize a zoo of biases.” Hmmm. Note that the writer is a behavioral scientist (whatever that might be) expressing her hope that the “zoo” – the diverse, animated collection of biases that seems to be her subject matter – can be rationalized, by which she clearly means organized in a more understandable way, by use of simple principles to be discovered through neuroscience. Note that this hope is expressed as a simple fact: “we can rationalize…” not as the more sane and scientific “we just might maybe be able to rationalize…” Nope, by applying the same sort of neuroscience by which we have gained rich insight into the inner spiritual life of dead salmon, we will – not may, not might – we WILL “prune behavioral concepts to relevant brain substrates.”

She gives this example:

For instance, by studying the way brains change as we age, neuroscientists can help address one of the major challenges for the next generation of behavioral scientists: how to target behavioral interventions for the vastly different brains of people of different ages, cultures, and socioeconomic levels.

Apart from the mere woolly incoherence of the above quotation, I for one would really not want the sort of thinker who could emit such a thought doing any sort of “behavioral interventions” on me under any circumstances.

It gets worse:

To assess differences among individuals, one objective alternative is “neural indexes.” Neural indexes are brain signatures of specific behaviors. Modern neuroscience has demonstrated that we can now use neural indexes to spot behavioral biases in different populations. Many cognitive biases (like risk aversion, the endowment effect, or framing effects) have already been reduced to specific brain structures or networks, enabling neuroscientists to expand the samples to people of different ages.

Aaaaand – the reference is a link to yet another fMRI study. TL;DR much past the pretty pictures. I will give them this: in the opening paragraphs I did read, the researchers use the word ‘suggests’ to describe certain much-to-be-hoped-for conclusions. Very consistent with proper scientific restraint in the face of the massive, hulking, shadow-casting unknowns that haunt the scientific mind (even one as modest as mine) when contemplating what is being claimed.  Contrast this with the casual confidence mentioned above. I merely note that unless some breakthrough has happened in the last 2 years that I’ve completely missed – unlikely – fMRI studies make phrenology look hard-science-y by comparison. Dead salmon, and all.

So perhaps some restraint would be in order, a little shadow of doubt?

Moving on, I saw this on Twitter, I think. It seemed apropos:

Yet, here’s another Twitter grab (I must figure out how one embeds these things!)

Psych direction

See here for my basic take on the often desperate looking attempts to distract people from the ongoing fraud that is sociological and psychological ‘research’ – poorly defined questions researched via dubious protocols and never replicated are published as ‘studies’ – that then, as the writer above notes, become the basis of public policy and popular culture.

(This reminds me – there’s a blog draft in the folder where I trace a particularly egregious example of ‘nothing to see here, citizens, move along’ through its permutations over time, where a study that had very publicly been used to beat conservatives was shown to actually have found the exact opposite conclusion – and so now needed to be pooh-poohed into dissipating vapors. Need to finish that one…)

Now on to cheerier news:

Here is an update on the story of honeybee hive collapse, a cautionary tale about needing to understand the problem before panicking and formulating drastic solutions. This is perhaps a good one to point out for my own sake, since I failed to think it through myself, and thus missed the obvious point: honeybees are livestock, animals domesticated, bred and cared for by people. ‘Wild’ honeybees, such as the hive we used to have in our front yard, are really feral – their ancestors escaped at some point from domesticated hives first brought over by English settlers 3 or 4 centuries ago.

Thus, the solution to hive collapse is not to be found, generally, in improving the natural environment, but in improving the applicable animal husbandry. And so it has happened: if hive collapse is reducing honeybee populations by up to 40%, then apiarists are going to breed more bees to make up for it – because bees are raised to pollinate crops and produce honey. As a bee farmer, I’m going to do what I can to have the right numbers of bees available for my business.

So we can pretty much stop panicking over hive collapse. Keep an eye on it, just don’t panic.

Finally, here’s a cool picture related to a recent blog post here:

While evil never sleeps, and there’s plenty wrong with the world, it serves no positive purpose to ignore real gains in the material basis for general human happiness. Real, concrete problems correctly understood can call forth real, concrete solutions that actually solve something – this chart is, I think, a monument to just such thinking. But focused problem-solving won’t bring the revolution any closer, and just might cause it to be postponed indefinitely – so it must be avoided and ridiculed at every step in the eyes of certain interests.

Friday. Link. Graph.

A. Good story here from Calah Alexander: Why you should let your kids take risks — especially when they might fail.

I’ve said that I’d never let my kids try a 10-day (unsupervised European trip – ed) in college, because what if what could have been for me comes true for them? What if they get lost, or mugged? What if they make a poor decision, choose the wrong stop, and get stranded outside an airport in a blizzard? What if they need help and can’t find it?

That one major snafu on our 10-day happened at the end, when we missed our flight back to Rome because we got off the train at the wrong stop. The airport in Brussels wouldn’t let us spend the night inside, so we huddled against the building instead, trying to stay out of the snow. The only thing we had to eat was a backpack full of Cadbury chocolates that my roommate had gotten in London.

As a parent, this story is terrifying. But it’s one of my favorite memories. We made it back to Rome cold, tired, sick of Cadbury, but alive and newly aware of our own resilience (and of the importance of navigational skills).

Ironically, protecting our kids from the pain of failure is itself a failure. It’s failing to let them experience the life we know is coming at them, the life we can’t protect them from forever.

Real choices matter to the kid, are supported by the family, and have real consequences. Leave out any of those three things, and the choosing is an illusion.

One final thing to add: kids also need to see adults sticking with the results of their own decisions. If mommy and daddy are running away – from their responsibilities, their spouses, their own kids – it becomes pretty much a given that the kids will grow up into bitter, whiny, irresponsible brats. We wouldn’t want that to happen.

B. Another chart showing something or other:

It’s from Pew, whose methodology is both widely respected and, to give them the benefit of the doubt, hopelessly flawed. In general, it’s unverified self reporting by the sort of people willing to take polls, with no concern wasted considering whether the responder is at all motivated to tell the truth. (1) The questions tacitly assume that the world really does fall into convenient polar positions on virtually every subject. Which would be really, really convenient – for pollsters. So don’t give Pew polls much weight, in general.

By happenstance, about the same time I saw this I read a quip somewhere, to the effect that ‘Sir,’ ‘Ma’am,’ and ‘Thank you’ will get you farther than a bachelor’s degree. Had to wonder: what’s the overlap between those red bars above and people who would nod at the folk wisdom of that quip? I’d quibble that a bachelor’s in something real PLUS the proper use of sir, ma’am and thank you is the real winning strategy. Nevertheless, with Pew, it is often not difficult to see which of the two either/or points of view they’re hammering the world into they want us to consider enlightened.

  1. I’ve wondered since the election about the reported 8% of blacks who voted for Trump. I believe the number was based on exit polls. Now, imagine, in the general atmosphere of the last election, if a black person would feel completely comfortable telling a stranger with a clipboard that he’d just voted for Trump. Not saying one way or the other about what the results show – just that the method used is ignoring a pretty big potential issue when it fails to account for social pressures, or just assumes they cancel out.

C. Something stupid for your possible amusement:

Something about rabbits and chickens, creatures with largely unearned reputations as pacifists, going all Wild West there’s-a-new-sheriff-in-town cracks me up a little. One struggles a little to come up with the proper Darwinian just-so story that explains such odd behavior away. Why are the chickens not content to let the rabbits kill each other if they want to? Have they adopted them, somehow?

D. Apologies. This is plain stupid. This is what an adolescent sense of humor + <45 seconds of web searching + <10 minutes of MS Paint will get you:

Female lawmakers ‘bare arms’ in sleeveless attire to support new House dress code

Bear Arms

Curse of the Pyramids

(Thoughts not yet fully formed follow. Like that’s anything new.)

When diagramming out some issue using a pyramid, we are invited (if not forced) to think hierarchically, in terms of a foundation, middle stories built on that foundation, and a crowning achievement/reward at the top:

We have food pyramids,

USDA Food Pyramid
If you carb-load, then eat your fruits and veggies and top them off with a cheeseburger, you have proved your virtue and may then have sweets!

Maslow’s Hierarchy,

Maslow’s hierarchy of needs pyramid
“Breathing, Food, Water, Sex, Sleep, Homeostasis, Excretion” – Hmmm – which one of these is not like the others? (Homeostasis in this context is a largely redundant catch-all for all the others – except one.) 

Management hierarchies:

CEO Pyramid
I think this is a joke. I hope it is a joke. At the very least, except for a few very highly specialized businesses, it should be way wider at the bottom, narrow rapidly and come to a pointy-point. 


Even hierarchies of disagreement:

Hierarchy of disagreement pyramid
So, we’re to base our disagreements on name-calling, and then work our way up through ad hominem past gainsaying and then finally arrive at something like reasonable discussion? Seems an unlikely progression. 

A moment’s reflection should reveal, I think, that except when applied to construction of large monumental structures, such pyramid thinking is very unlikely to apply to anything in the real world. Imagine, for example, a pyramid describing a bee hive:

Bee Hive Hierarchy

Or maybe:

Bee Hive Hierarchy II

Does either one of these make any sense at all? Of course not. Placing these relationships in a pyramid all but forces us to assume a hierarchy that isn’t there. Even the names – queens, drones, workers – are blatant anthropomorphizing. The queen isn’t commanding the workers any more than the workers are enslaving the queen.

In a similar way, all the examples given are nonsense. Our food habits aren’t hierarchical – we don’t build upon a base of carbs to support an apex of sweets and fats. Maslow’s diagram hides an important truth: belief in and desire for the good often motivates people to accept less fulfillment at the lower levels, or to disregard them entirely, because we are not defined by them. There are many accounts of brilliantly happy saints who went hungry, voluntarily eschewed sex, lived in times of turmoil, did not have a place to lay their heads, were shunned and mistreated by their contemporaries – and achieved a level of ‘self-actualization’ beyond anything known to Maslow’s philosophy.

In the flat moral universe I’m often on about here, the temptation is to see the world as a series of pyramids, where there’s a bottom level of oppression and mistreatment to be escaped, upper levels holding lower levels down with bad intent, and a struggle to invert the pyramid, somehow.

In almost every case, such an understanding is poor. Relationships are both more complex and subject to much more variety than can be even roughly approximated by layers in a triangle.

Pyramid thinking: a bad idea.

Talkin’ Bout the Weather Some More

One of my current web addictions (1) is the Contra Costa County Flood Control and Water Conservation District rain gauges page:

First 3 gauges of 29 total.

The Flood Control district maintains 29 automated rain gauges scattered around the 804 square miles of Contra Costa County. This table is automatically updated at the top of the hour, and a quarter after and a quarter til.

I’ve put together a little Google sheet that does a little math, where I can grab the data off this page and paste it in to get some percentages, totals and averages:

Bottom left corner of my rain totals spreadsheet. I’m tracking gauges that meet or exceed their average total rain year inches (24 out of 29 so far), calculating some percentages, and, defying all that is holy, doing some totals and averages across gauges.

Now, if you’re a math guy, and especially a science guy, this little snippet should make your head explode – so, so wrong! Doing totals, percentages and averages by gauge makes complete sense (however limited its use), but doing so *across* gauges?!? Huh?

(Here’s where I expose – to perhaps well-deserved ridicule – how a science-loving non-scientist goes about analyzing some data. The key step for me is, as always, philosophical: what am I looking at? What can it tell me in theory? What does it tell me in practice? These are questions that must be answered before you even bother to look at the numbers. Failure to do so is by far the most common technical failure in the Science! news I read: the writer doesn’t know what he’s looking at, doesn’t know the limits of what it can tell him, and then doesn’t understand what it is actually telling him. Stupidity and/or dishonesty is the dominant non-technical problem.)

The sneaky-bad part is that, until you think about it, it sort of makes sense: aren’t I getting an average for rainfall across Contra Costa County? No, I am not – the best I’m getting is the average of a bunch of point samples that are related in a manner that is not clearly understood.

First off, to think that an average of the gauges tells you something about rainfall in general over the area throughout which the gauges are deployed is making some assumptions. These 29 rain gauges represent, at best, a few square feet of the 804 square miles of CCC. Well? Are we supposing that these gauges are representative (whatever that might mean) of the other 803.9999 square miles? Why would we think that? What would we mean by it?
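To make that assumption visible, here’s a tiny Python sketch. Everything in it is invented for illustration – the rainfall figures are loosely inspired by the numbers discussed below, and the “square miles represented” weights are pure guesswork, since nobody has actually worked out what area each gauge speaks for. The point is only that a plain average of point gauges and an area-weighted estimate built from the very same gauges can disagree substantially:

```python
# Hypothetical data: gauge name -> (season inches so far, invented sq. miles
# that gauge is imagined to "represent"). None of this is District data.
gauges = {
    "DBL 22 (Mount Diablo)":     (51.0,  40),
    "CCP 43 (Concord Pavilion)": (14.0, 120),
    "KGR 38 (Kregor Peak)":      (16.0,  90),
}

# Naive average: treat every gauge as equally representative.
naive_mean = sum(inches for inches, _ in gauges.values()) / len(gauges)

# Area-weighted average: weight each gauge by the area it supposedly covers.
total_area = sum(area for _, area in gauges.values())
weighted_mean = sum(inches * area for inches, area in gauges.values()) / total_area

print(f"naive mean of gauges:  {naive_mean:.1f} in")
print(f"area-weighted estimate: {weighted_mean:.1f} in")
```

With these made-up weights the naive mean lands at 27.0 inches while the weighted estimate lands near 20.6 – a gap of over six inches from the same three readings. Pick different weights and you get a different “county rainfall.” Which is exactly the problem: the weights are the assumption, and the plain average just hides them by setting them all equal.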

Why are there 29 gauges? Why not just use one? More obviously, why are the totals at each gauge so different? Season total averages run from 11 inches up to 33 inches, and this year the differences in actual rainfall are at least as pronounced.

Contra Costa County is made up of at least 3 pretty distinct areas: The west-facing slopes of the Richmond/El Cerrito hills and the flats between them and the Bay, extensive hilly areas with a couple of hilly interior valleys punctuated by a big mountain (about 2/3 of the total area), and some flats on the delta to the far east.

Found here. The blue dotted lines do not represent a partition based on geographical features. If one wanted to do that, the left-hand line would be rotated about 60 degrees clockwise and moved west a bit, and the right-hand one pivoted about 60 degrees counterclockwise from the top point. Then you’d have something like rough climate zones. Very rough, as the south-to-north differences – farther-from-water differences – are not captured, and they can make a big difference.

Close to the center of this map is Mount Diablo (DBL 22). This year, Mount Diablo has gotten over 51 inches of rain, which is, according to my fun little spreadsheet, 186% of season average – and we’ve got a couple months more to go.

Immediately to the north of Mount Diablo are two gauges – the Concord Pavilion (CCP 43) and Kregor Peak (KGR 38). These two are among the 5 remaining gauges that have not yet reached their season average total. In fact, while Mount Diablo is almost 2 feet of rain over for the season so far, these two are about 3 and a half and 5 and a half inches under. The other three gauges that have not hit their seasonal average are much closer, and might hit it with the storms coming this weekend.

How could this happen? Two gauges within a couple of miles of Mount Diablo are not even getting average rainfall, while the mountain stands to get twice its average.

Consider this current predicted rain map:

Wish I’d have thought to capture yesterday’s, as it was much clearer.

Note Hawaii at the bottom center. That long line of rain from Hawaii to California is pretty much what the weather people call an atmospheric river – a Pineapple Express. This one, which blew through our neighborhood early this morning, was nothing like the size of the last couple. That stuff out to the east looks a bit more exciting. Zoomed in a little:


That thing that looks like a swirl? It is. When it reaches California in the wee hours of Friday, the rain will be pushed from south to north along the stronger, leading edge.

Speculating here: This puts (CCP 43) and (KGR 38) in Mount Diablo’s rain shadow. Gauges just south of Mount Diablo are all above average; the two directly north of it are below.

In a more typical Northern California rain year (2) the storms come down from the Gulf of Alaska, maybe or maybe not picking up some tropical moisture, and hit pretty much directly west to east. (CCP 43) and (KGR 38) would, in such cases, not be in the rain shadow of Mount Diablo, and might therefore get more rain compared to years like the one we’re having now (3). Thus, the season averages don’t really tell us what to expect. They are useless, really, for predictions, as what they give you is more like a blended picture of the two or more mechanisms by which California gets rain and snow (you can have both Gulf of Alaska storms and some tropical stuff in the same year, for example).

So, what am I getting if I average Mount Diablo with Concord Pavilion and Kregor Peak? Should I take the average of only 2 out of 3? Add some more gauges? It will make a difference. Fundamentally, there’s nothing magic about these 29 gauges or about the number 29 – we could add or subtract gauges to the mix, or even double count some we think particularly important or ‘representative’. There’s nothing to stop us, it might even make sense, under certain assumptions.

Nope, what my averages across gauges tell me is not that we’re at 130% of season-average rainfall so far in Contra Costa County. What they tell me is that the average across the gauges is 130% of the average of the total season rainfall for each gauge – and that is all. Which is not all that helpful, and is only interesting in a vaguely cabalistic sort of way.
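For the numerically inclined, the distinction can be put in a few lines of Python. All the gauge figures below are made up for illustration – they are not the District’s data – but they show that the spreadsheet’s average-of-per-gauge-percentages and the ratio of summed totals are genuinely different numbers, even for identical inputs:

```python
# Hypothetical figures for three gauges (not actual District data).
season_to_date = [51.0, 14.0, 16.0]   # inches caught so far at each gauge
season_average = [27.4, 17.5, 21.5]   # each gauge's full-season average

# Average of ratios: compute each gauge's percent-of-average, then average those.
# This is what my across-gauges spreadsheet number amounts to.
avg_of_ratios = sum(d / a for d, a in zip(season_to_date, season_average)) / len(season_to_date)

# Ratio of totals: all rain caught so far divided by all rain expected.
ratio_of_totals = sum(season_to_date) / sum(season_average)

print(f"average of per-gauge percentages: {avg_of_ratios:.0%}")
print(f"ratio of summed totals:           {ratio_of_totals:.0%}")
```

With these invented numbers the first comes out around 114% and the second around 122% – same data, different question, different answer. Neither one is “rainfall in Contra Costa County.”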

The point, if any, is that numbers which look perfectly reasonable at first glance sometimes do not, in fact, tell you much. And that I’m a LITTLE bit obsessive on occasion. In a fun way! Really!

  1. Other web addictions include: boat building (the 1337 woodworking skillz and empirical engineering fascinate me. Lapstrake for the win!), Sci Fi short films (there are a million of these, some quite good)  and primitive iron smelting (there’s a band out there named Bog Iron Bloom – wish I’da thought of that!). In my fantasy world, I’d dig my own bog iron, smelt it in a clay brick furnace, hammer it into an axe and iron nails, chop down some oak and build a Viking long ship – and make a Sci Fi short film about it! I’d need to find some people who don’t get sea sick to sail it for me, but I’m imagining that’s the least of the problems with this plan.
  2. This is when the discussion gets weird: our entire sample size upon which we base our assumption of ‘typical’ is only about 150 years long, and only a fraction of that has anything like the widespread measure-taking we use now. The oldest CCC Water District gauge dates back to only 1937; most are either from the 1970s – or since 2000. What would be an appropriate timeframe? 10,000 years? 100,000? Why or why not? Certainly, based on physical evidence (and there are more recent updates that show even more variation I can’t seem to lay my hands on at the moment), over 10,000 years, the averages would be different – and over 100,000 years, the median prediction would be: much colder, with a chance of more snow.
  3. If in fact we have more than one year like this – so far, I’ve only heard things like a 1 in 25 year, but the year isn’t over yet. This seems to me to be a very unusual year, one not captured well by rain gauges such as those discussed above. How many rain and snow gauges are there in the 6,000+ square mile drainage of the Feather River? Because the Oroville Dam is almost 50 years old – and this is the first time the emergency spillway has been used. And there’s more rain on the way, and a massive snowpack to melt. In other words, are we really capturing the full extent of this precipitation year? The physical evidence – reservoirs around the state at or near capacity with a couple months of rain still to go – suggests we’re not.

The Mystery of Workforce Participation

I’m willing to bet that workforce participation in hunter-gatherer societies is darn near 100% for the key demographic of men aged 25-54. Ya know? So, if all we want is high workforce participation, all we’d have to do is return to a hunter-gatherer economy.

Men aged 25-54, fully employed

Despite the great progress toward a return to just such an impoverished, backward economy made here in recent years (1), I’m thinking there’s more to it – for example, how many of us are willing to tolerate living in a less-than-total-employment economy in exchange for benefits such as cell phones, indoor plumbing, and hospitals.

Yet, invariably, any report of a decrease in the percentage of the adult working age population in the workforce is greeted as bad news. This one, for example. And, I hasten to add, it may be. But it might not be.

Question: are the countries more desirable to live in as one moves left to right on this chart? If not, why would one care about workforce participation? Should I prefer to live in Mexico or the Slovak Republic because a higher percentage of males work? Or do pay and opportunity figure into it?

The implied judgement: Higher and increasing workforce participation = good; lower and falling workforce participation = bad. Is this true? Or only sort of true within a certain range and for certain people?

Wouldn’t it be nice if the percentage of working aged people who no longer worked because they didn’t have to were routinely reported?  Sure, one imagines that many of the people not in the workforce would like to have a good paying job. One also imagines that most of the working age people in or out of the workforce would love to be in a position that they didn’t have to work unless they wanted to. That’s me, for sure – I’d retire in a heartbeat if I could, if, somehow, a few million bucks fell in my lap. Why not? I have plenty to keep me busy and entertained and useful every waking hour. Imagine all the essential blogging I’d do! Or not.

Starting over 20 years ago, it became quite possible, even common, for some mid-level guy or gal in the tech field to get some stock options or make a few astute stock purchases and then, a few years later, find themselves sitting on millions in assets. A lot of those folks looked in the mirror one morning, and thought: I don’t actually have to go to work anymore. And some of those soon didn’t. And more still do every day.

Rather than being some sort of problem, this is in fact the outcome of a free market most to be desired from an individual’s perspective. If I had a few million in the bank or its equivalent, think of all the good I could do! Think of all the time I’d have!

Now think of a nation with a growing number of such people, people who are attached in some sense to their money because they did, in some sense, earn it. What a wonderful place that would be! (That’s also the nightmare of statists everywhere, but that’s another story.)  If you were married or the adult child of such an one, you, too, might be able to not work if you didn’t want to – how cool is that? Soon, we’d have a growing pool of people with resources and time. Sure, I suppose, some waste it. But many would not – I fervently believe I would not. The possibilities of local action to make life better are endless!

So, while I’d readily believe that such people make up a small percentage of the decrease in workforce participation, should we not at least break them out of the total? Should we not celebrate them at least as much as we lament those who’d like a job but can’t get one?

On a more serious level, this equating of work with prosperity and, ultimately, with personal goodness itself (the hoary Protestant Work Ethic) is merely an example of how economic reporting, reflecting economic teaching, makes things much simpler and more black-and-white than they really are (2). Is growing manufacturing output a good or bad thing? How about a falling average workweek? Growing GDP? Falling consumer debt? Are these things, in and of themselves, good or bad? How can you tell? There are situations where they might be good or bad or indifferent. They might be good for some people, bad for others, and indifferent to others or on the whole. And there are plenty of other cases like this as well. At the very least, the standard disclaimer should say something like: Within a certain range, all other things being equal. Note: all other things are never equal.


  1. Just think of the low carbon footprint! I quiver!
  2. And that’s even before you reach the Marxist/Bernie level of willful stupid. Nope, here I’m talking about economics as understood by people at least trying to make sense.