How’s the Weather? 2018/2019 Update

In a recent post here you could almost hear the disappointment in the climate scientists’ words as they recounted the terrible truth: that, despite what the models were saying would happen, snowpack in the mountains of the western U.S. had not declined at all over the last 35 years. This got me thinking about the weather, as weather over time equals climate. So I looked into the history of the Sierra snowpack. Interesting stuff.

From a September 2015 article in the LA Times

This chart accompanies a September 14th, 2015 article in the LA Times: Sierra Nevada snowpack is much worse than thought: a 500-year low.

When California Gov. Jerry Brown stood in a snowless Sierra Nevada meadow on April 1 and ordered unprecedented drought restrictions, it was the first time in 75 years that the area had lacked any sign of spring snow.

Now researchers say this year’s record-low snowpack may be far more historic — and ominous — than previously realized.

A couple of things stand out from this chart, and I would like to commend them: first, it is a very pleasant surprise to see the data sources acknowledged. From 1930 on, people took direct measurements of the snowpack. The way they do it today is two-fold: first, they stick a long, hollow, calibrated pole into the snow until they hit dirt, and simply read the depth off the numbers on the side of the pole. Second, the snow tends to stick inside the pole, so they can weigh the core to see how much water is in the snow. They take these measurements in the same places on the same dates over the years, to get as close to an apples-to-apples comparison as they can. Very elegant and scientifilicious.
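For the curious, here is a minimal sketch of the arithmetic behind that weighed snow core, in Python. The tube diameter, core weight, and snow depth below are all invented for illustration (the article gives none of these), and real snow surveyors read the water content off a calibrated scale rather than computing it:

```python
import math

def snow_water_equivalent_cm(core_mass_g: float, tube_inner_diameter_cm: float) -> float:
    """Depth of liquid water the snow core would melt down to."""
    area_cm2 = math.pi * (tube_inner_diameter_cm / 2) ** 2
    water_volume_cm3 = core_mass_g / 1.0   # roughly 1 g of water per cubic cm
    return water_volume_cm3 / area_cm2     # spread the meltwater back over the tube's footprint

# Hypothetical example: a 4 cm diameter tube pulls a 350 g core out of 100 cm of snow.
depth_cm = 100.0
swe_cm = snow_water_equivalent_cm(core_mass_g=350.0, tube_inner_diameter_cm=4.0)
print(f"Snow depth: {depth_cm:.0f} cm, water content: {swe_cm:.1f} cm "
      f"(about {100 * swe_cm / depth_cm:.0f}% of the depth)")
```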

They also have many automated stations that measure such things in a fancy automatic way. I assume they did it the first way back in 1930, and added the fancy way over time as the tech became available. Either way, we’re looking at actual snow more or less directly.

Today’s results from the automated system. From the California Data Exchange System.

Prior to 1930, there was no standard way of doing this, and I’d suppose, prior to the early 1800s at the earliest, nobody really thought much about doing it. Instead, modern researchers looked at tree rings to get a ballpark idea.

I have some confidence in their proxy method simply because it passes the eye test: in that first chart, the patterns and extremes in the proxies look pretty much exactly like the patterns and extremes measured more directly over the past 85 years. But that’s just a gut feel; maybe there’s some unconscious forcing going on, some understatement of uncertainty, or some other factor making the pre-1930 estimates less like the post-1930 measurements. But it’s good solid science to own up to the different nature of the numbers. We’re not doing an apples-to-apples comparison, even if it looks pretty reasonable.

The second thing to commend the Times on: they included this chart, even though it in fact does not support the panic mongering in the headline. It would have been very easy to leave it out, and avoid the admittedly small chance readers might notice that, while the claim that the 2015 snowpack was the lowest in 500 years might conceivably be true, similarly low snowpacks have been a pretty regular occurrence over that same 500 years. Further, they might notice that those very low years have soon been followed by some really high years, without exception.

Ominous, we are told. What did happen? The 2015-2016 snowpack was around average, 2016-2017 was near record deep, and 2017-2018 was also around average. So far, the 2018-2019 season, as the chart from the automated system shows, is at 128% of the season-to-date average. What the chart doesn’t show: a huge storm is rolling in later this week, forecast to drop 5 to 8 feet of additional snow. This should put us well above the April 1 average (April 1 being around the usual date of maximum snowpack), with 7 more weeks to go. Even without additional snow, this will be a good year. If we get a few more storms between now and April 1, it could be a very good year.

And I will predict, with high confidence, that, over the next 10 years, we’ll have one or two or maybe even 3 years well below average. Because, lacking a cause to change it, that’s been the pattern for centuries.

Just as the climate researchers mentioned in the previous post were disappointed Nature failed to comply with their models, the panic mongering of the Times 3.5 years ago has also proven inaccurate. In both cases, without even looking it up, we know what kind of answer we will be given: this is an inexplicable aberration! It will get hotter and drier! Eventually! Or it won’t, for reasons, none of which shall entail admitting our models are wrong.

It’s a truism in weather forecasting that simply predicting tomorrow’s weather will be a lot like today’s is a really accurate method. If those researchers from the last post and the Times had simply looked at their own data and predicted future snowpacks would be a lot like past ones, they’d have been pretty accurate, too.

Still waiting for the next mega-storm season, like 1861-1862. I should hope it never happens, as it would wipe out much of California’s water infrastructure and flood out millions of people. But, if it’s going to happen anyway, I’d just as soon get to see it. Or is that too morbid?

Great Flood of 1862. Via Wikipedia.

Some Monday Links and Asides

In the too cool department: Some animated satellite orbits, with representations of speed, altitude, etc., all linked up so that if you click on anything, you get background information.  Looks like this: 

This is just a picture. The original is animated so that you can get a feel for the relative speeds involved, and has the links.  

I found it trying to research a sky-hook type element for a story, you know, to make it all sciency and stuff, and then of course burned an hour or two checking it out. (Not about to do pages of math to figure this out, but will google around a bit.) Did you know that there’s an Inter-Agency Space Debris Coordination Committee? I do now. If you go to the link and click on Graveyard Orbit, that’s the kind of stuff that will turn up.

Not sure if I’m happy or sad there was no internet when I was a kid. 

David Warren is at it again, elegantly and wittily telling us we’re all so, so doomed. He pens such gems as: 

It is the policy of the High Doganate to discourage rioting, even in France.

and 

For many of the older citizens, this must bring 1968 to mind. I know that I felt a twinge (ah, to be fifteen again, and wiser than those passing through their “terrible twos”). Indeed, Paris — where I once learnt the cobbles are numbered on the bottom so they may be put back in place after they’ve been used for missiles — has been unusually peaceful this last half-century. There used to be a revolution every ten or twenty years, and lesser annual uprisings over this and that. I can understand nostalgia.

and

But, according at least to me, the French are not unrepresentative of humankind. 

I have not made it to France yet, although I have flown over it to get to Italy a couple times. Should try to make it before I die, or before Notre Dame is replaced with a victory mosque. Whichever comes first. 

The phrase “sans-culottes” is one of those many phrases or words I have to google every time I see it. Just won’t stick. So here’s an experiment: if I blog about forgetting it, will I remember it? 

If this works, you may be seeing a lot more words and phrases I can’t seem to remember. 

Life in the past was neither rosy nor relentlessly desperate, even if it did run, more often than not, a lot closer to the relentlessly desperate end of the scale. But just because we would panic and despair if we were forced to live as medieval peasants doesn’t mean they were panicking and despairing. Mostly, it seems their lives were pretty OK by them. They certainly had the time and energy to build a large number of very nice buildings, for example, something relentlessly starving, desperate people can’t really do. You don’t build things that take lifetimes to complete if you’ve despaired.

Was thinking of the Battle of Towton, specifically the Towton grave. This was the bloodiest battle of the Wars of the Roses; the Towton grave contains a few dozen of the estimated 28,000 men who died that day. On the one hand, this battle reinforces the notion that the Middle Ages were barbarically violent. On the other hand, the 38 men who were buried in the Towton grave were, first of all, fit enough and far enough from starvation to fight. In an article I of course can’t find at the moment (my google-fu has failed me!) the writers described the bones as those of men aged about 16 to 50, mostly sturdy individuals with, for example, their teeth largely intact – at least, intact right up until a broad axe to the face loosened them up a bit.

Even the healed wounds tell of a life somewhat short of total desperation. Almost all the skeletons showed signs of healed-over injury, many having taken – and recovered from! – blows to the head serious enough to leave evidence of trauma in the skull bones. Yikes! But this shows that wounds were not always fatal, and that the Medievals knew enough to take care of them hygienically enough that the body could heal even serious wounds, at least some of the time.

Life was hard, death was close, but not so hard or close that it was not well worth living to the people living it, it seems. These people were fighting to the death, but not killing themselves in any great numbers. Instead, they built churches. Hmmm.

Thanks to all for the suggestions for what my mother-in-law might want to watch next. Taking a break from watching shows where good-looking people with charming accents kill each other in beautiful locales, she has settled into watching Heartland, a multi-generational soap opera – with horses! Attractive people with, sadly, bland American accents gallivant around beautiful countrysides riding, taking care of, and talking about horses. The horses are pretty.

The plot and subplots, as far as I’ve made out walking through the living room while the show is on, seem to involve a lot of dramatic confrontations and arguments. So far, nobody has killed anybody that I’ve noticed. This thing has been in production for 10 seasons, of which Helen has gotten through maybe 2 – so there’s plenty of time still.

The few minutes of it I have watched in passing illustrate a plot device similar to the notorious Idiot Ball: drive the plot by having people wildly overreact to every challenge and situation. On my occasional forays through the living room, I’ve seen characters engaged in vein-throbbing confrontations over business ideas, whether somebody loves her horse enough, trespassing, and butting in. Like Idiot Ball, if the people would stay calm and ask and answer a few reasonable questions, life would go on – but the show would not. Every routine interaction must become an existential crisis or challenge to somebody’s manhood or something. 

I’ve not watched more than a few minutes of soap operas over a lifetime to this point. I imagine this craziness is of the nature of the beast? 

Assorted Thursday Links & Comments

A. Cool maps: How land is used in the contiguous 48 states.

Land use map
From Bloomberg https://www.bloomberg.com/graphics/2018-us-land-use/

Comments: I heard somewhere that the entire world could be fed from California’s Central Valley alone. Given the commentary accompanying these maps, that’s not hard to believe: only a relatively small fraction of US farmland is used to directly grow food for humans. Mostly, it’s ethanol, animal feed – and fallow land. The Central Valley is, like so many things here in California, upscale – they grow almonds, watermelons and sweet corn instead of wheat, rice and beans. Nothing wrong with comparative ‘luxury’ foods, and I’d imagine the ‘feed the whole world’ claim might ignore the need to let fields rest occasionally, but it’s easy to imagine a world’s worth of rice and beans coming from the world’s single best agricultural region.

The enormous amount of land classified as ‘grazing’ also gave me pause. Included are vast swaths of Nevada, Utah and New Mexico where, having seen those areas, I’d have to think that if you were a herdsman, you’d be grazing not very many very tough animals, like longhorns or goats. Those areas look a lot more like deserts than pastures. Be that as it may, having driven through them occasionally, I have never seen very many cattle. Classifying land as grazing land doesn’t seem to mean anything much is grazing on it.

Finally, the eastern quarter of the US seems to be largely forests. This comports with my experience. As soon as you get out of the larger cities, you more often than not end up among the trees. (Of course, I’m driving, so these areas = what you can see from the road, in my experience.) My experience of California is that about 1/3 of it is covered in forests as well – but the classification system puts national and state parks in another category, so those forests are not forests. Still, driving around or flying over the northern third of the state would certainly give the impression that you’re seeing pretty much contiguous forests. But, hey, it’s a government classification system versus my lying eyes – who you going to believe?

B. Stopped clock division: School Shootings. Or not. Suffice it to say that, like Mark Twain’s death, the number of school shootings has been greatly exaggerated. Just under 240 show up on the US Department of Education’s report. NPR (doing some actual reporting. For once.) managed to confirm – 11. 238 is a little different from 11. I have a minor in math, so you can trust me on this.

There’s this concerted effort – NPR, SciAm, NYT and 538 are in on it, at least – to shake collective heads (heads often positioned above lab coats, after all) and declare that Science is Hard when presented with examples of Science! being stupid, dishonest, and laughably incompetent in some combination.

In this case, the takeaway should be: if the US Department of Education were to tell you the sun rises in the east, you’d probably want to stick your head out the window some early morning and verify. I occasionally have pointed out egregiously self-serving numbers from this source. Graduation and drop-out rates? I’d expect no more accuracy in them than in the school shooting numbers from the link.

The USDE does have a difficult task: it needs to present government-controlled compulsory schools as both successful enough to warrant our continued support, yet dismal enough to warrant continued calls for more money, more time, more homework – in effect, more school. Round up those drop-outs! (and home schoolers!) More homework and before- and after-school programs! Free college all around! Because the solution to people failing and bailing on school, to graduating without a measurable trace of education, is: More school!

That these claims contradict each other is never to be mentioned.

C. Not a link – who needs it, at this point? We live in an age where character assassination is not only an acceptable response to allegations we don’t like, but is frankly the only acceptable response. Used to be the sign of an educated person that they could separate the argument or claim from the person making it, and deal with each separately. A scoundrel might speak the truth, after all, and a saint might be in error.

We’ve progressed beyond such simple, might I say even binary, thinking!

A few brief highlights along the road we took to get here:

Hegel: Classic logic, including especially the Law of Non-contradiction, is for the little people. Real philosophers, who can be identified by their agreement with Hegel, just know stuff. The enlightened are enlightened by their enlightenment, and no argument, especially a logical argument, can gainsay them!

Marx: Hegel was correct, except true enlightenment consists of agreeing with Marx. We’ll call such agreement being on the Right Side of History.  If you persist in disagreeing with Marx or are even simply unaware of Marx’s views, you are on the wrong side of History, tools of oppression, and have marked yourself for culling at the earliest opportunity.

Freud: You only disagree with Freud because you’re sexually repressed. Your outrageous demands for evidence, replication, acknowledgement of other theories, and so on just mean you’re really, really sexually repressed.

Oliver Wendell Holmes Jr.: There are work-a-day lawyers and judges out there for whom my fine theories are too elevated. But you enlightened lawyers and judges, who of course can be identified by your agreement with me, have figured or are ready to figure it all out.

And so on.

Lesser minds than these 4 gentlemen will see the weapon here without feeling in the least encumbered by the wall of words disguising it. Cutting to the chase: if you disagree with me, you are not just wrong, but evil! We have no duty to understand or even acknowledge your position or claims, there could be no point to that, because you are a bad actor acting badly!

In other words, argument has been reduced to simply asserting your opponent is a bad actor. Trying to reason about it is simply more evidence of evil. The Kafka trap is now the norm, and has been for decades.

This right here is a key feature of the modernism all those 19th century popes were condemning.

Science, Medicine & Me (or You)

One of the problems, or challenges, if you prefer, with medicine is that few people who become doctors do it because they love science. They become doctors typically because they want to help people, a very fine reason. But this means that when the situation calls for a more scientific examination of the evidence before them, or of the value or pertinence of this or that finding or practice, your doctor is likely operating at a disadvantage.

Ultimately, tradition, law, and their insurance companies all but force them to stick to conventional, approved approaches to everything. In general, this is not a bad thing – you certainly don’t want your doctor freestyling it when it is your health on the line. If doctors everywhere treat a given constellation of symptoms in a particular way, that would probably be the place to start. But it’s not in itself science.

I recall, a couple decades ago, that every time we took a kid to the pediatrician – and we loved our pediatrician – she would feel compelled to advise us not to let the little ones sleep with us, family-bed style. I can just hear her teachers, or her professional bulletins, or her insurance company telling her it was best practice, that kids die every year smothered in a bed with an adult, and just don’t do it. Now, of course, pediatricians in the US (it was never policy in the rest of the world) have come to realize that the benefits to both the child and the parents of having the child in the parents’ bed far outweigh the minuscule risks. Such risks effectively disappear if the child and parents are healthy and the parents sober, while the sense of comfort and attachment gained by the child and the extra sleep gained by the parents are a serious win-win.

Now a scientifically inclined person might ask about the data and methodology behind the claim that hundreds of thousands of years of natural selection had somehow gotten the whole baby sleeps with parents thing wrong.  Maybe it has – but the issue should have at least been addressed. But it wasn’t, in America, at least up until a few years ago. So every American pediatrician was expected to toe the official line, and our doctors did. For, what if you hadn’t advised parents against ‘co-sleeping’ and a baby died? You’d be asking for a lawsuit.

Another example I’ve mentioned before is salt intake. This one is a little different, in that for some people, there seems to be a fairly strong correlation between salt intake and blood pressure, so at least being concerned about it isn’t crazy.

For most people, however, there is little or no correlation between salt intake and blood pressure, at least within realistic levels of consumption. In one of the earliest studies, rats were given the human equivalent of 500g of salt a day, and their blood pressure shot right up! But humans tend to consume around 8.5g of salt a day. Um, further study would be indicated?

The science would seem to support some degree of caution regarding salt intake for people with high blood pressure. Instead, what we get are blanket recommendations that everybody – everybody! – reduce salt intake. It will save lives! Medicine cries wolf. People learn to ignore medical advice. Further, Medical Science! fails to consider what it is asking for – a complete overhaul of people’s diets. Few real people are going to do this without serious motivation. Wasting ammo on a battle not worth winning.

Again, if doctors were essentially scientists attracted to medicine for all the opportunities for scientific discovery human health presents, such errors and poor judgement might be more limited. But doctors became doctors to help people, not to debate scientific findings with them. They want to DO something. Thus, conventional medical practice is full of stuff to do for every occasion. Whether or not there’s really any science behind it is not as important, it seems, as developing practices to address issues so that medicine itself can be practiced. Clearly, for the average doctor, having *something* to do is better than having nothing to do, even when that something isn’t all that well supported or even understood.

These thoughts are on my mind because of all the trouble I’m having with blood pressure medicines at the moment. Since there’s an obvious trade-off here – somewhat higher blood pressure with a higher quality of life versus acceptably lower blood pressure but with lower quality of life – I decided I needed to do a little basic research. Here’s where I’m at after a very preliminary web search:

What I’m looking for, and have so far failed to find, is a simple population level chart, showing the correlation between blood pressure and mortality/morbidity. Of course, any usefully meaningful data would be presented in a largish set of charts or tables, broken out by such variables as age, sex, and body mass index.  But I would settle at this point for any sort of data at all, showing how much risk is added by an additional 10 points or 10 percent, or however you want to measure it, above ‘normal’ blood pressure.

For example, I’m a 60 year old man. Each year in America, some number of 60 year old men drop dead from high-blood-pressure-related illnesses. OK, so, the base data is: at what rate do 60 year old men drop dead from high blood pressure related diseases? Let’s say it’s .1% (just making up numbers for now), or 1 out of every 1,000 60 year old men. That’s heart attack and stroke victims, with maybe a few kidney failures in there, severe enough to kill you, a 60 year old man.

Now we ask: what effect does blood pressure have on these results? Perhaps those 60 year old men running a 120/80 BP die at only a .05% rate – one out of every 2,000 drops dead from heart attacks, strokes or other stray high-BP-related diseases. Perhaps those with 130/90 (these results and ages would most likely be banded in real life, but bear with me) die at a .09% rate, while those with 150/100 die at a .2% rate, and those above 150/100 die at a horrifying 1% rate, or 10 times as much as the old dudes with healthy blood pressure. These numbers would all need to average out to the .1% across the population, but a high degree of variability within the population would not be unexpected.
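To make that ‘average out’ constraint concrete, here is a toy Python check using the made-up death rates above plus some equally made-up population shares (I have no data on how many 60 year old men fall in each band):

```python
# Toy check: each band's death rate, weighted by the (invented) share of
# 60 year old men in that band, has to average back out to the assumed
# overall rate of 0.1%. None of these figures are real data.
bands = {
    # band: (share of 60 year old men, annual death rate from BP-related disease)
    "120/80 and below": (0.55, 0.0005),   # 1 in 2,000
    "around 130/90":    (0.30, 0.0009),
    "around 150/100":   (0.13, 0.0020),
    "above 150/100":    (0.02, 0.0100),   # 1 in 100
}

overall = sum(share * rate for share, rate in bands.values())
print(f"Implied overall rate: {overall:.3%}")   # lands right around 0.1%

for band, (share, rate) in bands.items():
    print(f"{band:>16}: {rate:.2%} per year, {rate / 0.0005:.1f}x the lowest band")
```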

Or maybe something else entirely.  But I have yet to find such charts. I’ve found interesting tidbits, like FDR’s BP in 1944 when his doctor examined the by-that-time very ill president: it “was 186/108. Very high and in the range where one could anticipate damage to “end-organs” such as the heart, the kidneys and the brain.” So I gather BP in that range is very bad for you, or indicates that something else very bad for you is going on (FDR had a lot of medical issues and smoked like a chimney).

Then there’s this abstract, suggesting in the conclusion that my quest is going to be frustrated:

Abstract

Objectives

Quantitative associations between prehypertension or its two separate blood pressure (BP) ranges and cardiovascular disease (CVD) or all-cause mortality have not been reliably documented. In this study, we performed a comprehensive systematic review and meta-analysis to assess these relationships from prospective cohort studies.

Methods

We conducted a comprehensive search of PubMed (1966-June 2012) and the Cochrane Library (1988-June 2012) without language restrictions. This was supplemented by review of the references in the included studies and relevant reviews identified in the search. Prospective studies were included if they reported multivariate-adjusted relative risks (RRs) and corresponding 95% confidence intervals (CIs) of CVD or all-cause mortality with respect to prehypertension or its two BP ranges (low range: 120–129/80–84 mmHg; high range: 130–139/85–89 mmHg) at baseline. Pooled RRs were estimated using a random-effects model or a fixed-effects model depending on the between-study heterogeneity.

Results

Thirteen studies met our inclusion criteria, with 870,678 participants. Prehypertension was not associated with an increased risk of all-cause mortality either in the whole prehypertension group (RR: 1.03; 95% CI: 0.91 to 1.15, P = 0.667) or in its two separate BP ranges (low-range: RR: 0.91; 95% CI: 0.81 to 1.02, P = 0.107; high range: RR: 1.00; 95% CI: 0.95 to 1.06, P = 0.951). Prehypertension was significantly associated with a greater risk of CVD mortality (RR: 1.32; 95% CI: 1.16 to 1.50, P<0.001). When analyzed separately by two BP ranges, only high range prehypertension was related to an increased risk of CVD mortality (low-range: RR: 1.10; 95% CI: 0.92 to 1.30, P = 0.287; high range: RR: 1.26; 95% CI: 1.13 to 1.41, P<0.001).

Conclusions

From the best available prospective data, prehypertension was not associated with all-cause mortality. More high quality cohort studies stratified by BP range are needed.

Ok, so here is some information. Let’s chart it out as best we can. Here is the diagnostic banding used by the medical profession here in the US. I note it is unadjusted for age or anything else, which is fine, got to start somewhere:

  • Normal blood pressure – below 120 / 80 mm Hg.
  • Prehypertension – 120-139 / 80-89 mm Hg.
  • Stage 1 hypertension – 140-159 / 90-99 mm Hg.
  • Stage 2 hypertension – 160 / 100 mm Hg or higher.

The meta-study above further divides the prehypertension range into a high and low as follows:

  • Low Prehypertension – 120–129/80–84 mmHg
  • High Prehypertension – 130–139/85–89 mmHg

This particular study does nothing with Stage 1 and 2 hypertension – too bad. But it’s mostly those prehypertension numbers I’m worried about personally. Anyway, here’s what we’ve got so far.

BP graph 1

We will here ignore what looks like a bit of statistical hoodoo – we’re blending different studies, then calculating p-values and confidence intervals on the combined results – um, maybe? Perhaps if Mr. Briggs or Mr. Flynn drops by, they can give a professional opinion. Me, I’m just – cautious. So, what’s this telling us?

If I’m reading it correctly – not a given by any stretch – we’ve determined the total relative risk or RR (a term of art, but it sorta means what it seems to mean) at the base state and the three partially overlapping prehypertension states, based on both systolic and diastolic BP ranges, on both an ‘All Causes’ and a cardiovascular-disease basis. What this appears to say is that a meta-analysis of 13 studies of nearly a million people over several decades shows that your risk of death from any cause increases .03 RR points, or 3% over the base value, if your BP runs a little high, but that your risk of death from cardiovascular disease increases 32%. Which doesn’t exactly make sense if one assumes cardiovascular deaths are part of ‘All Causes’ – and why wouldn’t they be? – unless slightly high BP somehow reduces the sum of all other risks. Also, the analyses run over the two sub-ranges of low and high prehypertension do not look like they could possibly add up to the values over the entire prehypertension range – which could well be an artifact of the statistical analysis used. If that is the case, does not logic indicate that the results are quite a bit less certain than the p-values and confidence intervals would suggest? Again, I am very much an amateur, so I could be a million miles off, but these are the questions that occur to me.

The critical piece missing for my purposes: what scale of risk does the RR here represent? A 32% increase in a .01% chance of Bad Things Happening is hardly worth thinking about; a 32% increase in a 20% risk of Bad Things is a whole ‘nuther kettle of fish.
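To show what that scale question looks like in numbers: the only figure below taken from the abstract is the RR of 1.32 for CVD mortality in the prehypertensive group; the baseline risks are invented purely to illustrate why the baseline matters as much as the ratio.

```python
# A relative risk only means something against a baseline absolute risk.
rr_cvd = 1.32   # CVD mortality RR for prehypertension, from the quoted abstract

for baseline in (0.0001, 0.01, 0.20):   # hypothetical baseline risks over some period
    elevated = baseline * rr_cvd
    extra = elevated - baseline
    print(f"baseline {baseline:.2%} -> {elevated:.2%} "
          f"(+{extra:.3%}, roughly 1 extra event per {1 / extra:,.0f} people)")
```

Same 32%, wildly different practical significance depending on the starting point.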

I’m about researched out for the moment, will continue to google around for more information when I get a moment.

UPDATE:

Is it obvious enough that I’m a LITTLE COMPULSIVE? Just found this, at the Wiley Online Library: 

BP chart 2

I don’t know if these are annualized (per year) numbers, or a total across the entire age range, and that ‘all significant’ part worries me a little, but: this seems to be saying that I, a 60 year old man, could expect a 1.8% chance of cardiovascular disease (however that’s defined) if my systolic blood pressure falls between 120 and 139 – or, more important to my purposes, only minusculely more risk than if my BP were the more desirable 120.

This is in line with what one would expect from the data in the previous chart.

Still a lot more work to do here.

Friday Flotsam

1. Zuckerberg. Ah, Zuckerberg. Not a big fan of armchair psychology unless it’s me that’s doing it. So, grain of salt and all that.

Over the years, I have run into a number of people in my position: working with techies without being a techie. People in sales, PR, management, even a retired corporate psychologist. It’s remarkable how the discussion will eventually, usually pretty quickly, get around to the same issue: the blindness of successful techies to how normal people think and react. Stereotypes get that way because they’re so often accurate.

If I have a big Theory of Life, it might be described as Filter Theory: with greater or lesser intent, people are sorted and assigned roles according to filters. Nobody becomes a cop, for example, unless he can tolerate lots of rules and bureaucracy and doesn’t shy away from the threat of violence. The vast majority of people, it seems to me, would not make very good cops, at least according to the current job description. We find common denominators across all sorts of otherwise different people if they share a profession. (1) Nothing too profound here, just an observation to keep in mind.

Our once and future robot overlord. 

Nobody can become successful in computer technology unless he can tolerate sitting in front of a screen for hours every day and stay focused on increasingly arcane minutia. People with a high need for human interaction need not apply. In fact, finding human interaction baffling or unpleasant would tend to drive people toward careers where they can be successful without having to deal too much with other human beings.

Further, there are kinds of insanity that result in sleeping in a cardboard box or padded cell; there are also kinds that result in becoming CEO or sales leader. In the case of tech, there are many, many really good guys who are aware on some level that they’re not very good at picking up what other people are feeling or thinking. These folks tend to be that sort of shy geek that is easy to love – and who rarely rises much in the hierarchy.

Then there are those who, if not out and out sociopaths, are at least blissfully unaware of how other people think and react. They just assume other people are stupid or ignorant. They are confident that things would go so much better if only they were in charge. In a tech environment, these people tend to become management. Sometimes – woe to us! – they even come up with a good enough idea that they found a company or 3.

Thus, we get the spectacle of Zuckerberg. I believe he really, truly does not get how hopelessly arrogant and frankly stupid he looks to normal people. The most terrifying aspect: he’s rich enough to have gotten away with it so far. His ego is probably utterly impenetrable. He is absolutely sure the only problem here is that everybody else is stupid.

I passionately hope somebody finds a way to put him in jail for a year or two. That’s about the only hope we have of getting through to these fools. It’s a slim hope, but it’s about all we’ve got.

2. A discussion of this article took place on this blog. Here we have Science! in all its glory: some sample of people in nations around the world are asked, using a variety of ‘instruments’ no doubt, about how ‘religious’ they are and how ‘happy’ they are. Then, tossing all this ‘data’ in a blender, we are called to conclude that the more religious the people in an area are, the more unhappy the people in that area will be.

Where to even start? Note first of all that it’s not claimed that it’s the same people – in other words, one set of people might be very religious and happy, while another set, let’s say a bigger set, is mildly irreligious and very miserable. The average – whatever that might mean! Average of what, exactly? – might show relatively high religiosity on average and relatively high misery on average, but miss entirely *who* was happy and who was miserable.

Really, too much stupidity to be sorted through. Let’s laundry-list this thing, at least the high points:

  • Reification. To plot the graphs shown, you would need *numbers*. Happiness, sadness, religiosity are NOT in ANY WAY numerical. Nobody is 0.7 happy, nor 28.334 sad, nor 87% religious. Do not pass go until you understand this. It is simply nonsense to assign numbers to responses on a poll and act like you can then add them up and perform math on them. Simple and complete nonsense. Cooking up an ‘instrument’ that forces people to give numerical answers doesn’t magically make the thing numerical.
  • Polls. Undefined terms. So some undergrad needing extra credit shoves a poll into somebody somewhere who has time to answer polling questions, and asks something like: on a scale of 1 to 10, how happy are you? Somebody says 8. Somebody else says 6. Yet another person says 3. Well? Who is happier? WE DON’T KNOW!!!! Happiness is not numerical, and, even if it were, 3 people will each have his own unique and possibly mutually exclusive ideas of what happiness means.
  • Self reporting. In America, one routinely asks ‘how you doing?’ and routinely gets a reply such as ‘fine’. In Italy, nobody asks how you are doing, because the answer will be a litany of ills. Yet we assume, without any objective check, that the American who says 8 is really twice as happy as the Italian who says 4?
  • Cultural differences. See above. Even apart from individual differences, some cultures consider themselves happy, others consider it bad form to tout one’s happiness. Yet all answers are treated as the same.
  • Religion. The poll assumes that Calvinism is a religion in the same way Islam is, or Hinduism, Buddhism or every flavor of Animism is. Just no. The concept of a devout Animist is absurd. Calling Buddhism a religion in the same way Lutheranism is a religion is absurd. Within each subset, similar problems are revealed by a moment’s reflection: Catholics – a group I know fairly well – consist both of those who were last in a church when baptised and will next be in a church for their funeral, who couldn’t give an account of what the Church believes, who nonetheless see themselves as devout, and those who attend daily Mass and study the catechism, who nonetheless feel themselves but meager Catholics. We count them all the same?

This baby is EXACTLY 9.7365 happy. EXACTLY! It’s SCIENCE!

And so on, across problems with language – do the terms mean the same things across all languages? – sampling questions, consistency, methodology – none of which matters in the least because HAPPINESS AND RELIGIOSITY ARE NOT NUMBERS.
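A toy example of the reification point, since it is the root problem: the ‘average happiness’ you compute depends entirely on the numbers you choose to pin on the answers. The responses and codings below are invented.

```python
# Three numeric codings that all respect the ordering of the labels, yet
# yield different "average happiness" values for the same answers. The
# arithmetic is an artifact of the coding, not a property of the data.
responses = ["very happy", "fairly happy", "not too happy", "very happy", "fairly happy"]

codings = {
    "coding A": {"not too happy": 1, "fairly happy": 2, "very happy": 3},
    "coding B": {"not too happy": 1, "fairly happy": 5, "very happy": 6},
    "coding C": {"not too happy": 0, "fairly happy": 9, "very happy": 10},
}

for name, scale in codings.items():
    scores = [scale[r] for r in responses]
    print(f"{name}: mean 'happiness' = {sum(scores) / len(scores):.2f}")
```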

If you call yourself a scientist or even a supporter of science, and fell for this, you are an ignorant fool. Not to put too fine a point on it.

3. Looks like we’re done with the rainy season here in Contra Costa County and perhaps the state as a whole. Last storms are petering out in the eastern mountains, and nothing else is forecast. We typically get very little rain after March.

I got a weighted average of 72.26% (speaking of ridiculous claims of accuracy – but hey, it’s math!) of average rainfall over the 30 rainfall gauges of the Contra Costa Flood Control District. Last year, we had 178% even over 29 gauges. Over the last 2 years, according to my highly suspect but probably about right methodology, we got 125% of average rainfall.
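For what it’s worth, here is one plausible reading of that ‘weighted average’ in Python: weight each gauge’s percent-of-average by its own long-term average, which works out to summing the actuals and dividing by the summed averages. The gauge names and numbers below are invented; the district publishes the real ones.

```python
gauges = [
    # (gauge, season-to-date rainfall in inches, long-term average for the same date)
    ("Gauge A", 14.2, 21.0),
    ("Gauge B",  9.8, 12.5),
    ("Gauge C", 18.0, 24.0),
]

total_actual = sum(actual for _, actual, _ in gauges)
total_average = sum(avg for _, _, avg in gauges)
print(f"District-wide: {100 * total_actual / total_average:.2f}% of average rainfall")

for name, actual, avg in gauges:
    print(f"{name}: {100 * actual / avg:.0f}% of its own average")
```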

So? I don’t know, but it seems to me we should probably not have to worry about water supply now, except for the long-term worry about how we capture, distribute and use it. How about a 50 year project to improve water capture, reduce transportation system losses, examine whether we’re using water wisely, and return a large chunk of the Delta to wetlands? Instead of shrill panic? A man’s gotta dream.

  1. A favorite example from childhood: I read an article, probably in Sports Illustrated, where a guy claimed to be able to tell whether a professional American football player played offense or defense just by looking at his locker: offensive players would have all their stuff neatly hung up and organized; defensive players would just stuff their gear in or pile it on the floor. Why? Because offensive players who reach the professional level have to be able to execute a very specific and detailed plan for each play, while defensive players are filtered by their ability to disrupt those detailed plans. In the article, an exception was pointed out: there was an offensive lineman in this particular locker room whose gear was piled on the floor. A moment of interrogation revealed he’d been a defensive lineman until switched to offense in the pros.

Links. Science! The Usual.

It is said, master and student, walk their path side by side… to share their destiny, until their paths go separate ways.

Don’t want to start out too critical of what very well might be legitimate efforts to understand the brain and how people make decisions, but The Brains Behind Behavioral Science article from a mag called Behavioral Scientist seems to offer observations about as profound as Lu Yan’s comments to Jason Tripidikas in Forbidden Kingdom referenced above, but without the intention of making a joke. For example:

Crucially, by predicting—instead of passively registering—our environment, predictive coding allows our brain to conserve cognitive resources and guide our perception and action in a fast and efficient way. But this also means that what our brain notices and attends to is heavily determined by what we already know.

Ooooh-kay. In English: we tend to look for and notice familiar things in familiar environments. Since that would be what makes a familiar environment familiar, I’m not sure we got anywhere here.

The major contention, OK as a basis of scientific exploration as long as accompanied by awareness of the limits of such a view, is that the mind (human behavior standing in, in this case) is the way it is because the brain is the way it is. As a working hypothesis, such a notion might allow something to be discovered about the relationship of thought and volition to the physical state and capacities of the brain. Not bloody likely, but maybe. Such a view does not allow one to pass metaphysical go, nor collect 200 Kantian thalers, real or otherwise.

The essay continues:

From this perspective, it is easy to see how predictive coding explains our tendency to spot confirming evidence more readily than disconfirming evidence. And because most of these predictions are performed unconsciously, we are unaware of how our prior beliefs blend with new information from the real world. When it comes to explaining cognitive quirks like the confirmation bias, the brain is basically an engine of prediction.

That word – easy – I don’t think it means what you think it means. Also, the mind and perhaps the brain boggles at the notion of demonstrating the brain’s nature as a predictive engine. Basically, thoughts as an expression of brain activity is a tricky concept, to say the least. That materialists want it to be so doesn’t make it any less tricky.

By using neuroscience to prune behavioral concepts to relevant brain substrates (! – ed.), we can rationalize the zoo of biases. The outcome would be a simpler framework, with a map of behaviors observed in different situations linked to core cognitive functions. Such simplification has already begun and could both help communication among behavioral scientists and lead fundamental and applied research in new directions.

Our suspicions are confirmed. “Rationalize a zoo of biases.” Hmmm. Note that the writer is a behavioral scientist (whatever that might be) expressing her hope that the “zoo” – the diverse, animated collection of biases that seems to be her subject matter – can be rationalized, by which she clearly means organized in a more understandable way, by use of simple principles to be discovered through neuroscience. Note that this hope is expressed as a simple fact: “we can rationalize…” not as the more sane and scientific “we just might maybe be able to rationalize…” Nope, by applying the same sort of neuroscience by which we have gained rich insight into the inner spiritual life of dead salmon, we will – not may, not might – we WILL “prune behavioral concepts to relevant brain substrates.”

She gives this example:

For instance, by studying the way brains change as we age, neuroscientists can help address one of the major challenges for the next generation of behavioral scientists: how to target behavioral interventions for the vastly different brains of people of different ages, cultures, and socioeconomic levels.

Apart from the mere woolly incoherence of the above quotation, I for one would really not want the sort of thinker who could emit such a thought doing any sort of “behavioral interventions” on me under any circumstances.

It gets worse:

To assess differences among individuals, one objective alternative is “neural indexes.” Neural indexes are brain signatures of specific behaviors. Modern neuroscience has demonstrated that we can now use neural indexes to spot behavioral biases in different populations. Many cognitive biases (like risk aversion, the endowment effect, or framing effects) have already been reduced to specific brain structures or networks, enabling neuroscientists to expand the samples to people of different ages.

Aaaaand – the reference is a link to yet another fMRI study. TL;DR much past the pretty pictures. I will give them this: in the opening paragraphs I did read, the researchers use the word ‘suggests’ to describe certain much-to-be-hoped-for conclusions. Very consistent with proper scientific restraint in the face of the massive, hulking, shadow-casting unknowns that haunt the scientific mind (even one as modest as mine) when contemplating what is being claimed.  Contrast this with the casual confidence mentioned above. I merely note that unless some breakthrough has happened in the last 2 years that I’ve completely missed – unlikely – fMRI studies make phrenology look hard-science-y by comparison. Dead salmon, and all.

So perhaps some restraint would be in order, a little shadow of doubt?

Moving on: saw this on Twitter, I think. It seemed apropos:

Yet, here’s another Twitter grab (I must figure out how one embeds these things!)

Psych direction

See here for my basic take on the often desperate looking attempts to distract people from the ongoing fraud that is sociological and psychological ‘research’ – poorly defined questions researched via dubious protocols and never replicated are published as ‘studies’ – that then, as the writer above notes, become the basis of public policy and popular culture.

(This reminds me – there’s a blog draft in the folder where I trace a particularly egregious example of ‘nothing to see here, citizens, move along’ through its permutations over time, where a study that had very publicly been used to beat conservatives was shown to have actually found the exact opposite conclusion – and so now needed to be pooh-poohed into dissipating vapors. Need to finish that one…)

Now on to cheerier news:

Here is an update to the story of honeybee hive collapse, a cautionary tale about needing to understand the problem before panicking and formulating drastic solutions. This is perhaps a good one to point out for my own sake, since I failed to think it through myself, and thus missed the obvious point: honeybees are livestock – animals domesticated, bred and cared for by people. ‘Wild’ honeybees, such as the hive we used to have in our front yard, are really feral – their ancestors escaped at some point from domesticated hives first brought over by English settlers 3 or 4 centuries ago.

Thus, the solution to hive collapse is not to be found, generally, in improving the natural environment, but in improving the applicable animal husbandry. And so it has happened: if hive collapse is reducing honeybee populations by up to 40%, then apiarists are going to breed more of them to make up for it – because bees are raised to pollinate crops and produce honey. As a bee farmer, I’m going to do what I can to have the right number of bees available for my business.

So we can pretty much stop panicking over hive collapse. Keep an eye on it, just don’t panic.

Finally, here’s a cool picture related to a recent blog post here:

While evil never sleeps, and there’s plenty wrong with the world, it serves no positive purpose to ignore real gains in the material basis for general human happiness. Real, concrete problems correctly understood can call forth real, concrete solutions that actually solve something – this chart is, I think, a monument to just such thinking. But focused problem-solving won’t bring the revolution any closer, and just might cause it to be postponed indefinitely – so it must be avoided and ridiculed at every step in the eyes of certain interests.

Friday. Link. Graph.

A. Good story here from Calah Alexander: Why you should let your kids take risks — especially when they might fail.

I’ve said that I’d never let my kids try a 10-day  (unsupervised European trip – ed) in college, because what if what could have been for me comes true for them? What if they get lost, or mugged? What if they make a poor decision, choose the wrong stop, and get stranded outside an airport in a blizzard? What if they need help and can’t find it?

That one major snafu on our 10-day happened at the end, when we missed our flight back to Rome because we got off the train at the wrong stop. The airport in Brussels wouldn’t let us spend the night inside, so we huddled against the building instead, trying to stay out of the snow. The only thing we had to eat was a backpack full of Cadbury chocolates that my roommate had gotten in London.

As a parent, this story is terrifying. But it’s one of my favorite memories. We made it back to Rome cold, tired, sick of Cadbury, but alive and newly aware of our own resilience (and of the importance of navigational skills).

Ironically, protecting our kids from the pain of failure is itself a failure. It’s failing to let them experience the life we know is coming at them, the life we can’t protect them from forever.

Real choices matter to the kid, are supported by the family, and have real consequences. Leave out any of those three things, and the choosing is an illusion.

One final thing to add: kids also need to see adults sticking with the results of their own decisions. If mommy and daddy are running away – from their responsibilities, their spouses, their own kids – it becomes pretty much a given that the kids will grow up into bitter, whiny irresponsible brats. We wouldn’t want that to happen.

B. Another chart showing something or other:

It’s from Pew, whose methodology is both widely respected and, to give them the benefit of the doubt, hopelessly flawed. In general, it’s unverified self-reporting by the sort of people willing to take polls, with no concern wasted on considering whether the responder is at all motivated to tell the truth. (1) The questions tacitly assume that the world really does fall into convenient polar positions on virtually every subject. Which would be really, really convenient – for pollsters. So don’t give Pew polls much weight, in general.

By happenstance, about the same time I saw this, I read a quip somewhere, to the effect that ‘Sir,’ ‘Ma’am,’ and ‘Thank you’ will get you farther than a bachelor’s degree. Had to wonder: what’s the overlap between those red bars above and the people who would nod at the folk wisdom of that quip? I’d quibble that a bachelor’s in something real PLUS the proper use of sir, ma’am and thank you is the real winning strategy. Nevertheless, with Pew, it is often not difficult to see which of the two either/or points of view they’re hammering the world into they want us to consider enlightened.

  1. I’ve wondered since the election about the reported 8% of blacks who voted for Trump. I believe the number was based on exit polls. Now, imagine, in the general atmosphere of the last election, if a black person would feel completely comfortable telling a stranger with a clipboard that he’d just voted for Trump. Not saying one way or the other about what the results show – just that the method used is ignoring a pretty big potential issue when it fails to account for social pressures, or just assumes they cancel out.

C. Something stupid for your possible amusement:

Something about rabbits and chickens, creatures with largely unearned reputations as pacifists, going all Wild West there’s-a-new-sheriff-in-town that cracks me up a little.  One struggles a little coming up with the proper Darwinian just-so story that explains such odd behavior away.  Why are the chickens not content to let the rabbits kill each other if they want to? Have they adopted them, somehow?

D. Apologies. This is plain stupid. This is what an adolescent sense of humor + <45 seconds of web searching + <10 minutes of MS Paint will get you:

Female lawmakers ‘bare arms’ in sleeveless attire to support new House dress code

Bear Arms