The Allure of Psychology

One thing a classic liberal education is supposed to do for you is make you suspicious of ideas you find emotionally attractive. Like the brutal honesty demanded by science, this suspicion is just assumed to rub off on students who work their way through all those tough classic texts. Just about every freshman finds Plato attractive. Like the young men who followed Socrates around just to see him lightly eviscerate some pompous fool, we thrilled to the discovery that pompous fools could be eviscerated, and craved more. Then we ran into Aristotle, and didn't like him much, because he, effectively, says: enough with the fun and games, time to stand your ground and say what you mean. Perhaps some of us got the idea that Socrates would have met his match, or more, in Aristotle (although I suspect they would have gotten along pretty well while having some doozies of arguments – Socrates must have been bored out of his skull with the Ions and Menos of the world).

Then, as you move on through the list, one precious idea after another gets beat up. You think you've reached the pinnacle of sophistication as an 18-year-old who has learned that the only thing he knows is that he knows nothing, only to have that self-refuting notion beat up by Aristotle's moderate realism. Then, perhaps, you see how Aristotelian metaphysics and epistemology lead to places you might not want to go, making Descartes very appealing. But Descartes leads to Hume, Berkeley, and, eventually, Kant, while Thomas leads to science. So now, maybe, Descartes is less appealing, and you take another look at Aristotle…

Thus, by a million paths, the serious student learns to take extra care before accepting ideas he finds attractive, precisely because he finds them attractive.

When I read Alice Miller's books 30+ years ago, I found her ideas very attractive, even though her Freudian approach was seriously off-putting. I like to say that Miller was a fallen-away Freudian who had not fallen away nearly far enough. What made her assertions more acceptable to me was how well they fit with evolutionary theory. On the fly as I read her books, I would substitute arguments from natural selection for hers, the unholy offspring of Freud and Rousseau.

Brutal honesty moment: in other words, I back-filled psychological theories I found emotionally appealing with evolutionary just-so stories. I get it. I suppose my purpose in writing this out, apart from trying to make it as clear as possible to myself, is to invite criticism.

What are these theories? I've mentioned them before, but never in great detail. Here, I'm paraphrasing them from 30-year-old memories and replacing Freudian turns of phrase with Darwinian language. They start out as truisms (I should hope) but turn dark:

  • For their very survival, children need to be part of a family/tribe (Extended family – I’m just going to use ‘tribe’ from here on out). In our evolutionary environment, no children lived to reproduce outside of a tribe. Therefore, intense selection pressure has been applied to children in favor of group membership and against running off or doing anything that might get them excluded. (1)
  • As sophisticated social mammals, children by instinct incorporate whatever behaviors are required for tribal membership into their base understanding of the world as foundational assumptions. (This is nothing more than saying 'tribalism' is a base state for humans and is pre-rational.) Kids don't think about these requirements (much); they just are.
  • We see it in the 'attachment-promoting behaviors' of babies and toddlers before they are even aware of what they're doing. As they grow, their behaviors become more complex and more specific to their particular environment. In this, people are only the most sophisticated among animals – your cat and dog do this as well.

All well and good, and I hope not too controversial. It should be noted that the reciprocal activity on the part of the adults – nurturing the tribe so that the child might survive – must also be a part of any environment of evolutionary adaptation. So parents and relatives – the tribe – can be expected to behave in such a way as to promote the survival of their children and their integration into the tribe. That's the model that seems to have developed and worked over the last half a million years or so, at least. There's nothing necessarily nice or pretty about it – it's just what works.

But what happens when, as in the modern world for the last couple hundred years in many places, many people survive despite having no tribe in the evolutionary sense? What happens when the brutal culling mechanisms of Darwinian survival get put on hold? Whatever else may happen, it is now possible on a scale and to a degree never known before for children to be neglected, abused, and traumatized – and still live, and perhaps even still reproduce.

  • Children who are neglected, abused and otherwise traumatized will, through the all but inexorable drive of instinct, incorporate their neglect, abuse and trauma into their pre-rational view of the world. Miller, in her decades of work as a psychoanalyst, noted a remarkable ability of her patients to excuse, ignore and explain away the objectively horrible things done to them – which is what one would expect, under the evolutionary explanation above. Aside: this, at least, seems to be obviously true from just routine interactions with people.
  • So we have a world increasingly filled with damaged children of all ages who, for basic survival reasons, have accepted their mistreatment at the hands of those who were supposed to love them, rationalized it, and who are highly motivated to accept it as part of their tribal membership fees.
  • It gets worse: as part of the emotional mechanisms that ‘worked’ insofar as they did in fact survive into adulthood, their experiences and coping mechanisms now become the template for how to raise any children they might have. Thus, Miller observed the pattern where someone who had been sexually abused as a child, even if they were not themselves an abuser, would routinely put their children into situations where they were likely to be abused. To do otherwise would be to confront the careful structure that allowed the parent to survive in the first place. Very painful and disorienting.
  • This is expressed in the title of one of her books: Thou Shalt Not Be Aware. To acknowledge one’s own mistreatment enough to protect one’s own child requires reopening some deep and carefully scarred over wounds. Rather than do that, we readily subject our kids to what we experienced, no matter how horrible.

Miller says that a sympathetic witness, someone who understood the trauma and abuse on some level and could tell the child that it wasn’t right, was all but essential to having any hope for healing. That witness provided a counter to all the stories the kid would otherwise make up in order to keep his membership in the tribe: that daddy didn’t mean it, that momma does really care, that what uncle did wasn’t so bad, and so on – all the little myths one runs into whenever one is drawn into other people’s dramas. Lacking such a witness, it seemed to Miller all but impossible to get past all the barricades built up by the child.

So, there you have it: I see – I think; that's the question – people reenacting in their children's lives whatever it was that traumatized them as children: people who were abandoned at 15 abandon their own kids as teens; children of divorce get divorced; sexually abused kids become libertines and expose their own kids to that life; and so on in a million ways.

There’s more, but that’s the general outline. I’m not just saying that miserable childhoods tend to make for miserable adults. I’m saying that miserable childhoods tend to all but compel people to make their own children miserable in the same way.

Anyway, make any sense? I readily acknowledge that Miller is a loon – I read most if not all of her books, and she gets into speculation that’s little better than palm reading in many places. And, as mentioned, even though she became one of Freud’s harshest critics, she still thought and spoke like a Freudian. Am I just experiencing confirmation bias when I seem to see this inflicting of one’s childhood trauma on one’s own children everywhere I look, or is it real?

  1. And, of course, tribes can't survive without children, either, so, at least by nature, tribes care about their children as passionately as children yearn to belong. Note that this doesn't imply any sort of lovey-dovey niceness: the ever-popular Yanomami tribesmen raise their sons to be good little homicidal sociopaths, because that approach has been proven to work. Similarly, their daughters are raised to seek the most murderous sociopaths as mates.
  2. And then expanded, by design, to school, with its artificial and arbitrary tribes of classrooms and grades. But Miller doesn't go there, as far as I can recall.

An Epidemic of Diagnosis Revisited, Sort Of

[Epigraph: a quotation from Walker Percy, Lost in the Cosmos]

I was pretty sure I’d written about this 2007 essay in the NYT, but evidently not, according to a search of blog archives. In it, a doctor points out something that should be obvious: as we add more healthcare to our lives, we are going to get more of what healthcare produces. He notes that what healthcare produces is not health, but diagnoses. Healthcare professionals are in the business, first and foremost, of telling us what is wrong with us. Only once we’ve been diagnosed can the wheels of the machinery of the healthcare system turn.

Improvements in healthcare have helped us generally live longer, healthier lives than at any point in history – though not as much as the real, if unromantic, big three: plenty of food, clean water and good sanitation. Get those three right – and avoid wars, but these things go together in practice – and people start generally living pretty long lives, long enough to die from old-people causes like cancer and heart disease instead of dysentery and rickets.

We have more healthcare, the largest, most expensive healthcare system in history, because we’ve demanded, in the literal and economic senses, more healthcare. We’re loath to admit the human body is simply too complex and fragile, too prone to break down and suffer injury and disease, for lifelong health to be anything other than a stroke of luck or a blessing, and even more loath to come to grips with the inevitability of death. Nope, we ignore all experience, and come to think vigorous good health is what we deserve, and that something is horribly wrong if we don’t get it. We demand, pounding our little feet and throwing haystacks of cash, that Science cure cancer, prevent diseases and in general fix any and every problem we have.

We demand: Tell me what’s wrong, then FIX IT! We’ve learned, from our car mechanic if nowhere else, that sometimes it’s hard to figure out what’s wrong. We’ve internalized the reality that, sometimes, a diagnosis is just an opinion among many possible opinions. We’ve learned to seek a second or third or fourth opinion until we find one we like.

We have not learned, it seems, to distrust the diagnostic process itself.

If this is true of mundane things like cars and the human body, it is much more true of the difficult and subtle human mind and soul. We desperately seek a diagnosis, which, when we get one we like, becomes our identity. In English, we even say: I *am* on the spectrum; I *am* depressed, or gay, or transgender, or oppressed, or a million other things. To attack our diagnosis is to attack us. We are our diagnoses.

Listen for it: so often, these days, we will hear a person's diagnosis within minutes of meeting them. "I'm a *blank*." As if *blank* is the key information – as if it unlocks and settles everything.

As the Percy quotation above shows, in his inimitable style, this need to have someone else tell us who we are is so pervasive and compelling that many millions of us turn to astrology (1) or worse just to get our diagnosis. What the diagnosis is, as he demonstrates, matters far less than that we have one. We are so lost in the cosmos that we'll cling to anything to stay afloat, anything to distract us from facing the brutal reality of who we really are.

If only this self-delusion were the worst of it. The real killer diagnoses are projection and protection. In the first, rather than engaging with those we disagree with or otherwise find unpleasant or in any way disconcerting, we diagnose them as having a disorder that prescribes summary dismissal: they are only saying that or behaving that way because they are *blank* – religious, conservative, liberal, stupid, evil, etc. (Note that it’s the prescription that makes this a diagnostic ritual: if you simply observe someone is promoting progressive ideas, say, BUT do not use that as an excuse to not engage with them, that’s just basic intellectual awareness.)

Even worse, as mentioned under point 3 in the last post, is the diagnosis of others that lets us off the hook: my child *IS* ADHD, therefore drugs and your acceptance are the only answers. No fair looking at his home life or parental and school expectations that make demands upon the kid that he’s not interested in fulfilling and cause him massive stress – nope, it’s drugs and acceptance, AND THAT’S FINAL! (2)

But of course it’s the gender dysphoria diagnosis that’s the real killer – literally, given the suicide and self-harm rates among those confused about their sexual identity. It’s the parents who love that diagnosis, because now everything is known, the prescription is complete, and every future problem is accounted for in advance. And no finger is ever allowed to be pointed at me, no matter how chaotic and emotionally abusive my serial polygamy or just plain rutting around is for my kid. Nope, we must bless and worship his confusion, so that even he embraces the insanity. Just so long as I’m off the hook.

Marxist social criticism at its best: destroying lives, sowing unhappiness and using the weakest and most damaged among us for its political ends. Not to mention promulgating some of the stupidest ideas known to man.

  1. Have to tell this off-topic story: in my early 20s, I somehow got dragged into a panel discussion of astrology as the token Thomist (I wish). So I did research – Thomas does address astrology in an interesting way – and gave my little spiel (and that's yet another story). Afterwards, at a party at St. John's (I lived in the neighborhood), I mentioned that I'd done a bunch of research on astrology for this talk and noted that Thomas doesn't summarily dismiss it. An attractive young woman I barely knew said: really? What sign am I? Somehow, I managed to deadpan a reply using appropriate astrological bafflegab, after which I answered: therefore, you're an Aquarius (or whatever). I guessed right. Her mind was blown. I managed to keep my straight face, knowing that I would be right about 8% of the time, and had gotten lucky. Not 'lucky' lucky – you know what I mean, get your mind out of the gutter!
  2. Mandatory disclaimer: there very well might be something to ADHD diagnoses, I don’t know, but there’s certainly something to ADHD over-diagnoses.

AI, AI, Oh.

Old MacBezos Had a Server Farm…

Free-associating there, a little. Pardon me.

Seems AI is on a lot of people’s minds these days. I, along with many, have my doubts:

My opinion: there are a lot of physical processes well suited to the very fancy automation that today is called AI. Such AI could put most underwriters, investment analysts, and hardware designers out of a job, like telegraph agents and buggy whip makers before them. I also think an awful lot of the 'we're almost there!' noise surrounding AI is the same noise that has surrounded commercial nuclear fusion for my entire life – it's always just around the corner, always just a few technical details that need working out.

But it’s still not here. Both commercial nuclear fusion and AI, in the manner I am talking about, may come, and may even come soon. But I’m not holding my breath.

And this is not the sort of strong AI – you know, the Commander Data kind of AI – that gets human rights for robots discussions going. For philosophical reasons, I have my doubts human beings can create intellect (other than in the old fashioned baby-making way), no matter how much emergent properties handwavium is applied. Onward:

Here is the esteemed William Briggs, Statistician to the Stars, taking a shot at the "burgeoning digital afterlife industry". Some geniuses have decided to one-up the standard Las Vegas psychic lounge routine – in which a performer, by a combination of research ("hot readings") and clever dialogue ("cold readings"), gives the gullible the impression he is a mind reader – by training computers to do it.

Hot readings are cheating. Cons peek in wallets, purses, and now on the Internet, and note relevant facts, such as addresses, birthdays, and various other bits of personal information. Cold readings are when the con probes the mark, trying many different lines of inquiry—“I see the letter ‘M’”—which rely on the mark providing relevant feedback. “I had a pet duck when I was four named Missy?” “That’s it! Missy misses you from Duck Heaven.” “You can see!”

You might not believe it, but cold reading is shockingly effective. I have used it many times in practicing mentalism (mental magic), all under the guise of “scientific psychological theory.” People want to believe in psychics, and they want to believe in science maybe even more.

Briggs notes that this is a form of the Turing Test, and points to a wonderful 1990 interview of Mortimer Adler by William F. Buckley, wherein they discuss the notions of intellect, brain, and human thought. Well worth the 10 minutes to watch.

In Machine Learning Disability, esteemed writer and theologian Brian Niemeier recounts, first, a story much like the one I reference in my tweet above: how an algorithm trained to do one thing – identify hit songs across many media in near real time – generates a hilarious false positive when an old pirated and memed clip goes viral.

Then it gets all serious. All this Big Data science you've been hearing of, and upon which the Google, Facebook and Amazon fortunes are built, is very, very iffy – no better than the Billboard algorithms that generated the false positive. Less obvious is the way people now use Big Data science to prove all sorts of things. In my gimlet-eyed take, doing research on giant datasets is a great way to bury your assumptions and biases so that they're very hard to find. This, on top of the errors built into the sampling, the methodology and the algorithms themselves – errors upon errors upon errors.

As Niemeier points out, just having huge amounts of data is no guarantee you are doing good science, and in fact multiplies the opportunities to get it wrong. Briggs points out in his essay how easily people are fooled, and how doggedly they'll stick to their beliefs even in the face of contrary evidence. Put these things together, and it's pretty scary out there.
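Not Briggs' or Niemeier's example, but here's a toy sketch of the mechanism, assuming nothing beyond the Python standard library: generate pure noise, test enough "features" against an outcome, and some of them will look impressively predictive. More data columns means more chances to fool yourself.

```python
# Toy illustration: spurious "findings" from sheer volume of features.
# Pure noise in, "significant" correlation out.
# (statistics.correlation requires Python 3.10+.)
import random
import statistics

random.seed(42)

N_SAMPLES, N_FEATURES = 100, 10_000
outcome = [random.gauss(0, 1) for _ in range(N_SAMPLES)]

# Best absolute correlation between the outcome and 10,000 random,
# utterly meaningless "features".
best = max(
    abs(statistics.correlation([random.gauss(0, 1) for _ in range(N_SAMPLES)], outcome))
    for _ in range(N_FEATURES)
)

# Typically lands around 0.4: wildly "significant" by a naive
# single-hypothesis test, and pure noise.
print(f"best spurious |correlation| out of {N_FEATURES:,} features: {best:.2f}")
```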

I'm always amazed that people who have worked around computers fall for any of this. Every geek with a shred of self-awareness (not a given by any means) has multiple stories about programs and hardware doing stupid things: how no one could have possibly imagined a user doing X, and so (best case) X crashes the system or (worst case) X propagates and goes unnoticed for years until the error is subtle, ingrained and permanent. Depending on the error, this could be bad. Big Data is a perfect environment for this latter result.

John C. Wright also gets in on the AI kerfuffle, referencing the Briggs post and adding his own inimitable comments.

Finally, Dust, a Youtube channel featuring science fiction short films, recently had an "AI Week" where the shorts were all based on AI themes. One film took a machine learning tool, fed it a bunch of Sci Fi classics and not-so-classics, and had it write a script, following the procedure used by short film competitions. And then shot the film. The results are always painful, but occasionally painfully funny. The actors should get Oscar nominations in the new Lucas Memorial Best Straight Faces When Saying Really Stupid Dialogue category.

Wet Enough for You? Philip Marlowe Edition

From the L.A. Times: Why L.A. is having such a wet winter after years of drought conditions. (Warning: they’ll let you look at their site for a while, then cut you off like a barkeep when closing time approaches.) Haven’t looked at the article yet, but I’ll fall off my chair if the answer doesn’t contain global warming/climate change.

But I have some ideas of my own. Historical data on seasonal rainfall totals for Los Angeles over the last 140+ years is readily available on the web. I took that data, and did a little light analysis.

Average seasonal rainfall in L.A. is 14.70″. 60% of the time, rainfall is below average; 40% above. Percentage of seasons with:

  • less than 75% of average rain: 32.62%
  • between 75% and 125% of average: 39.01%
  • more than 125% of average: 28.37%

"Normal" rainfall covers a pretty wide range, one would reasonably suppose. Getting a lot or a little seems somewhat more likely than getting somewhere around average. This fits with my experience growing up in L.A. (18-year sample size, use with caution.)

The last 20 years look like:

Season (July 1-June 30)    Total Rainfall, Inches    Variance from Avg
2017-2018                   4.79                     -9.91
2016-2017                  19.00                      4.3
2015-2016                   9.65                     -5.05
2014-2015                   8.52                     -6.18
2013-2014                   6.08                     -8.62
2012-2013                   5.85                     -8.85
2011-2012                   8.69                     -6.01
2010-2011                  20.20                      5.5
2009-2010                  16.36                      1.66
2008-2009                   9.08                     -5.62
2007-2008                  13.53                     -1.17
2006-2007                   3.21                    -11.49
2005-2006                  13.19                     -1.51
2004-2005                  37.25                     22.55
2003-2004                   9.24                     -5.46
2002-2003                  16.49                      1.79
2001-2002                   4.42                    -10.28
2000-2001                  17.94                      3.24
1999-2000                  11.57                     -3.13
1998-1999                   9.09                     -5.61

14 years out of 20 (70%) are under average; 6 above. That run of five dry years in a row, 2011-2012 through 2015-2016, stands out, as does the 9 out of 11 years under from 2005-2006 to 2015-2016. (The 2004-2005 season – 37.25 inches, 22.55 above average – also stands out: a very wet year by L.A. standards.)
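These counts are easy to check. Here's a minimal sketch in Python, using the 20 seasons tabulated above (oldest first); pointed at the full 141-season record, the same functions should reproduce the 60/40 split and the band percentages quoted earlier.

```python
# Seasonal totals, inches, 1998-1999 through 2017-2018, from the table above.
totals = [9.09, 11.57, 17.94, 4.42, 16.49, 9.24, 37.25, 13.19, 3.21, 13.53,
          9.08, 16.36, 20.20, 8.69, 5.85, 6.08, 8.52, 9.65, 19.00, 4.79]

LA_SEASON_AVG = 14.70  # inches; long-term seasonal average

def band_shares(totals, avg):
    """Percent of seasons under 75%, between 75% and 125%, and over 125% of average."""
    n = len(totals)
    low = sum(1 for t in totals if t < 0.75 * avg)
    high = sum(1 for t in totals if t > 1.25 * avg)
    return 100 * low / n, 100 * (n - low - high) / n, 100 * high / n

below = sum(1 for t in totals if t < LA_SEASON_AVG)
print(f"{below} of {len(totals)} seasons under average")  # 14 of 20
low, mid, high = band_shares(totals, LA_SEASON_AVG)
print(f"under 75% / 75-125% / over 125% of avg: {low:.0f}% / {mid:.0f}% / {high:.0f}%")
```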

Wow, that does look bad. So does this stretch, with 7 out of 8 under:

Season (July 1-June 30)    Total Rainfall, Inches    Variance from Avg
1924-1925                   7.38                     -7.32
1923-1924                   6.67                     -8.03
1922-1923                   9.59                     -5.11
1921-1922                  19.66                      4.96
1920-1921                  13.71                     -0.99
1919-1920                  12.52                     -2.18
1918-1919                   8.58                     -6.12
1917-1918                  13.86                     -0.84

And this one, with 10 out of 11:

Season (July 1-June 30)    Total Rainfall, Inches    Variance from Avg
1954-1955                  11.94                     -2.76
1953-1954                  11.99                     -2.71
1952-1953                   9.46                     -5.24
1951-1952                  26.21                     11.51
1950-1951                   8.21                     -6.49
1949-1950                  10.60                     -4.1
1948-1949                   7.99                     -6.71
1947-1948                   7.22                     -7.48
1946-1947                  12.61                     -2.09
1945-1946                  12.13                     -2.57
1944-1945                  11.58                     -3.12

Or this, with 6 out of 7:

Season (July 1-June 30)    Total Rainfall, Inches    Variance from Avg
1964-1965                  13.69                     -1.01
1963-1964                   7.93                     -6.77
1962-1963                   8.38                     -6.32
1961-1962                  18.79                      4.09
1960-1961                   4.85                     -9.85
1959-1960                   8.18                     -6.52
1958-1959                   5.58                     -9.12

This last cherry-picked selection is also like the most recent years in that annual rainfall is not just under, but way under: more than 6″ under average in 5 of its 6 below-average years. In the recent sample, 5 of the 7 years prior to this one were more than 6″ under, and one more was over 5″ under.

How often does L.A. get rainfall 6″ or more under average? About 22% of the time. So, hardly unusual, and, given a big enough sample (evidently not very big), you would expect to find the sorts of patterns we see here, even if, as it would be foolish to assume, every year’s rainfall is a completely independent event from the preceding year or years. It would make at least as much sense to think there are big, multi-year, multi-decade, multi-century and so on cycles – cycles that would take much larger samples of seasonal rainfall to detect. And those cycles could very well interact – cycles within cycles.
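Continuing the sketch above (same `totals` list and `LA_SEASON_AVG`), counting deep shortfalls and runs of consecutive dry years takes only a few more lines:

```python
def deep_shortfall_share(totals, avg, shortfall=6.0):
    """Fraction of seasons at least `shortfall` inches under average."""
    return sum(1 for t in totals if avg - t >= shortfall) / len(totals)

def dry_run_lengths(totals, avg):
    """Lengths of consecutive runs of below-average seasons (list is oldest first)."""
    runs, current = [], 0
    for t in totals:
        if t < avg:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

# On this drought-heavy 20-season sample the share prints 35%; the full
# 141-season record gives the ~22% quoted above.
print(f"{100 * deep_shortfall_share(totals, LA_SEASON_AVG):.0f}% of seasons 6 inches or more under")
print("dry-run lengths:", dry_run_lengths(totals, LA_SEASON_AVG))  # [2, 1, 1, 4, 5, 1]
```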

Problem is, I’ve got 141 years of data, so I can’t say. I suspect nobody can. Given the poorly understood cycles in the oceans and sun, and the effect of the moon on the oceans and atmosphere, which it would be reasonable to assume affect weather and rainfall, we’re far from discovering the causes of the little patterns cherry picking the data might present to us. They only tell us that rainfall seems to fall into patterns, where one dry year is often followed by one or two or even four or five more dry years. And sometimes not.

L.A. also gets stretches such as this:

Season (July 1-June 30)    Total Rainfall, Inches    Variance from Avg
1943-1944                  19.21                      4.51
1942-1943                  19.17                      4.47
1941-1942                  11.18                     -3.52
1940-1941                  32.76                     18.06
1939-1940                  18.96                      4.26
1938-1939                  13.06                     -1.64
1937-1938                  23.43                      8.73
1936-1937                  22.41                      7.71
1935-1936                  12.07                     -2.63
1934-1935                  21.66                      6.96

Not only are 7 out of 10 years wetter than average, the 3 years under average are only a little short. This would help explain why it is so often raining in Raymond Chandler stories set in L.A. – this sample of years overlaps most of his masterpieces.

[Image: Philip Marlowe]
It could be raining outside – hard to tell, and I don’t remember. Just work with me here, OK?

The L.A. Times sees something in this data-based Rorschach test; I see nothing much. Let’s see what the article says:

Nothing. The headline writer, editor and writer evidently don't talk to each other, as the article as published makes no attempt to answer or even address the question implied in the headline. It's just a glorified weather report cobbled together from interviews over the last several months. Conclusion: things seem OK, water system wise, for now, but keep some panic on slow simmer, just in case. Something like that.

Oh, well. You win some, you lose some. That *thunk* you hear is me falling out of my chair.

A Cultivated Mind

Just kidding! I think!

Here I wrote about how I'm trying to help this admirably curious young man, for whom I am RCIA sponsor, on his intellectual journey. I'm no Socrates, but I do know a thing or two that this young man is not going to pick up at school, and that would be helpful to him and, frankly, to the world. Any effort to get a little educated and shine a little light into the surrounding darkness seems a good thing to me.

I figure I’ll give him a single page every week or so when I see him, with the offer to talk it over whenever he’s available. Below is the content of the second page; you can see the first in the post linked above. We started off with a description of Truth and Knowledge. I figure the idea of a cultivated mind might be good next. We’ll wrap it up with a page on the Good and one on the Beautiful, and see where it goes from there.

Any thoughts/corrections appreciated.

A Cultivated Mind

A cultivated mind can consider an idea without accepting it.

What is meant by a “cultivated mind”?

Like a cultivated field:

  • is meant for things to be planted and grown in it
  • is weeded of bad habits and bad ideas
  • is cared for daily

A cultivated mind

  • is what a civilized and educated man strives to have.
  • is not snobby or elitist.
  • is what is required to honestly face the world.
  • is open to new ideas, but considers them rationally before accepting them.

How do you cultivate your mind?

Reexamine the ideas you find most attractive:

  • Have you accepted them because you like them, or because you examined them and believe them true?

Carefully review all popular ideas:

  • Have you accepted them because to reject them might make you unpopular?
  • Have you really examined them before accepting them?

Double your efforts to be fair when considering ideas you do not like:

  • Can you restate the idea in terms that people who accept it would recognize and agree with? If not, you are not able to truly consider the idea.

NOTE 1: To engage ideas, listen to and read what people who hold those ideas say, especially when you don't like them or already disagree. Hear and understand what the idea really is before you try to consider it. This takes discipline and time.

NOTE 2: This is a life-long project, always subject to revision. Guard against over-certainty; avoid exaggeration. Do not pretend to know what you do not know. Acknowledge that some things are difficult, and can only be known partially.

Follow the Dominican maxim: “Seldom affirm, never deny, always distinguish.”

[Image: B.O.B. from Monsters vs. Aliens]

“Forgive him, but as you can see, he has no brain.” “Turns out you don’t need one. Totally overrated!”

 


How’s the Weather? 2018/2019 Update

In a recent post here you could almost hear the disappointment in the climate scientists’ words as they recounted the terrible truth: that, despite what the models were saying would happen, snowpack in the mountains of the western U.S. had not declined at all over the last 35 years. This got me thinking about the weather, as weather over time equals climate. So I looked into the history of the Sierra snowpack. Interesting stuff.

From a September 2015 article from the LA Times

This chart accompanies a September 14th, 2015 article in the LA Times: Sierra Nevada snowpack is much worse than thought: a 500-year low.

When California Gov. Jerry Brown stood in a snowless Sierra Nevada meadow on April 1 and ordered unprecedented drought restrictions, it was the first time in 75 years that the area had lacked any sign of spring snow.

Now researchers say this year’s record-low snowpack may be far more historic — and ominous — than previously realized.

A couple of commendable things stand out from this chart. First, it is a very pleasant surprise to see the data sources acknowledged. From 1930 on, people took direct measurements of the snowpack. The way they do it today is twofold. The manual way: stick a long, hollow, calibrated pole into the snow until you hit dirt. The numbers on the side of the pole give the depth. The snow tends to stick inside the pole, which can then be weighed to see how much water is in the snow. They take these measurements in the same places on the same dates over the years, to get as close to an apples-to-apples comparison as they can. Very elegant and scientifilicious.
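The arithmetic behind the weighing step is simple: a core of snow melts down to a depth of water equal to its mass divided by (water density × the tube's cross-section area). A minimal sketch of that arithmetic; the tube area in the example is an assumed illustration, not the real gauge's spec:

```python
# Snow water equivalent (SWE): the depth of liquid water a weighed snow
# core represents. depth = mass / (density x cross-section area).

WATER_DENSITY_G_PER_CM3 = 1.0  # grams per cubic centimeter

def snow_water_equivalent_cm(core_mass_g: float, tube_area_cm2: float) -> float:
    """Depth of liquid water (cm) represented by a weighed snow core."""
    return core_mass_g / (WATER_DENSITY_G_PER_CM3 * tube_area_cm2)

# Hypothetical example: a 960 g core from a tube with a 10 cm^2 cutter
# works out to 96 cm (about 38 inches) of water.
print(snow_water_equivalent_cm(960.0, 10.0))
```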

They also have many automated stations that measure such things in a fancy automatic way. I assume they did it the first way back in 1930, and added the fancy way over time as the tech became available. Either way, we're looking at actual snow more or less directly.

Today’s results from the automated system. From the California Data Exchange System.

Prior to 1930, there was no standard way of doing this, and, I'd suppose, prior to the early 1800s at the earliest, nobody really thought much about doing it. Instead, modern researchers looked at tree rings to get a ballpark idea.

I have some confidence in their proxy method simply because it passes the eye test: in that first chart, the patterns and extremes in the proxies look pretty much exactly like the patterns and extremes measured more directly over the past 85 years. But that's just a gut feel; maybe there's some unconscious forcing going on, some understatement of uncertainty, or some other factor making the pre-1930 estimates less like the post-1930 measurements. But it's good solid science to own up to the different nature of the numbers. We're not doing an apples-to-apples comparison, even if it looks pretty reasonable.

The second thing to commend the Times on: they included this chart, even though it in fact does not support the panic mongering in the headline. It would have been very easy to leave it out, and avoid the admittedly small chance readers might notice that, while the claim that the 2015 snowpack was the lowest in 500 years might conceivably be true, a similarly very low snowpack has been a pretty regular occurrence over those same 500 years. Further, they might notice that those very low years have been soon followed by some really high years, without exception.

Ominous, we are told. What did happen? The 2015-2016 snowpack was around average, 2016-2017 was near record deep, and 2017-2018 was also around average. So far, the 2018-2019 season, as the chart from the automated system shows, is at 128% of the season-to-date average. What the chart doesn't show: a huge storm is rolling in later this week, forecast to drop 5 to 8 feet of additional snow. This should put us well above the April 1 average – April 1 being around the usual date of maximum snowpack – with 7 more weeks to go. Even without additional snow, this will be a good year. If we get a few more storms between now and April 1, it could be a very good year.

And I will predict, with high confidence, that, over the next 10 years, we’ll have one or two or maybe even 3 years well below average. Because, lacking a cause to change it, that’s been the pattern for centuries.

Just as the climate researchers mentioned in the previous post were disappointed Nature failed to comply with their models, the panic mongering of the Times 3.5 years ago has also proven inaccurate. In both cases, without even looking it up, we know what kind of answer we will be given: this is an inexplicable aberration! It will get hotter and drier! Eventually! Or it won't, for reasons, none of which shall entail admitting our models are wrong.

It’s a truism in weather forecasting that simply predicting tomorrow’s weather will be a lot like today’s is a really accurate method. If those researchers from the last post and the Times had simply looked at their own data and predicted future snowpacks would be a lot like past ones, they’d have been pretty accurate, too.

Still waiting for the next mega-storm season, like 1861-1862. I should hope it never happens, as it would wipe out much of California’s water infrastructure and flood out millions of people. But, if it’s going to happen anyway, I’d just as soon get to see it. Or is that too morbid?

Great Flood of 1862: K Street, inundation of the State Capitol, City of Sacramento. Via Wikipedia.

Feser and the Galileo Trap

Galileo showing the Doge of Venice how to use a telescope.

Edward Feser here tackles the irrationality on daily display via the Covington Catholic affair, and references a more detailed description of skepticism gone crazy:

As I have argued elsewhere, the attraction of political narratives that posit vast unseen conspiracies derives in part from the general tendency in modern intellectual life reflexively to suppose that “nothing is at it seems,” that reality is radically different from or even contrary to what common sense supposes it to be.  This is a misinterpretation and overgeneralization of certain cases in the history of modern science where common sense turned out to be wrong, and when applied to moral and social issues it yields variations on the “hermeneutics of suspicion” associated with thinkers like Nietzsche and Marx.  

Readers of this blog may recognize in Feser's discussion above what I refer to as the Galileo Trap: the tendency, or perhaps pathology, to reject all common experience in order to embrace complex, difficult explanations that contradict it. In Galileo's case, it happens that all common experience tells you the world is stationary. It sure does not look or feel like we are moving at all. That the planet "really" is spinning at 1,000 miles an hour and whipping through space even faster proves, somehow, that all those gullible rubes relying on their lying eyes are wrong! Similar situations arise with relativity and motion in general, where the accepted science does not square with simple understanding based on common experience.

Historically, science sometimes presents explanations that, by accurately accommodating more esoteric observations, make common observations much more complicated to understand. Galileo notably failed to explain how life on the surface of a spinning globe spiraling through space could appear so bucolic. By offering a more elegant explanation of the motion of other planets, he made understanding the apparent and easily observed immobility of this one something requiring a complex account. But Galileo proved to be (more or less) correct; over the course of the next couple centuries, theories were developed and accepted that accounted for the apparent discrepancies between common appearance and reality.

We see an arrow arc through the air, slow, and fall; we see a feather fall more slowly than a rock. Somehow, we think Aristotle was stupid for failing to discover and apply Newton's laws. While those laws wonderfully explain the extraordinarily-hard-to-observe motion of the planets, they also require the introduction of a number of other factors to explain a falling leaf you can see out the kitchen window.

Thus, because in a few critical areas of hard science – or, as we say here, simply science – useful, elegant and more general explanations sometimes make common experiences harder to understand, it has become common to believe it is a feature of the universe that what's *really* going on contradicts any simple understanding. Rather than the default position being 'stick with the simple explanation unless forced by evidence to move off it,' the general attitude seems to be that the real explanation is always hidden and contradicts appearances. This boils down to the belief that we cannot trust any common, simple, direct explanations. We cannot trust tradition or authority, which tend to formulate and pass on common-sense explanations, even and especially in science!

Such pessimism, as Feser calls it, is bad enough in science. It is the disaster he describes in politics and culture. Simply put: it matters if you expect hidden, subtle explanations and reject common experience. You become an easy mark for conspiracy theories.

I've commented here on how Hegel classifies the world into enlightened people who agree with him, and the ignorant, unwashed masses who don't. He establishes, in other words, a cool kids' club. Oh sure, some of the little people need logic and math and other such crutches, but the pure speculative philosophers epitomized by Hegel have transcended such weakness. Marx and Freud make effusive and near-exclusive use of this approach as well. Today's 'woke' population is this same idea mass-produced for general consumption.

Since at least Luther in the West, the rhetorical tool of accusing your opponent of being unenlightened, evil or both in lieu of addressing the argument itself has come to dominate public discourse.

A clue to the real attraction of conspiracy theories, I would suggest, lies in the rhetoric of theorists themselves, which is filled with self-congratulatory descriptions of those who accept such theories as “willing to think,” “educated,” “independent-minded,” and so forth, and with invective against the “uninformed” and “unthinking” “sheeple” who “blindly follow authority.” The world of the conspiracy theorist is Manichean: either you are intelligent, well-informed, and honest, and therefore question all authority and received opinion; or you accept what popular opinion or an authority says and therefore must be stupid, dishonest, and ignorant. There is no third option.

Feser traces the roots:

Crude as this dichotomy is, anyone familiar with the intellectual and cultural history of the last several hundred years might hear in it at least an echo of the rhetoric of the Enlightenment, and of much of the philosophical and political thought that has followed in its wake. The core of the Enlightenment narrative – you might call it the “official story” – is that the Western world languished for centuries in a superstitious and authoritarian darkness, in thrall to a corrupt and power-hungry Church which stifled free inquiry. Then came Science, whose brave practitioners “spoke truth to power,” liberating us from the dead hand of ecclesiastical authority and exposing the falsity of its outmoded dogmas. Ever since, all has been progress, freedom, smiles and good cheer.

If being enlightened, having raised one’s consciousness or being woke meant anything positive, it would mean coming to grips with the appalling stupidity of the “official story”. It’s also amusing that science itself is under attack. It’s a social construct of the hegemony, used to oppress us, you see. Thus the snake eats its tail: this radical skepticism owes its appeal to the rare valid cases where science showed common experiences misleading, and yet now it attacks the science which is its only non-neurotic basis.