AI, AI, Oh.

Old MacBezos Had a Server Farm…

Free-associating there, a little. Pardon me.

Seems AI is on a lot of people’s minds these days. I, along with many, have my doubts:

My opinion: there are a lot of physical processes well suited to the very fancy automation that today is called AI. Such AI could put most underwriters, investment analysts, and hardware designers out of a job, like telegraph agents and buggy whip makers before them. I also think there’s an awful lot of the ‘we’re almost there!’ noise surrounding AI that has surrounded commercial nuclear fusion for my entire life – it’s always just around the corner, it’s always just a few technical details that need working out.

But it’s still not here. Both commercial nuclear fusion and AI, in the manner I am talking about, may come, and may even come soon. But I’m not holding my breath.

And this is not the sort of strong AI – you know, the Commander Data kind of AI – that gets human rights for robots discussions going. For philosophical reasons, I have my doubts human beings can create intellect (other than in the old fashioned baby-making way), no matter how much emergent properties handwavium is applied. Onward:

Here is the esteemed William Briggs, Statistician to the Stars, taking a shot at the “burgeoning digital afterlife industry”. Some geniuses have decided to one-up the standard Las Vegas psychic lounge routine, in which a performer uses a combination of research (“hot readings”) and clever dialogue (“cold readings”) to give the gullible the impression he is a mind reader. Their improvement: train computers to do it.

Hot readings are cheating. Cons peek in wallets, purses, and now on the Internet, and note relevant facts, such as addresses, birthdays, and various other bits of personal information. Cold readings are when the con probes the mark, trying many different lines of inquiry—“I see the letter ‘M’”—which rely on the mark providing relevant feedback. “I had a pet duck when I was four named Missy?” “That’s it! Missy misses you from Duck Heaven.” “You can see!”

You might not believe it, but cold reading is shockingly effective. I have used it many times in practicing mentalism (mental magic), all under the guise of “scientific psychological theory.” People want to believe in psychics, and they want to believe in science maybe even more.

Briggs notes that this is a form of the Turing Test, and points to a wonderful 1990 interview of Mortimer Adler by William F. Buckley, wherein they discuss the notions of intellect, brain, and human thought. Well worth the 10 minutes to watch.

In Machine Learning Disability, esteemed writer and theologian Brian Niemeier recounts, first, a story much like the one I reference in my tweet pasted in above: how an algorithm trained to do one thing – identify hit songs across many media in near real time – generates a hilarious false positive when an old pirated and memed clip goes viral.

Then it gets all serious. All this Big Data science you’ve been hearing of, and upon which the Google, Facebook and Amazon fortunes are built, is very, very iffy, no better than the Billboard algorithms that generated the false positive. Less obvious is that people are now using Big Data science to prove all sorts of things. In my gimlet-eyed take, doing research on giant datasets is a great way to bury your assumptions and biases so that they’re very hard to find. This, on top of the errors built into the sampling, the methodology and the algorithms themselves – errors upon errors upon errors.

As Niemeier points out, just having huge amounts of data is no guarantee you are doing good science; it in fact multiplies the opportunities to get it wrong. Briggs points out in his essay how easily people are fooled, and how doggedly they’ll stick to their beliefs even in the face of contrary evidence. You put these things together, and it’s pretty scary out there.

I’m always amazed that people who have worked around computers fall for any of this. Every geek with a shred of self-awareness (not a given by any means) has multiple stories about programs and hardware doing stupid things, how no one could have possibly imagined a user doing X, and so (best case) X crashes the system or (worst case) X propagates and goes unnoticed for years until the error is subtle, ingrained and permanent. Depending on the error, this could be bad. Big Data is a perfect environment for this latter result.

John C. Wright also gets in on the AI kerfuffle, referencing the Briggs post and adding his own inimitable comments.

Finally, Dust, a YouTube channel featuring science fiction short films, recently had an “AI Week” where the shorts were all based on AI themes. One film took a machine learning tool, fed it a bunch of Sci Fi classics and not-so-classics, and had it write a script, following the procedure used by short film competitions. And then they shot the film. The results are always painful, but occasionally painfully funny. The actors should get Oscar nominations in the new Lucas Memorial Best Straight Faces When Saying Really Stupid Dialogue category:


Book Review: William Torrey Harris – The Philosophy of Education, Lecture III

LECTURE III. January 21st, 1893. OPPOSITION BETWEEN PESTALOZZI AND HERBART AS EDUCATIONAL LEADERS. (found here. Lecture I review here, Lecture II here.)

This lecture is one run-on paragraph. I will break it up for convenience of discussion:

Pestalozzi laid great stress on sense-perception as the foundation of all school education. Herbart lays stress on the elaboration of sense-perception or rather upon the mental reaction against the impressions made on our senses. Thought goes back of the object to understand and explain its origin, how it became to be what it is, what purpose it is to serve. Thought sees objects in the perspective of their history. It studies causes and purposes.

The Herbart Harris refers to here is one Johann Friedrich Herbart (1776 – 1841), a German philosopher, psychologist and founder of the academic field of pedagogy. His principles of education are roughly Platonic, as he sees the fulfillment of the individual as only possible as a member of a civilization. Man is a political animal, after all, so no argument there on a general level. The trick here is implied in the phrase ‘productive citizen’ which Wikipedia uses to describe Herbart’s meaningful relationship between a man and his civilization. Does man derive his meaning and value from being a productive citizen? Or does the whole idea of a productive citizen depend on people having value and meaning prior to any production? In the first case, it might be logical and even merciful to cull any people – can’t really call them members of society in this context – who are not productive, since they cannot have meaningful lives without such production. Not that such an idea would occur to any Germans of that time…

Herbart is also said to be a follower of Pestalozzi, which supports my suspicion that Pestalozzi is more a Rorschach test than an actual teacher. My forays into Pestalozzi’s writings left me thinking he is nearly completely incoherent; when Fichte, a proto-Nazi, and Einstein, who was a student at a Pestalozzian school, both praise his methods, one has got to wonder if they are talking about the same thing. Herbart is said to differ from Pestalozzi in that Pestalozzi believed everything is built on sense perceptions, while Herbart believes cogitation on sense perceptions is the source of understanding and knowledge.

If that sounds a bit gobbly-goopy, it may be because it is. You get these men who want desperately to control how children learn – Fichte, Mann, Dewey, heck, Plato and on and on – and they start fighting over stuff that normal people, even normal philosophers, would roll their eyes at. Watch a kid, especially a really small kid, and you’ll see someone obsessed with sense perception to the point where they’ll stick crap they pick up off the ground into their mouths (this is a big learning experience, btw. We don’t stop doing this because we’re told to, but because we insisted on doing it). AND one will see little minds working overtime to figure out how stuff works. It’s not that sense perception or cogitation is more or less important, but rather that it’s absurd upon inspection to imagine that adults need to do anything to promote either. Adults just need to refrain from screwing it up, which seems beyond the reach of these gentlemen.

I’m not going any deeper into Herbart, who I first heard of from these lectures, for now – this is all from a skim of Wikipedia, for which I promise to feel bad about later. Onward:

Thus thought is not as the disciples of Pestalozzi hold, a continued and elevated sort of sense-perception, but rather a reaction against it. It is a discovery of the subordinate place held by objects in the world; they are seen to be mere steps in a process of manifestation, the manifestation of causal energies. A new perception is received into the mind by adjusting it to our previous knowledge; we explain it in terms of the old; we classify it, identify it; reconcile what is strange and unfamiliar in it with previous experience; we interpret the object and comprehend it; we translate the unknown into the known.

People learn by experiencing the world, thinking about what they experienced and trying as best they can to fit it in with everything else they know. Got it.

Does Harris suppose we can do anything about it? Does Harris imagine the process he (following Kant, more or less) describes ought to be somehow promoted or encouraged, let alone managed? That would be hubris-ridden nonsense, like believing the sun will not rise unless the shaman performs the correct rituals. You might as well try to teach kids’ hearts how to beat. But maybe that’s not where he’s going.

This process of adjusting, explaining, classifying, identifying, reconciling, interpreting and translating, is called apperception.

Yep, Kant. Apperception is one of those terms of art in Philosophy, pretty much meaning what Harris described above.

We must not only perceive, but we must apperceive; not only see and hear, but digest or assimilate what we hear and see. Herbart’s “apperception” is far more important for education than Pestalozzi’s “perception.” At first the memory was the chief faculty cultivated in education; then Pestalozzi reformed it by making the culture of sense-perception the chief aim; now with Herbart the chief aim would be apperception or the mental digestion of what is received by perception or memory.

Hmmm. How far back is the phrase “at first” meant to go? Certainly not all the way back to the Greeks, who before Socrates’s time had come to understand education as a function of friendship. They didn’t even write about how kids learned reading, writing and basic math, any more than they wrote about how you went to the market or walked down the street. Instead, they wrote about ephebia – schools for young men entering adulthood, where they spent 2 or 3 years training to be fit soldiers and learning how to be good citizens – why they should love their city-state and Greek culture in general. Then, the most promising and noble youths were taken under the wings of men of achievement, who acted as mentors, as described peripherally in Plato’s Symposium. (The occasional sexual aspects of these relationships, while real, are generally overstated and misunderstood.) An educated Greek would memorize Homer, but even that feat had the primary goal of immersion into Greek culture, especially understanding arete, the excellence toward which every Greek aspired and the measure by which they would be judged.

Or there’s St. Jerome’s 5th century advice to the noblewoman Laeta on how she should teach her daughter Paula to read. This is not memorization training, at least not essentially. The essential part is the sharpening of Paula’s wit.

More Enlightenment (sic) nonsense: Harris and his crowd thought they were the smart people, the first people to understand these things, with a right and duty to guide lesser individuals. They started with memorization; therefore, the whole project starts with memorization. That people have successfully educated their children for as long as there have been people is, if acknowledged at all, pooh-poohed: maybe, but they weren’t educating them correctly!

Illustrations of the power of apperception to strengthen perception: Cuvier could reconstruct the entire skeleton from a single bone; Agassiz the entire fish from one of its scales; Winckelman the entire statue from a fragment of the face; Lyell could see its history in a pebble; Asa Gray the history of a tree by a glance.

OK, I suppose, although I’d want a serious look at those reconstructions of Cuvier, Agassiz and Winckelman before conceding the point to quite that level. Be that as it may, I’m not sure such levels of expertise are the product of a particular kind of schooling. Not to give him too much credit, but Malcolm Gladwell in his book Blink describes a similar if not identical result, except that the process by which an expert reaches his conclusion is mostly not conscious or even strictly rational. That level of expertise seems to be learned, but not taught, and to require some innate talent. Herbart, at least, is a blank slater – he doesn’t believe in innate talents. It’s turtles of nurture all the way down.

Apperception adds to the perceived object its process of becoming. Noire has illustrated apperception by showing the two series of ideas called up by the perception of a piece of bread. First the regressive series dough, flour, rye; and the processes baking, kneading, grinding, threshing, harvesting, planting, &c. Each one of these has collateral series, as for example, planting has plowing, plow, oxen, yoke, furrow, harrowing, sowing seeds, covering it, etc. The second series is progressive: bread suggests its uses and functions; food, eating, digesting, organic tissue, life, nourishing strength, supply of heat, bodily labor, &c.

Ok, again. Yes, understanding something does mean putting it into a larger, more coherent, context.

The course of study in schools must be arranged so as to prepare the mind for quick apperception of what is studied. The Pestalozzian makes form, number, and language the elements of all knowledge. He unfortunately omits causal ideas, which are the chief factors of apperception; we build our series on causally. Accidental association satisfies only the simpleminded and empty-headed.

Sure. Perhaps the course of study could be comparatively brief encounters with a mentor, who guides and reviews, and comparatively large amounts of time to experience and process the world?

I suspect that’s not where Harris is going with this.

Next up: Lecture IV.

42

With a growing backlog of books to review (Polanyi: what a fraud! Oops, sorry, should have spoilered that!) and about 120 draft posts to clean up/finish/toss/whatever, I digress:

If you already know the answer to life, the universe and everything, such that your dearest, most heartfelt belief is that everything is explained and all ends known with certainty, all discussions either support the conclusion, or are irrelevant noise. The very idea that something, something real or even some line of thought, might not fit in with the already known and sacred conclusion is anathema. Those who insist on bringing up challenges to The Answer are to be silenced with extreme prejudice.

The only worthy intellectual exercise is explaining and expanding on just exactly how 42 is the answer. An intellectual exercises his mind and creativity in coming up with ever more ingenious and detailed ways of getting to 42. The new ways 42 is demonstrated to be the one and only answer are a great comfort to the true believer, and a shield and bulwark against any line of thinking that might cause unease.

This much should be obvious. A little more subtle: Since 42 is the answer beyond challenge, any way of getting to 42 is valid regardless of the method used. 42 is beyond logic, beyond criticism of any kind. It explains – it must explain! It explains everything! – all attempts to unseat it. While it might be possible to have esthetic arguments about how one way to get to 42 is more elegant or thorough or technically accurate, it would be bad form to criticize the logic or structure or heaven forbid, the truth of any explication, so long as it gets to 42 in the end.

From a purely pragmatic point of view, it might be helpful if some of the observations upon which the presentation rests (it won’t do to call it an argument) are true, or if some of the connections proposed (again, can’t invoke logic) are obvious and reasonably granted. When Polanyi and Marx point to the suffering of the urban poor when industry replaced rural life with slum life, they are pointing to something real. The emotional appeal is also real – what sort of heartless monster would be indifferent to the suffering of the children?

Breaker Boys

Suffering, especially suffering that primarily benefits somebody else, is nothing to be laughed at. Ignoring the suffering of others is a bad thing (under a moral code that recognizes right and wrong, of course). Yet identifying suffering is not the same thing as understanding what causes suffering. Even less is it an argument for whatever solution one might want to propose.

Ultimately, the truth of the observations, references and connections made as part of the presentation meant to demonstrate the truth of 42 does not reflect – is not allowed to reflect! – on whether 42 is in fact the answer. Quite the contrary: 42 becomes the filter used to determine which lines will be pursued and which will be ignored, and what tidbits of reality will be allowed to intrude. Marx and Polanyi have their defenders, rabid defenders, even, despite reality and history (you know, what happened, as opposed to mythical History, which makes things happen in the future). The Soviet Union didn’t quite pan out? Well, Polanyi was right about the Asian Financial Crisis! (Except for the part where it was a hiccup in the now 75-year-long planet-wide rise in economic productivity and subsequent drop in poverty and violence. Places where the likes of Polanyi are taken seriously being the exceptions, of course.) Workers of the world are still not revolting (they have, increasingly, nothing to lose but their vacation packages, hi-def flat screens, second automobiles and iPhones).

The existence of injustice in the world – and there’s plenty to go around, don’t get me wrong – does not in fact prove anything about whether 42 is the answer or not. Describing problems is cheap; solutions are not, and may not even be possible.

Your math proving 42 doesn’t add up to 42? No problem! You got the right answer, and that’s what counts.

Wet Enough for You? Philip Marlowe Edition

From the L.A. Times: Why L.A. is having such a wet winter after years of drought conditions. (Warning: they’ll let you look at their site for a while, then cut you off like a barkeep when closing time approaches.) Haven’t looked at the article yet, but I’ll fall off my chair if the answer doesn’t contain global warming/climate change.

But I have some ideas of my own. Historical data on seasonal rainfall totals for Los Angeles over the last 140+ years is readily available on the web. I took that data and did a little light analysis.

Average seasonal rainfall in L.A. is 14.70″. 60% of the time, rainfall is below average; 40% above. Percentage of seasons with:

  • less than 75% of average rain: 32.62
  • between 75% and 125%: 39.01
  • over 125%: 28.37

“Normal” rainfall covers a pretty wide range, one would reasonably suppose. Getting a lot or a little seems somewhat more likely than getting somewhere around average. This fits with my experience growing up in L.A. (18 year sample size, use with caution.)
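For anyone who wants to check my arithmetic, here is a minimal sketch of the kind of light analysis I mean. It assumes the seasonal totals have been saved to a hypothetical CSV file (la_rainfall.csv, with columns season and total_in – the file and column names are mine, not the data source’s):

```python
# Minimal sketch: summary stats on L.A. seasonal rainfall.
# Assumes a hypothetical CSV "la_rainfall.csv" with columns "season" and
# "total_in" (names are mine); any table of seasonal totals would do.
import pandas as pd

df = pd.read_csv("la_rainfall.csv")
avg = df["total_in"].mean()                        # comes out around 14.70 inches

pct_below = (df["total_in"] < avg).mean() * 100    # roughly 60% of seasons
pct_above = 100 - pct_below                        # roughly 40% of seasons

pct_low  = (df["total_in"] < 0.75 * avg).mean() * 100                    # < 75% of average
pct_mid  = df["total_in"].between(0.75 * avg, 1.25 * avg).mean() * 100   # 75%-125%
pct_high = (df["total_in"] > 1.25 * avg).mean() * 100                    # > 125% of average

# The "Variance from Avg" column in the tables below is just:
df["variance_from_avg"] = (df["total_in"] - avg).round(2)

print(f"average: {avg:.2f} in; below/above: {pct_below:.0f}% / {pct_above:.0f}%")
print(f"<75%: {pct_low:.1f}%, 75-125%: {pct_mid:.1f}%, >125%: {pct_high:.1f}%")
```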

The last 20 years look like:

Season (July 1 - June 30)    Total Rainfall, Inches    Variance from Avg
2017-2018     4.79    -9.91
2016-2017    19.00     4.30
2015-2016     9.65    -5.05
2014-2015     8.52    -6.18
2013-2014     6.08    -8.62
2012-2013     5.85    -8.85
2011-2012     8.69    -6.01
2010-2011    20.20     5.50
2009-2010    16.36     1.66
2008-2009     9.08    -5.62
2007-2008    13.53    -1.17
2006-2007     3.21   -11.49
2005-2006    13.19    -1.51
2004-2005    37.25    22.55
2003-2004     9.24    -5.46
2002-2003    16.49     1.79
2001-2002     4.42   -10.28
2000-2001    17.94     3.24
1999-2000    11.57    -3.13
1998-1999     9.09    -5.61

14 years out of 20 (70%) are under average; 6 above. Those 5 under-average years in a row stand out, as does the 9 out of 11 years under from 2005-2006 to 2015-2016. (The 2004-2005 season, at 22.55 inches above average, also stands out – a very wet year by L.A. standards.)

Wow, that does look bad. So does this stretch, with 7 out of 8 under:

1924-1925     7.38    -7.32
1923-1924     6.67    -8.03
1922-1923     9.59    -5.11
1921-1922    19.66     4.96
1920-1921    13.71    -0.99
1919-1920    12.52    -2.18
1918-1919     8.58    -6.12
1917-1918    13.86    -0.84

And this one, with 10 out of 11:

1954-1955    11.94    -2.76
1953-1954    11.99    -2.71
1952-1953     9.46    -5.24
1951-1952    26.21    11.51
1950-1951     8.21    -6.49
1949-1950    10.60    -4.10
1948-1949     7.99    -6.71
1947-1948     7.22    -7.48
1946-1947    12.61    -2.09
1945-1946    12.13    -2.57
1944-1945    11.58    -3.12

Or this, with 6 out of 7:

1964-1965    13.69    -1.01
1963-1964     7.93    -6.77
1962-1963     8.38    -6.32
1961-1962    18.79     4.09
1960-1961     4.85    -9.85
1959-1960     8.18    -6.52
1958-1959     5.58    -9.12

This last cherry-picked selection is also like the most recent years in that annual rainfall is not just under, but way under: more than 6″ under in 5 of its 6 below-average years. In the recent sample, 5 of the last 7 years prior to this year were more than 6″ under, and one more than 5″ under.

How often does L.A. get rainfall 6″ or more under average? About 22% of the time. So, hardly unusual, and, given a big enough sample (evidently not very big), you would expect to find the sorts of patterns we see here, even if, as it would be foolish to assume, every year’s rainfall is a completely independent event from the preceding year or years. It would make at least as much sense to think there are big, multi-year, multi-decade, multi-century and so on cycles – cycles that would take much larger samples of seasonal rainfall to detect. And those cycles could very well interact – cycles within cycles.

Problem is, I’ve got 141 years of data, so I can’t say. I suspect nobody can. Given the poorly understood cycles in the oceans and sun, and the effect of the moon on the oceans and atmosphere, which it would be reasonable to assume affect weather and rainfall, we’re far from discovering the causes of the little patterns cherry picking the data might present to us. They only tell us that rainfall seems to fall into patterns, where one dry year is often followed by one or two or even four or five more dry years. And sometimes not.
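If you want to check that 22% figure, or go hunting for runs of dry years yourself, something like this would do it (same hypothetical la_rainfall.csv and column names as the sketch above; the 6-inch threshold is the one from the preceding discussion):

```python
# Sketch: how often is a season 6" or more under average, and how long do
# runs of below-average seasons get? Uses the same hypothetical
# "la_rainfall.csv" (columns "season", "total_in") as the earlier sketch.
import pandas as pd

df = pd.read_csv("la_rainfall.csv").sort_values("season")
avg = df["total_in"].mean()

very_dry = (df["total_in"] <= avg - 6.0).mean() * 100
print(f"seasons 6+ inches under average: {very_dry:.0f}%")   # roughly 22%

# Longest run of consecutive below-average seasons.
longest = current = 0
for total in df["total_in"]:
    current = current + 1 if total < avg else 0
    longest = max(longest, current)
print(f"longest run of below-average seasons: {longest}")
```

None of this says anything about why the dry years clump, of course; it only confirms that they do.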

L.A. also gets stretches such as this:

1943-1944    19.21     4.51
1942-1943    19.17     4.47
1941-1942    11.18    -3.52
1940-1941    32.76    18.06
1939-1940    18.96     4.26
1938-1939    13.06    -1.64
1937-1938    23.43     8.73
1936-1937    22.41     7.71
1935-1936    12.07    -2.63
1934-1935    21.66     6.96

Not only are 7 out of 10 years wetter than average, the 3 years under average are only a little short. This would help explain why it is so often raining in Raymond Chandler stories set in L.A. – this sample of years overlaps most of his masterpieces.

It could be raining outside – hard to tell, and I don’t remember. Just work with me here, OK?

The L.A. Times sees something in this data-based Rorschach test; I see nothing much. Let’s see what the article says:

Nothing. The headline writer, editor and writer evidently don’t talk to each other, as the article as published makes no attempt to answer or even address the question implied in the headline. It’s just a glorified weather report cobbled together from interviews from over the last several months. Conclusion: things seem OK, water system wise, for now, but keep some panic on slow simmer, just in case. Something like that.

Oh, well. You win some, you lose some. That *thunk* you hear is me falling out of my chair.

A Cultivated Mind

Just kidding! I think!

Here I wrote about how I’m trying to help this admirably curious young man for whom I am RCIA sponsor on his intellectual journey. I’m no Socrates, but I do know a thing or two that this young man is not going to pick up at school, that would be helpful to him and, frankly, to the world. Any effort to get a little educated and shine a little light into the surrounding darkness seems a good thing to me.

I figure I’ll give him a single page every week or so when I see him, with the offer to talk it over whenever he’s available. Below is the content of the second page; you can see the first in the post linked above. We started off with a description of Truth and Knowledge. I figure the idea of a cultivated mind might be good next. We’ll wrap it up with a page on the Good and one on the Beautiful, and see where it goes from there.

Any thoughts/corrections appreciated.

A Cultivated Mind

A cultivated mind can consider an idea without accepting it.

What is meant by a “cultivated mind”?

Like a cultivated field:

  • Meant for things to be planted and grown in it
  • Weeded of bad habits and bad ideas
  • Cared for daily

A cultivated mind

  • is what a civilized and educated man strives to have.
  • is not snobby or elitist.
  • is what is required to honestly face the world.
  • is open to new ideas, but considers them rationally before accepting them.

How do you cultivate your mind?

Reexamine the ideas you find most attractive:

  • Have you accepted them because you like them, or because you examined them and believe them true?

Carefully review all popular ideas:

  • Have you accepted them because to reject them might make you unpopular?
  • Have you really examined them before accepting them?

Double your efforts to be fair when considering ideas you do not like:

  • Can you restate the idea in terms that people who accept it would recognize and agree with? If not, you are not able to truly consider the idea.

NOTE 1: To engage ideas, listen to and read what people who hold those ideas say, especially when you don’t like them or already disagree. Hear and understand what the idea really is before you can consider it. This takes discipline and time.

NOTE 2: This is a life-long project, always subject to revision. Guard against over-certainty; avoid exaggeration. Do not pretend to know what you do not know. Acknowledge that some things are difficult, and can only be known partially.

Follow the Dominican maxim: “Seldom affirm, never deny, always distinguish.”


“Forgive him, but as you can see, he has no brain.” “Turns out you don’t need one. Totally overrated!”

 


How’s the Weather? 2018/2019 Update

In a recent post here you could almost hear the disappointment in the climate scientists’ words as they recounted the terrible truth: that, despite what the models were saying would happen, snowpack in the mountains of the western U.S. had not declined at all over the last 35 years. This got me thinking about the weather, as weather over time equals climate. So I looked into the history of the Sierra snowpack. Interesting stuff.

From a September 2015 article in the LA Times

This chart accompanies a September 14th, 2015 article in the LA Times: Sierra Nevada snowpack is much worse than thought: a 500-year low.

When California Gov. Jerry Brown stood in a snowless Sierra Nevada meadow on April 1 and ordered unprecedented drought restrictions, it was the first time in 75 years that the area had lacked any sign of spring snow.

Now researchers say this year’s record-low snowpack may be far more historic — and ominous — than previously realized.

A couple of commendable things stand out from this chart, and I would like to commend them: first, it is a very pleasant surprise to see the data sources acknowledged. From 1930 on, people took direct measurements of the snowpack. The way they do it today is two-fold: sticking a long, hollow calibrated pole into the snow until they hit dirt. They can simply read the numbers off the side of the pole to see how deep it is. The snow tends to stick inside the pole, which they can then weigh to see how much water is in the snow. They take these measurements in the same places on the same dates over the years, to get as close to an apples to apples comparison as they can. Very elegant and scientifilicious.

They also have many automated stations that measure such things in a fancy automatic way. I assume they did it the first way back in 1930, and added the fancy way over time as the tech became available. Either way, we’re looking at actual snow more or less directly.

Today’s results from the automated system. From the California Data Exchange System.

Prior to 1930, there was no standard way of doing this, and, I’d suppose, prior to the early 1800s at the earliest, nobody really thought much about doing it. Instead, modern researchers looked at tree rings to get a ballpark idea.

I have some confidence in their proxy method simply because it passes the eye test: in that first chart, the patterns and extremes in the proxies look pretty much exactly like the patterns and extremes measured more directly over the past 85 years. But that’s just a gut feel; maybe there’s some unconscious forcing going on, some understatement of uncertainty, or some other factor making the pre-1930 estimates less like the post-1930 measurements. But it’s good solid science to own up to the different nature of the numbers. We’re not doing an apples to apples comparison, even if it looks pretty reasonable.

The second thing to commend the Times on: they included this chart, even though it in fact does not support the panic mongering in the headline. It would have been very easy to leave it out, and avoid the admittedly small chance readers might notice that, while the claim that the 2015 snowpack was the lowest in 500 years might conceivably be true, a similarly very low snowpack has been a pretty regular occurrence over that same 500 years. Further, they might notice those very low years have been soon followed by some really high years, without exception.

Ominous, we are told. What did happen? The 2015-2016 snowpack was around the average, 2016-2017 was near record deep, 2017-2018 also around average. So far, the 2018-2019 season, as the chart from the automatic system shows, is at 128% of the season-to-date average. What the chart doesn’t show: a huge storm is rolling in later this week, forecast to drop 5 to 8 feet of additional snow. This should put us well above the April 1 average (April 1 being around the date of the usual maximum snowpack), with 7 more weeks to go. Even without additional snow, this will be a good year. If we get a few more storms between now and April 1, it could be a very good year.

And I will predict, with high confidence, that, over the next 10 years, we’ll have one or two or maybe even 3 years well below average. Because, lacking a cause to change it, that’s been the pattern for centuries.

Just as the climate researchers mentioned in the previous post were disappointed Nature failed to comply with their models, the panic mongering of the Times 3.5 years ago has also proven inaccurate. In both cases, without even looking it up, we know what kind of answer we will be given: this is an inexplicable aberration! It will get hotter and drier! Eventually! Or it won’t, for reasons, none of which shall entail admitting our models are wrong.

It’s a truism in weather forecasting that simply predicting tomorrow’s weather will be a lot like today’s is a really accurate method. If those researchers from the last post and the Times had simply looked at their own data and predicted future snowpacks would be a lot like past ones, they’d have been pretty accurate, too.

Still waiting for the next mega-storm season, like 1861-1862. I should hope it never happens, as it would wipe out much of California’s water infrastructure and flood out millions of people. But, if it’s going to happen anyway, I’d just as soon get to see it. Or is that too morbid?

Great Flood of 1862. Via Wikipedia.

Feser and the Galileo Trap

Galileo showing the Doge of Venice how to use a telescope.

Edward Feser here tackles the irrationality on daily display via the Covington Catholic affair, and references a more detailed description of skepticism gone crazy:

As I have argued elsewhere, the attraction of political narratives that posit vast unseen conspiracies derives in part from the general tendency in modern intellectual life reflexively to suppose that “nothing is at it seems,” that reality is radically different from or even contrary to what common sense supposes it to be.  This is a misinterpretation and overgeneralization of certain cases in the history of modern science where common sense turned out to be wrong, and when applied to moral and social issues it yields variations on the “hermeneutics of suspicion” associated with thinkers like Nietzsche and Marx.  

Readers of this blog may recognize in Feser’s discussion above what I refer to as the Galileo Trap: the tendency, or perhaps pathology, that rejects all common experiences to embrace complex, difficult explanations that contradict them. In Galileo’s case, it happens that all common experiences tell you the world is stationary. It sure does not look or feel like we are moving at all. That the planet “really” is spinning at 1,000 miles an hour and whipping through space even faster proves, somehow, that all those gullible rubes relying on their lying eyes are wrong! Similar situations arise with relativity and motion in general, where the accepted science does not square with simple understanding based on common experience.

Historically, science sometimes presents explanations that, by accurately accommodating more esoteric observations, make common observations much more complicated to understand. Galileo notably failed to explain how life on the surface of a spinning globe spiraling through space could appear so bucolic. By offering a more elegant explanation of the motion of other planets, he made understanding the apparent and easily observed immobility of this one something requiring a complex account. But Galileo proved to be (more or less) correct; over the course of the next couple centuries, theories were developed and accepted that accounted for the apparent discrepancies between common appearance and reality.

We see an arrow arch through the air, slow, and fall; we see a feather fall more slowly than a rock. Somehow, we think Aristotle was stupid for failing to discover and apply Newton’s laws. While they wonderfully explain the extraordinarily difficult to see motion of the planets, they also require the introduction of a number of other factors to explain a falling leaf you can see out the kitchen window.

Thus, because in a few critical areas of hard science – or, as we say here, simply science – useful, elegant and more general explanations sometimes make common experiences harder to understand, it has become common to believe it is a feature of the universe that what’s *really* going on contradicts any simple understanding. Rather than the default position being ‘stick with the simple explanation unless forced by evidence to move off it,’ the general attitude seems to be that the real explanation is always hidden and contradicts appearances. This boils down to the belief that we cannot trust any common, simple, direct explanations. We cannot trust tradition or authority, which tend to formulate and pass on common sense explanations, even and especially in science!

Such pessimism, as Feser calls it, is bad enough in science. It is the disaster he describes in politics and culture. Simply, it matters if you expect hidden, subtle explanations and reject common experience. You become an easy mark for conspiracy theories.

I’ve commented here on how Hegel classifies the world into enlightened people who agree with him, and the ignorant, unwashed masses who don’t. He establishes, in other words, a cool kids’ club. Oh sure, some of the little people need logic and math and other such crutches, but the pure speculative philosophers epitomized by Hegel have transcended such weakness. Marx and Freud make effusive and near-exclusive use of this approach as well. Today’s ‘woke’ population is this same idea mass-produced for general consumption.

Since at least Luther in the West, the rhetorical tool of accusing your opponent of being unenlightened, evil or both in lieu of addressing the argument itself has come to dominate public discourse.

A clue to the real attraction of conspiracy theories, I would suggest, lies in the rhetoric of theorists themselves, which is filled with self-congratulatory descriptions of those who accept such theories as “willing to think,” “educated,” “independent-minded,” and so forth, and with invective against the “uninformed” and “unthinking” “sheeple” who “blindly follow authority.” The world of the conspiracy theorist is Manichean: either you are intelligent, well-informed, and honest, and therefore question all authority and received opinion; or you accept what popular opinion or an authority says and therefore must be stupid, dishonest, and ignorant. There is no third option.

Feser traces the roots:

Crude as this dichotomy is, anyone familiar with the intellectual and cultural history of the last several hundred years might hear in it at least an echo of the rhetoric of the Enlightenment, and of much of the philosophical and political thought that has followed in its wake. The core of the Enlightenment narrative – you might call it the “official story” – is that the Western world languished for centuries in a superstitious and authoritarian darkness, in thrall to a corrupt and power-hungry Church which stifled free inquiry. Then came Science, whose brave practitioners “spoke truth to power,” liberating us from the dead hand of ecclesiastical authority and exposing the falsity of its outmoded dogmas. Ever since, all has been progress, freedom, smiles and good cheer.

If being enlightened, having raised one’s consciousness or being woke meant anything positive, it would mean coming to grips with the appalling stupidity of the “official story”. It’s also amusing that science itself is under attack. It’s a social construct of the hegemony, used to oppress us, you see. Thus the snake eats its tail: this radical skepticism owes its appeal to the rare valid cases where science showed common experiences misleading, and yet now it attacks the science which is its only non-neurotic basis.