AI, AI, Oh.

Old MacBezos Had a Server Farm…

Free-associating there, a little. Pardon me.

Seems AI is on a lot of people’s minds these days. I, along with many, have my doubts:

My opinion: there are a lot of physical processes well suited to the very fancy automation that today is called AI. Such AI could put most underwriters, investment analysts, and hardware designers out of a job, like telegraph agents and buggy whip makers before them. I also think there’s an awful lot of the ‘we’re almost there!’ noise surrounding AI that has surrounded commercial nuclear fusion for my entire life – it’s always just around the corner, it’s always just a few technical details that need working out.

But it’s still not here. Both commercial nuclear fusion and AI, in the manner I am talking about, may come, and may even come soon. But I’m not holding my breath.

And this is not the sort of strong AI – you know, the Commander Data kind of AI – that gets human rights for robots discussions going. For philosophical reasons, I have my doubts human beings can create intellect (other than in the old fashioned baby-making way), no matter how much emergent properties handwavium is applied. Onward:

Here is the esteemed William Briggs, Statistician to the Stars, taking a shot at the “burgeoning digital afterlife industry”. Some geniuses have decided to one-up the standard Las Vegas psychic lounge routine – where, by a combination of research (“hot readings”) and clever dialogue (“cold readings”), a performer gives the gullible the impression he is a mind reader – by training computers to do it.

Hot readings are cheating. Cons peek in wallets, purses, and now on the Internet, and note relevant facts, such as addresses, birthdays, and various other bits of personal information. Cold readings are when the con probes the mark, trying many different lines of inquiry—“I see the letter ‘M’”—which rely on the mark providing relevant feedback. “I had a pet duck when I was four named Missy?” “That’s it! Missy misses you from Duck Heaven.” “You can see!”

You might not believe it, but cold reading is shockingly effective. I have used it many times in practicing mentalism (mental magic), all under the guise of “scientific psychological theory.” People want to believe in psychics, and they want to believe in science maybe even more.

Briggs notes that this is a form of the Turing Test, and points to a wonderful 1990 interview of Mortimer Adler by William F. Buckley, wherein they discuss the notions of intellect, brain, and human thought. Well worth the 10 minutes to watch.

In Machine Learning Disability, esteemed writer and theologian Brian Niemeier recounts, first, a story much like the one I reference in my tweet pasted in above: how an algorithm trained to do one thing – identify hit songs across many media in near real time – generates a hilarious false positive when an old pirated and memed clip goes viral.

Then it gets all serious. All this Big Data science you’ve been hearing of, and upon which the Google, Facebook and Amazon fortunes are built, is very, very iffy – no better than the Billboard algorithms that generated the false positive. Less obvious are the ways people now use Big Data science to prove all sorts of things. In my gimlet-eyed take, doing research on giant datasets is a great way to bury your assumptions and biases so that they’re very hard to find. This, on top of the errors built into the sampling, the methodology and the algorithms themselves – errors upon errors upon errors.

As Niemeier points out, just having huge amounts of data is no guarantee you are doing good science; it in fact multiplies the opportunities to get it wrong. Briggs points out in his essay how easily people are fooled, and how doggedly they’ll stick to their beliefs even in the face of contrary evidence. Put these things together, and it’s pretty scary out there.

I’m always amazed that people who have worked around computers fall for any of this. Every geek with a shred of self-awareness (not a given by any means) has multiple stories about programs and hardware doing stupid things: how no one could have possibly imagined a user doing X, and so (best case) X crashes the system or (worst case) X propagates and goes unnoticed for years until the error is subtle, ingrained and permanent. Depending on the error, this could be bad. Big Data is a perfect environment for this latter result.

John C. Wright also gets in on the AI kerfuffle, referencing the Briggs post and adding his own inimitable comments.

Finally, Dust, a YouTube channel featuring science fiction short films, recently had an “AI Week” where the shorts were all based on AI themes. One film took a machine-learning tool, fed it a bunch of sci-fi classics and not-so-classics, and had it write a script, following the procedure used by short film competitions. And then they shot the film. The results are always painful, but occasionally painfully funny. The actors should get Oscar nominations in the new Lucas Memorial Best Straight Faces When Saying Really Stupid Dialogue category:


Wet Enough for You? Philip Marlowe Edition

From the L.A. Times: Why L.A. is having such a wet winter after years of drought conditions. (Warning: they’ll let you look at their site for a while, then cut you off like a barkeep when closing time approaches.) Haven’t looked at the article yet, but I’ll fall off my chair if the answer doesn’t contain global warming/climate change.

But I have some ideas of my own. Historical data on seasonal rainfall totals for Los Angeles over the last 140+ years is readily available on the web. I took that data, and did a little light analysis.

Average seasonal rainfall in L.A. is 14.70″. 60% of the time, rainfall is below average; 40% above. Percentage of seasons with:

  • less than 75% of average rain: 32.62%
  • between 75% and 125%: 39.01%
  • over 125%: 28.37%

“Normal” rainfall covers a pretty wide range, one would reasonably suppose. Getting a lot or a little seems somewhat more likely than getting somewhere around average. This fits with my experience growing up in L.A. (18 year sample size, use with caution.)
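For anyone who wants to check my arithmetic, the “light analysis” amounts to a few one-liners. Here’s a minimal Python sketch, assuming you’ve loaded the seasonal totals into a list – the numbers below are placeholders, not the full 141-season record, so the printed figures will only match mine with the real data:

```python
# Placeholder data: fill 'totals' with the ~141 real seasonal totals (inches).
totals = [4.79, 19.00, 9.65, 8.52, 6.08]  # ...and so on, back to the 1870s

avg = sum(totals) / len(totals)
below = sum(1 for t in totals if t < avg) / len(totals)
low = sum(1 for t in totals if t < 0.75 * avg) / len(totals)
mid = sum(1 for t in totals if 0.75 * avg <= t <= 1.25 * avg) / len(totals)
high = sum(1 for t in totals if t > 1.25 * avg) / len(totals)

print(f"average: {avg:.2f} in.")
print(f"below average: {below:.0%}; above: {1 - below:.0%}")
print(f"under 75% of avg: {low:.2%}; 75%-125%: {mid:.2%}; over 125%: {high:.2%}")
```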

The last 20 years look like:

Season (July 1-June 30) | Total Rainfall, Inches | Variance from Avg
2017-2018 |  4.79 |  -9.91
2016-2017 | 19.00 |  +4.30
2015-2016 |  9.65 |  -5.05
2014-2015 |  8.52 |  -6.18
2013-2014 |  6.08 |  -8.62
2012-2013 |  5.85 |  -8.85
2011-2012 |  8.69 |  -6.01
2010-2011 | 20.20 |  +5.50
2009-2010 | 16.36 |  +1.66
2008-2009 |  9.08 |  -5.62
2007-2008 | 13.53 |  -1.17
2006-2007 |  3.21 | -11.49
2005-2006 | 13.19 |  -1.51
2004-2005 | 37.25 | +22.55
2003-2004 |  9.24 |  -5.46
2002-2003 | 16.49 |  +1.79
2001-2002 |  4.42 | -10.28
2000-2001 | 17.94 |  +3.24
1999-2000 | 11.57 |  -3.13
1998-1999 |  9.09 |  -5.61

14 years out of 20 (70%) are under average; 6 above. Those 5 under-average years in a row stand out, as does the 9 out of 11 years under from 2005-2006 to 2015-2016. (That 22.55 inches above average in 2004-2005 also stands out – very wet year by L.A. standards.)

Wow, that does look bad. So does this stretch, with 7 out of 8 under:

1924-1925 |  7.38 | -7.32
1923-1924 |  6.67 | -8.03
1922-1923 |  9.59 | -5.11
1921-1922 | 19.66 | +4.96
1920-1921 | 13.71 | -0.99
1919-1920 | 12.52 | -2.18
1918-1919 |  8.58 | -6.12
1917-1918 | 13.86 | -0.84

And this one, with 10 out of 11:

1954-1955 | 11.94 |  -2.76
1953-1954 | 11.99 |  -2.71
1952-1953 |  9.46 |  -5.24
1951-1952 | 26.21 | +11.51
1950-1951 |  8.21 |  -6.49
1949-1950 | 10.60 |  -4.10
1948-1949 |  7.99 |  -6.71
1947-1948 |  7.22 |  -7.48
1946-1947 | 12.61 |  -2.09
1945-1946 | 12.13 |  -2.57
1944-1945 | 11.58 |  -3.12

Or this, with 6 out of 7:

1964-1965 | 13.69 | -1.01
1963-1964 |  7.93 | -6.77
1962-1963 |  8.38 | -6.32
1961-1962 | 18.79 | +4.09
1960-1961 |  4.85 | -9.85
1959-1960 |  8.18 | -6.52
1958-1959 |  5.58 | -9.12

This last cherry-picked selection is also like the most recent years in that annual rainfall is not just under, but way under. This last sample shows rainfall more than 6″ under average in 5 of its 6 below-average years. In the recent sample, 5 of the last 7 years prior to this year were more than 6″ under, and one was over 5″ under.

How often does L.A. get rainfall 6″ or more under average? About 22% of the time. So, hardly unusual, and, given a big enough sample (evidently not very big), you would expect to find the sorts of patterns we see here even if every year’s rainfall were a completely independent event from the preceding year or years – which it would be foolish to assume. It would make at least as much sense to think there are big, multi-year, multi-decade, multi-century and so on cycles – cycles that would take much larger samples of seasonal rainfall to detect. And those cycles could very well interact – cycles within cycles.
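That 22% figure, and the counting of dry stretches, are the same sort of one-liners – again a sketch against the placeholder list from above, not the real record:

```python
# Continues the earlier sketch: same 'totals' list and 'avg'.
# Share of seasons at least 6 inches under the long-run average:
deep_under = sum(1 for t in totals if t <= avg - 6.0) / len(totals)
print(f"seasons 6 in. or more under average: {deep_under:.0%}")

# Longest stretch of consecutive below-average seasons
# (assumes 'totals' is in chronological order).
longest = run = 0
for t in totals:
    run = run + 1 if t < avg else 0
    longest = max(longest, run)
print(f"longest run of dry seasons: {longest}")
```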

Problem is, I’ve got 141 years of data, so I can’t say. I suspect nobody can. Given the poorly understood cycles in the oceans and sun, and the effect of the moon on the oceans and atmosphere, all of which it would be reasonable to assume affect weather and rainfall, we’re far from discovering the causes of the little patterns that cherry-picking the data might present to us. They only tell us that rainfall seems to fall into patterns, where one dry year is often followed by one or two or even four or five more dry years. And sometimes not.

L.A. also gets stretches such as this:

1943-1944 | 19.21 |  +4.51
1942-1943 | 19.17 |  +4.47
1941-1942 | 11.18 |  -3.52
1940-1941 | 32.76 | +18.06
1939-1940 | 18.96 |  +4.26
1938-1939 | 13.06 |  -1.64
1937-1938 | 23.43 |  +8.73
1936-1937 | 22.41 |  +7.71
1935-1936 | 12.07 |  -2.63
1934-1935 | 21.66 |  +6.96

Not only are 7 out of 10 years wetter than average, the 3 years under average are only a little short. This would help explain why it is so often raining in Raymond Chandler stories set in L.A. – this sample of years overlaps most of his masterpieces.

It could be raining outside – hard to tell, and I don’t remember. Just work with me here, OK?

The L.A. Times sees something in this data-based Rorschach test; I see nothing much. Let’s see what the article says:

Nothing. The headline writer, editor and writer evidently don’t talk to each other, as the article as published makes no attempt to answer or even address the question implied in the headline. It’s just a glorified weather report cobbled together from interviews from over the last several months. Conclusion: things seem OK, water system wise, for now, but keep some panic on slow simmer, just in case. Something like that.

Oh, well. You win some, you lose some. That *thunk* you hear is me falling out of my chair.

How’s the Weather? 2018/2019 Update

In a recent post here you could almost hear the disappointment in the climate scientists’ words as they recounted the terrible truth: that, despite what the models were saying would happen, snowpack in the mountains of the western U.S. had not declined at all over the last 35 years. This got me thinking about the weather, as weather over time equals climate. So I looked into the history of the Sierra snowpack. Interesting stuff.

From a September 2015 article from the LA Times

This chart accompanies a September 14th, 2015 article in the LA Times: Sierra Nevada snowpack is much worse than thought: a 500-year low.

When California Gov. Jerry Brown stood in a snowless Sierra Nevada meadow on April 1 and ordered unprecedented drought restrictions, it was the first time in 75 years that the area had lacked any sign of spring snow.

Now researchers say this year’s record-low snowpack may be far more historic — and ominous — than previously realized.

A couple of things stand out from this chart, and I would like to commend them. First, it is a very pleasant surprise to see the data sources acknowledged. From 1930 on, people took direct measurements of the snowpack. The manual method is two-fold: stick a long, hollow, calibrated pole into the snow until it hits dirt. They can simply read the numbers off the side of the pole to see how deep the snow is. The snow also tends to stick inside the pole, which they can then weigh to see how much water is in the snow. They take these measurements in the same places on the same dates over the years, to get as close to an apples-to-apples comparison as they can. Very elegant and scientifilicious.
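As an aside, the arithmetic behind the tube measurement is simple enough to sketch: the water content is just the core’s weight divided by the density of water times the tube’s cross-section. A toy version in Python – the tube diameter here is a made-up figure for illustration; the real samplers are sized so the weight converts to water depth in convenient units:

```python
import math

# Toy snow-tube calculation. TUBE_DIAMETER_CM is a made-up figure,
# used only to illustrate the weight-to-water-depth arithmetic.
TUBE_DIAMETER_CM = 4.0
TUBE_AREA_CM2 = math.pi * (TUBE_DIAMETER_CM / 2) ** 2
WATER_DENSITY_G_CM3 = 1.0

def snow_water_equivalent_cm(core_mass_g: float) -> float:
    """Depth of water (cm) the snow core would melt down to."""
    return core_mass_g / (WATER_DENSITY_G_CM3 * TUBE_AREA_CM2)

# A 300 g core in this tube holds about 23.9 cm of water,
# no matter how deep or how fluffy the snow was.
print(f"{snow_water_equivalent_cm(300):.1f} cm of water")
```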

They also have many automated stations that measure such things in a fancy automatic way. I assume they did it the first way back in 1930, and added the fancy way over time as the tech became available. Either way, we’re looking at actual snow more or less directly.

Today’s results from the automated system. From the California Data Exchange Center.

Prior to 1930, there was no standard way of doing this, and, I’d suppose, prior to the early 1800s at the earliest, nobody really thought much about doing it. Instead, modern researchers looked at tree rings to get a ballpark idea.

I have some confidence in their proxy method simply because it passes the eye test: in that first chart, the patterns and extremes in the proxies look pretty much exactly like the patterns and extremes measured more directly over the past 85 years. But that’s just a gut feel; maybe there’s some unconscious forcing going on, some understatement of uncertainty, or some other factors making the pre-1930 estimates less like the post-1930 measurements. But it’s good solid science to own up to the different nature of the numbers. We’re not doing an apples-to-apples comparison, even if it looks pretty reasonable.

The second thing to commend the Times on: they included this chart, even though it in fact does not support the panic mongering in the headline. It would have been very easy to leave it out, and avoid the admittedly small chance readers might notice that, while the claim that the 2015 snowpack was the lowest in 500 years might conceivably be true, a similarly very low snowpack has been a pretty regular occurrence over that same 500 years. Further, they might notice those very low years have been soon followed by some really high years, without exception.

Ominous, we are told. What did happen? The 2015-2016 snowpack was around average, 2016-2017 was near record deep, and 2017-2018 was also around average. So far, the 2018-2019 season, as the chart from the automatic system shows, is at 128% of the season-to-date average. What the chart doesn’t show: a huge storm is rolling in later this week, forecast to drop 5 to 8 feet of additional snow. This should put us well above the April 1 average, which is around the date of the usual maximum snowpack, with 7 more weeks to go. Even without additional snow, this will be a good year. If we get a few more storms between now and April 1, it could be a very good year.

And I will predict, with high confidence, that, over the next 10 years, we’ll have one or two or maybe even 3 years well below average. Because, lacking a cause to change it, that’s been the pattern for centuries.

Just as the climate researchers mentioned in the previous post were disappointed Nature failed to comply with their models, the panic mongering of the Times 3.5 years ago has also proven inaccurate. In both cases, without even looking it up, we know what kind of answer we will be given: this is an inexplicable aberration! It will get hotter and drier! Eventually! Or it won’t, for reasons, none of which shall entail admitting our models are wrong.

It’s a truism in weather forecasting that simply predicting tomorrow’s weather will be a lot like today’s is a really accurate method. If those researchers from the last post and the Times had simply looked at their own data and predicted future snowpacks would be a lot like past ones, they’d have been pretty accurate, too.

Still waiting for the next mega-storm season, like 1861-1862. I should hope it never happens, as it would wipe out much of California’s water infrastructure and flood out millions of people. But, if it’s going to happen anyway, I’d just as soon get to see it. Or is that too morbid?

Great Flood of 1862. Via Wikipedia.

Feser and the Galileo Trap

Galileo showing the Doge of Venice how to use a telescope.

Edward Feser here tackles the irrationality on daily display via the Covington Catholic affair, and references a more detailed description of skepticism gone crazy:

As I have argued elsewhere, the attraction of political narratives that posit vast unseen conspiracies derives in part from the general tendency in modern intellectual life reflexively to suppose that “nothing is as it seems,” that reality is radically different from or even contrary to what common sense supposes it to be.  This is a misinterpretation and overgeneralization of certain cases in the history of modern science where common sense turned out to be wrong, and when applied to moral and social issues it yields variations on the “hermeneutics of suspicion” associated with thinkers like Nietzsche and Marx.  

Readers of this blog may recognize in Feser’s discussion above what I refer to as the Galileo Trap: the tendency, or perhaps pathology, that rejects all common experience in order to embrace complex, difficult explanations that contradict it. In Galileo’s case, it happens that all common experience tells you the world is stationary. It sure does not look or feel like we are moving at all. That the planet “really” is spinning at 1,000 miles an hour and whipping through space even faster proves, somehow, that all those gullible rubes relying on their lying eyes are wrong! Similar situations arise with relativity and motion in general, where the accepted science does not square with simple understanding based on common experience.

Historically, science sometimes presents explanations that, by accurately accommodating more esoteric observations, make common observations much more complicated to understand. Galileo notably failed to explain how life on the surface of a spinning globe spiraling through space could appear so bucolic. By offering a more elegant explanation of the motion of other planets, he made understanding the apparent and easily observed immobility of this one something requiring a complex account. But Galileo proved to be (more or less) correct; over the course of the next couple centuries, theories were developed and accepted that accounted for the apparent discrepancies between common appearance and reality.

We see an arrow arc through the air, slow, and fall; we see a feather fall more slowly than a rock. Somehow, we think Aristotle was stupid for failing to discover and apply Newton’s laws. While those laws wonderfully explain the extraordinarily-difficult-to-see motion of the planets, they also require the introduction of a number of other factors to explain a falling leaf you can see out the kitchen window.

Thus, because in a few critical areas of hard science – or, as we say here, simply science – useful, elegant and more general explanations sometimes make common experiences harder to understand, it has become common to believe it is a feature of the universe that what’s *really* going on contradicts any simple understanding. Rather than the default position being ‘stick with the simple explanation unless forced by evidence to move off it,’ the general attitude seems to be that the real explanation is always hidden and contradicts appearances. This boils down to the belief that we cannot trust any common, simple, direct explanations. We cannot trust tradition or authority, which tend to formulate and pass on common-sense explanations, even and especially in science!

Such pessimism, as Feser calls it, is bad enough in science. It is the disaster he describes in politics and culture. Simply, it matters if you expect hidden, subtle explanations and reject common experience. You become an easy mark for conspiracy theories.

I’ve commented here on how Hegel classifies the world into enlightened people who agree with him and the ignorant, unwashed masses who don’t. He establishes, in other words, a cool kids’ club. Oh sure, some of the little people need logic and math and other such crutches, but the pure speculative philosophers epitomized by Hegel have transcended such weakness. Marx and Freud make effusive and near-exclusive use of this approach as well. Today’s ‘woke’ population is this same idea mass-produced for general consumption.

Since at least Luther in the West, the rhetorical tool of accusing your opponent of being unenlightened, evil or both in lieu of addressing the argument itself has come to dominate public discourse.

A clue to the real attraction of conspiracy theories, I would suggest, lies in the rhetoric of theorists themselves, which is filled with self-congratulatory descriptions of those who accept such theories as “willing to think,” “educated,” “independent-minded,” and so forth, and with invective against the “uninformed” and “unthinking” “sheeple” who “blindly follow authority.” The world of the conspiracy theorist is Manichean: either you are intelligent, well-informed, and honest, and therefore question all authority and received opinion; or you accept what popular opinion or an authority says and therefore must be stupid, dishonest, and ignorant. There is no third option.

Feser traces the roots:

Crude as this dichotomy is, anyone familiar with the intellectual and cultural history of the last several hundred years might hear in it at least an echo of the rhetoric of the Enlightenment, and of much of the philosophical and political thought that has followed in its wake. The core of the Enlightenment narrative – you might call it the “official story” – is that the Western world languished for centuries in a superstitious and authoritarian darkness, in thrall to a corrupt and power-hungry Church which stifled free inquiry. Then came Science, whose brave practitioners “spoke truth to power,” liberating us from the dead hand of ecclesiastical authority and exposing the falsity of its outmoded dogmas. Ever since, all has been progress, freedom, smiles and good cheer.

If being enlightened, having raised one’s consciousness or being woke meant anything positive, it would mean coming to grips with the appalling stupidity of the “official story”. It’s also amusing that science itself is under attack. It’s a social construct of the hegemony, used to oppress us, you see. Thus the snake eats its tail: this radical skepticism owes its appeal to the rare valid cases where science showed common experiences misleading, and yet now it attacks the science which is its only non-neurotic basis.

Science! Strikes Again: Saving the Theory from the Data

An amusing headline: For 35 years, the Pacific Ocean has largely spared West’s mountain snow from effects of global warming. A “study” by “scientists” is used to explain what, in a saner world, might simply be stated as follows: “Western mountain snowpack shows no evidence of global warming over the last 35 years.”

In the article, we learn that models predict that snowpack in Washington’s portion of the Cascade Range, for example, should have fallen by 2% to 44% over the last 35 years, but in fact has shown no significant decline. Now a crass, narrow-minded person, clearly not in the cool kids club, might leap to the conclusion that the data here contradicts the model, therefore – you’re sitting down, right? – the model is wrong. The whole purpose of, and entire source of validation for, a model is predictions. You build a model hoping to capture some aspect of the real world. You use this model to make concrete, measurable predictions that can be checked against the real world, to see if your model is useful. If the facts don’t match the predictions, you throw out the model and start over. This is called science.
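Spelled out as a sketch – the numbers are the article’s predicted range and the observed no-change result, plugged in as stand-ins:

```python
# The falsification logic described above, as a trivial check.
predicted_decline = (0.02, 0.44)  # model-predicted snowpack loss: 2% to 44%
observed_decline = 0.0            # measured: no significant decline

low, high = predicted_decline
if low <= observed_decline <= high:
    print("Observation falls inside the predicted range; the model survives.")
else:
    print("Observation falls outside the predicted range; the model is wrong.")
```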

Here, instead, the study invokes a cause not in the model. We know this cause was not in the model since, if it were in the model, the model would have presumably produced useful predictions.

“There were a lot of discussions within the department of the surprising stability of the western U.S. snowpack, because it went against the predictions,” said co-author Cristian Proistosescu, a postdoctoral researcher at the UW’s Joint Institute for the Study of the Atmosphere and Ocean.

The discussion did not, evidently, include the obvious conclusion required by basic science: our model is wrong. Nope, this inescapable conclusion is masked behind an appeal to additional causes. Natural variations in the Pacific Ocean kept the snowpack stable, it is asserted.

Stop right here: if your model needs to appeal to factors outside itself, factors not built into the model, that means your model is wrong. Call it incomplete if you want, but the short English word for that state where the model does not provide useful, validated predictions is ‘wrong’. Throw it out. Build a new model that includes the newly discovered (!) causes, if you want, make some more predictions, and see what happens. But clinging to a model that’s been proven wrong by real-world data is pathetic, and patently anti-science.

It’s not just the Western U.S. mountains that fail to validate those models. It’s not like the hundreds of different climate models floating around have some sort of sterling track record otherwise, so that we’d lose predictive power if we just tossed them all. No, they all predict that the earth would be much warmer now than it actually is. The Arctic would be ice free by 2000, then 2013, then 2016, then 2050. (Pro-tip: always make your predictions take place out beyond your funding cycle, to mitigate the slim chance people will remember you made them by the time the next grant proposal needs filing.)

A slightly – very slightly – more subtle point: we all know there’s such a thing as ‘natural variations’ in all sorts of areas. In practice, especially when building models, natural variations are nothing more than a collective name for causes we don’t understand well enough to build into the model. Even admitting the existence of natural variations that affect the thing being modeled that are not included in the model is to admit the model is at best incomplete.

One might leave out potential causes on the assumption that, while they might theoretically affect predictions, in practice they are not material. When we say acceleration under gravity at the earth’s surface is 32 ft/s², we leave out air resistance (and air pressure variations, and humidity, and no doubt a bunch of other things) because that formula has proven to be useful quite a bit of the time. Only in very fussy situations do we need something else, as long as we’re testing near the earth’s surface.
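To see how immaterial, here’s a quick numerical check – a sketch, with a made-up drag constant roughly appropriate for a dense object like a rock:

```python
# Compare a short drop with and without air resistance.
# G is the simplified model; C_OVER_M is a made-up quadratic drag
# constant per unit mass (1/ft), chosen to stand in for a dense rock.
G = 32.0          # ft/s^2
C_OVER_M = 0.001  # 1/ft

def drop_distance(t_end: float, drag: bool, dt: float = 1e-4) -> float:
    v = d = t = 0.0
    while t < t_end:
        a = G - (C_OVER_M * v * v if drag else 0.0)
        v += a * dt
        d += v * dt
        t += dt
    return d

# Over a 2-second drop, drag changes the answer by only a few percent:
print(f"no drag:   {drop_distance(2.0, drag=False):.1f} ft")
print(f"with drag: {drop_distance(2.0, drag=True):.1f} ft")
```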

We know we can ignore some complexities only because we used our model to predict outcomes, measured those outcomes and found them good enough. To admit there are natural variations that render a model’s predictions useless is to admit that the phenomenon being modeled is beyond our skill as modelers. No amount of statistical sleight of hand can make this go away.

Another issue is the baseline question: this study considers 35 years of data. With few exceptions, the mountains of the Western U.S. have been there, experiencing snowpack and natural variations, for at least several hundred thousand times that long. This data covers a few ten-thousandths of one percent of the potential dataset, at most.

Well? Is that enough? Can we justify any conclusions drawn from such a tiny sample? Can we say with any confidence what ‘average’ or ‘normal’ conditions are for snowpack in these mountains? The natural variations we know about include ice ages, glaciers and glacial lakes. Precipitation levels almost certainly vary wildly over thousands, let alone millions, of years. On what basis should we conclude that the snowpack should stay the same, grow, shrink or do anything in particular over a given 35 year period?

Enough. The monotonous similarity of these sorts of “studies” in their steadfast resistance to apply even a little basic science or common sense to their analyses tires me.

Education History: Biases & Method

Chesterton mentions somewhere that a heresy most often takes something true, but takes it way too far. Individual experiences and the biases acquired therefrom do color every person’s take on the world – this much is indisputably true. Take this simple and universally recognized truth far too far, and each person is an island unto himself, constitutionally incapable of understanding anyone else. Efforts to limit the fracturing and fragmentation so that some collectives – race, sex, class, etc. – can be used to destroy others – nation, village, parish, and family – are doomed by the gravity of the implicit logic. There’s no stopping at any group with more than one member, since each member is unique. Heck, we even sometimes read how the human body is a collective of sorts. I wonder if my – oops, sorry for the possessive there – mitochondria should have the franchise? They’d probably vote to put me on a diet. 

But we are not this hopeless. We know that communication is possible, and takes place all the time. Now, for instance. This fact of constant mundane communication contradicts the atomizing assumptions which before our eyes drive their victims through identity politics before reaching the inevitable end, with whimpers, bangs, howls, and tears, in solipsistic narcissism. Rather than finding true humility in seeing our tiny place in a great and beautiful creation, we can keep looking inward until we are all we see. Lacking any context, we hear that subtle internal whisper that we are not all that, are not in ourselves enough. That we’re just not all that interesting. 

Wow, that got a little out of hand. That little patch of prose was intended as a much briefer introduction to an attempt to list my own biases and methods. As I begin to resubmerge myself in the education history readings, I’m trying to be honest about how I lean. This has been occasioned by contact with real, certified education historians. I criticise these folks for holding positions or basing arguments on nothing real, in my opinion, besides what they want to believe. I, of course, believe I’ve come to the positions I hold by sheer exercise of pure intellection applied to cold, hard facts.

I crack me up. 

So, I hope this exercise will help me clarify where I’m coming from, and make headway, however slight, toward a more objective view of the materials and the real world.  

A. Current K-12 education is worthless, at best. I reach this conclusion based on my own experience as well as what the founding thinkers said about the goals of primary education. For example: 

  • By 5th grade, I personally did as little schoolwork as possible, and continued that practice through my first 2 1/2 years of college. While I regret not taking full advantage of college, I don’t think I missed anything of value K-12. Far, far better for me would have been the friendship of truly educated people plus free time. Totally lacking the first and having school cut into the second, things for me personally could have been much better. But school was not going to make it so. 
  • With this in mind, we investigated alternatives for our own 5 kids. None of them ever attended a K-12 graded classroom school. None of them ever took a class they didn’t want to take until college. They didn’t do tests or homework unless they wanted to, such as for a class they signed up for at the local community college. We also surrounded them with books and intelligent conversation, and emphasized from a very early age that they were responsible for their own lives. So far: one summa cum laude college grad in a double major, with 2 more to graduate from Great Books schools this spring. So skipping K-12 didn’t hurt them, either. 
  • The advocates for state-run compulsory schooling, from at least Luther, through Fichte and Mann, not to mention influential communists like Dewey and Freire, show in their own writings little if any interest in teaching kids much of anything any responsible parent would want taught to them. The first group wanted to produce good, obedient Protestants who would do as they were told – it’s really that baldly obvious from their own words – while the Communists wanted to produce revolutionaries, to hell with academics (Dewey dissembles; Freire, required reading in all our best ed schools, is perfectly clear on this point).
  • Finally, people have found any number of approaches to educating their kids, from tutors and apprenticeships and one-room schools, to a dozen others. To assert that darkness and chaos will descend upon the land if we were to end all compulsory education is fear mongering, all the more powerful as the products of graded schools accept a self-image defined by their relationship to school: rebel, “good student,” “bad student,” drop-out, success, failure. It would be psychically painful to recognize that all such classifications are bogus. You are not defined by how you were viewed in a graded classroom.  

B. All claims that the graded classroom model is ‘scientific’ or in any way has been shown to be superior to any other way are bald-faced lies unsupported by any science remotely worthy of the name. 

If you think not, I’d be happy to review any scientific evidence to the contrary. Shouldn’t take long, since there is none. You think the likes of Mann did studies, comparing different ways of educating kids, and settled on the compulsory age-graded classroom model because it produced the best results? 

He did not; neither did anyone else. (Hint: who would pay for such studies? Who would be the peers that reviewed them?) 

C. When it comes to compulsory state education, it is always assumed to be the cause of any good that follows. In almost every case I can think of, schooling as an effect is at least as likely as schooling as a cause. For example, general prosperity in America increased after compulsory schooling was established. Well? We are to assume schooling *caused* prosperity. But it’s at least as likely prosperity, which frees up children from having to work long hours to support the family, resulted in more schooling being a real possibility. 

The most inane and frankly idiotic idea along these lines is that education creates jobs – you know, such that getting a college education somehow creates the job that all but guarantees a good economic life, while dropping out of high school all but condemns one to poverty. How about a booming economy – which, by the way, boomed pretty darn well under Truman, when he and all those workers getting jobs and raises had at most a high school education – causing the affluence needed to send ridiculous numbers of kids to college? Where they not only don’t improve their economic chances (except with degrees in a few technical/professional fields) but acquire debt and, with a growing number of degrees, render themselves unemployable anywhere outside of academia? 

D. I’m biased toward source materials. I’m not all that interested in reading what modern products of modern schooling have to say about modern schooling. That’s like asking the Chevy dealer about what’s good about Chevys and bad about Fords. Instead, I like old stuff, if for no other reason than that the authors are closer to the events and less likely to share modern prejudices. Looked at another way: I hate chronological snobbery, the assumption at the core of Progressivism that people today are 200 years smarter than people 200 years ago, such that modern opinions are to be accepted simply on the basis that they are modern. Reading the Great Books and observing my contemporaries has violently disabused me of this mistake. 

E. I’m Catholic, and not just in the cultural sense. I like going to Mass and embrace with near-desperation the Magisterium (you get a load of what those folks who appoint themselves a Magisterium of one claim to believe? Or those who grant it to some Jesuit who won’t threaten their lifestyle? A bad pope is much to be preferred over no pope at all). I’m painfully aware of the Church’s shortcomings, but the Church’s Magisterial support doesn’t count as a disqualifying mark. Neither do I see the need to pretend, as is all the rage in certain circles, that the Church has not been hated in America from Day 1 – and, outside the followers of Fr. Turtleneck, we are still hated to this day. 

Enough for now. 

If I told you this was a spanking-new school building, you’d believe it? It’s an Austrian prison.
This, on the other hand, is a school. 

Science & Religion: The Difference

The horse that won’t stay dead no matter how hard we beat it. 

Here are some examples: 

I think the preponderance of evidence strongly supports the idea that species arise over time as a result of differentiated survival rates among members of a population with different characteristics.

This is a scientific judgement. 

I believe in evolution. 

This is an act of faith. 

Based on evidence from many sources, I think it very likely that the climate is changing, and has been changing for the hundreds of millions of years over which any evidence can be found. 

Again, a scientific judgement. 

I believe in climate change. 

Another act of faith. 

These examples are of a point so basic, so simple and dazzlingly obvious, that it would seem no one who has reached intellectual adolescence should need to have it made to them more than once. One reaches a scientific conclusion based on evidence and reason (and, being based on evidence and reason, such conclusions are always conditional – but that’s up one small level from what we’re talking about now). But, alas! The evidence strongly supports one of two factors, or a combination of the two, making this basic point obscure to many: either few reach intellectual adolescence, or many do not care to see this distinction.

Great Scott! It’s Science!

I love adolescence. Having had 4 of our kids pass from childhood to adulthood, and having one 14 year old now, I can say that one of my greatest joys as a dad has been witnessing the intellects of my own children awaken. (The most obvious step is when they start really getting jokes.) And this distinction, this idea that not every mental experience is a feeling, but that there are – yes, I’m going to say it – *higher* functions of the intellect, is a step into a larger world. A better, more interesting, world.

A step surprisingly few people take. As any perusal of the interwebs or conversations with just about anyone will quickly reveal, there are a lot of people who use faith language about what they conceive of as science. They believe in their bones that such acts of faith render them morally and *intellectually* superior to those who dispute their dogmas or even who refuse to mouth the shibboleths. (1)