Watch This (While You Can)

Dr. Briggs discusses models:

Key points:

  1. Models tell you only what you tell them to tell you.
  2. Solutions, reactions, decisions, and so on, related to model output, are also models.
  3. Ferguson, the professional panic-monger who started all this with his insane, unvetted model, has a 20-year history of promoting panic via insane, unvetted models that fail to match reality.
  4. If you do not run your model against independent data collected from objective reality, it holds no weight. It means nothing.
  5. Common knowledge among modelers: you can always build a model for any set of test data that matches the test data perfectly – and this means nothing, except that you need to run it up against reality.

And, on the Kung Flu specifically:

  1. When the models that predicted coronadoom ran up against reality, they all proved disastrously, ridiculously wrong.
  2. A case used to mean ‘someone with symptoms who needs medical care’, not ‘someone with a positive test result, but no symptoms or need for medical care.’ Counting cases is misleading at best.

And here’s my favorite graph, from the CDC data:

Given the time frame presented, it looks like the numbers are annualized – they’d look a lot ‘hairier’ if the reporting were done weekly.

Deaths peaked during the Great Depression – possibly because financial panics are stressful, and stress kills – then slowly declined until about 2010, when they began to slowly rise again. Look at that nasty uptick in deaths at the end there! Oh no! Based on where the uptick began, it looks like the Coronadoom started taking out people in 2014 – the damn thing can time travel? Is there no evil this virus can’t do? Compare this to the UN data and projections since 1950, which specifically do not take the Coronadoom into account:

A few extra forecast numbers and rate-of-change percentages, but it covers the 1950–2019 actuals like the last graph.

You will note that the UN projected an increasing death rate starting around 2013, 7 years before the COVID panic was a gleam in Ferguson’s eye. Why? As the US population ages, a larger percentage of people will die each year. The US population has historically skewed younger: immigrants tend to be younger, and when the birth rate is higher than the replacement rate, more babies enter the population than oldsters exit it. But now the US population is getting older, as fewer babies are born. Therefore, the death rate will rise. Not rocket science.
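
For the number-inclined, here’s a minimal sketch of that aging-population point, using invented round numbers (not census data): hold the age-specific death rates fixed, shift the age mix older, and the overall death rate climbs all by itself.

```python
# Toy illustration with made-up round numbers: identical age-specific
# mortality, two different age mixes, two different crude death rates.
age_specific_mortality = {"0-49": 0.001, "50-69": 0.010, "70+": 0.050}

younger_mix = {"0-49": 0.70, "50-69": 0.20, "70+": 0.10}  # population shares
older_mix   = {"0-49": 0.55, "50-69": 0.25, "70+": 0.20}

def crude_death_rate(shares):
    # Weighted average of the age-band death rates.
    return sum(shares[band] * age_specific_mortality[band] for band in shares)

print(f"younger-skewing population: {crude_death_rate(younger_mix):.3%}")  # ~0.770%
print(f"older-skewing population:   {crude_death_rate(older_mix):.3%}")    # ~1.305%
```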

Now let’s break out the two relevant sections of the two graphs – the annual death rates from 1950 to now. In the CDC numbers, we can see the dramatic uptick in deaths caused by the most horriblest viral outbreak in History that we’re now in the middle of, versus the UN numbers, which exclude the effects of the pandemic. The top graph is the actuals, including the Kung Flu deaths; the lower is the projections without the Kung Flu, as the banner says:

I flattened this graph out a little bit to put the scale of the y-axis more in line with the previous graph.

Wow, just looking at the graphs! The supposed massive uptick in death caused by the Coronadoom IS INVISIBLE. Put it the other way around: if 400K more people had died in 2020 than was expected, the 2020 death percentage would go from 0.888 to about 1.01. (0.888% × 330M ≈ 2.93M; add 400K dead for 3.33M total; divide by 330M ≈ 1.01%.) * It would look like this:

Just taking the UN graph, putting a point at about 10.1 (per 1,000 – the 1.01% above) at the end of 2020, and drawing a line to it from about the end of 2019.
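
For anyone who wants to check me, here’s that back-of-the-envelope as a few lines of Python. The inputs are this post’s own round numbers (330M people, a 0.888% baseline death rate, a hypothetical 400K extra dead), not official figures:

```python
population = 330_000_000
baseline_rate = 0.00888                       # 0.888% die in a normal year
baseline_deaths = population * baseline_rate  # ~2.93M deaths

hypothetical_excess = 400_000                 # the 'what if' extra deaths in 2020
new_rate = (baseline_deaths + hypothetical_excess) / population

print(f"baseline deaths:      {baseline_deaths / 1e6:.2f}M")  # 2.93M
print(f"rate with 400K extra: {new_rate:.3%}")                # ~1.009%, i.e. ~10.1 per 1,000
```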

So the CDC graph (which I didn’t use because the y-axis scale is inconvenient for this purpose, but the graphs are the same) should show a dramatic increase over the UN projections if, in fact, the virus killed a bunch of people. But it doesn’t look like this at all! It looks like NOBODY MUCH EXTRA DIED IN 2020.

One of my complaints with the numbers and reports coming out of the CDC is how needlessly complex (and difficult to find) they are. ‘Excess deaths’ are calculated on a weekly basis using all kinds of assumptions and math that make the simple questions – how many more people have died this year than last? Or, better, how many more people died in 2020 than could reasonably be expected? – needlessly difficult to answer. Why is a simple total of deaths in 2019 versus deaths in 2020 not posted on the CDC’s home page? Seriously, if we’re supposedly in the middle of a pandemic, why not post totals so that people can get a feel for the magnitude of the problem?

Unless, of course, there isn’t a problem. Or rather, if people knew, there would be a big problem for certain people.

*Note – it’s estimates all the way down on population #, so it is simply pointless to try to get much more accurate than this.

Risk

Digressing slightly from the last two posts on claims and evidence:

A few basic points, which failure to grasp condemns one to live in mindless fear (or mindless fearlessness, but that seems less a problem these days):

  1. Everybody has a 100% chance of dying within the next 100 years or so.
  2. Most of us, in the West, get old and weak and die starting around age 50. No, really. By age 79, half of us are dead. By 90, almost all of us are dead. By 120, all of us are.
  3. There are risks you can reduce, and risks you can’t: never swim in the ocean, and your chances of getting eaten by a shark go from microscopic to even more microscopic (but never go away entirely). Get old enough – and you die no matter what precautions you take.
  4. It’s boring, hard-to-avoid things that kill almost all of us: heart attack, stroke, renal failure, cancer, pneumonia. Try as you might, live as healthily as you can, if you live long enough, it’s one of those things, or something equally mundane, that will likely kill you.
  5. Then there are the rarer but still common stuff, things that kill young and old alike. Chief among these are suicide, murder, and accidents including semi-accidents like drug overdoses. A lot of people die in cars; a lot die getting routine medical care. People choke to death. People die amusing themselves – hang gliding, horseback riding, canoeing are all comparatively dangerous, as are playing football and riding a bicycle.
  6. Each of us is at some unavoidable risk of dying from anything that might kill us that is not a logical impossibility: not only might you get hit by lightning (about a 0.000078%/yr chance) or eaten by a shark (about 4 people per year worldwide*), but you could be killed by a meteorite (under a 1 in 10B chance) or crushed by having an elephant dropped on you (not going to look it up, you get the point) – see the comparison just below.
  7. The risk space aliens will kill you is unknown, but might be real, because it is not a logical impossibility. Same goes for werewolves, vampires, and Godzilla – they might exist, therefore, they might kill you.
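
Here’s a quick sketch putting the knowable micro-risks from that list on one per-year scale. The lightning and meteorite figures are the ones cited above; for sharks, I’m just dividing 4 deaths across a rough (assumed) world population of 8 billion:

```python
# Per-year odds of the list's micro-risks, sorted from 'likeliest' down.
risks_per_year = {
    "lightning (0.000078%/yr, as cited)": 0.000078 / 100,
    "shark (4 deaths / ~8B people, assumed)": 4 / 8e9,
    "meteorite (under 1 in 10B, as cited)": 1 / 10e9,
}

for name, p in sorted(risks_per_year.items(), key=lambda kv: -kv[1]):
    print(f"{name:40} ~ 1 in {1 / p:,.0f}")
```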

Nobody I know goes around wearing a metal helmet to improve his chances of surviving getting hit by a meteorite; nobody wears shark repellant in Iowa because they might – might, tiny but non-zero chance – get attacked by a shark out in a cornfield. Somewhere in between meteorite death, which I think we can all agree is pointless to worry about, and, I don’t know, developing a meth habit, which I also think we can agree ought to be avoided, there’s some point where prudence would suggest taking reasonable, *proportionate* steps to avoid or minimize risks. Taking a shower is risky; a rubber bath mat reduces that risk to what most sane people consider acceptable. Riding a horse is risky, but it is also fun (so I am told), so people who like to ride horses try to be careful but do it anyway.

Something will kill each of us eventually. Yet we take what we think are prudent risks all the time, because we feel we need to or simply because we want to. We drive to the airport – way more dangerous than flying in a plane – because we want to go somewhere. We deem what we want as worth the risk. As Bilbo says: It’s a dangerous business, going out your door. It’s always a question of where we each draw that line – and where we place that line is almost never rational. Let’s take a stab at making it more rational.

In the modern West, your chances of dying in any given year before you’re 50 are pretty small, but start rising rapidly after that. Put another way: it’s a shock, generally, when a 25-year-old dies; it’s often a relief when an 85-year-old dies. Human bodies are amazingly tough, so tough that they can put up with tons of abuse – e.g., way too much fat, alcohol, drugs – for about 50 years. (See a connection, there? No?)

It is customary – ask your life insurance agent – to consider, first of all, age when trying to understand our risk of death. Then, in the context of age, we consider other ‘risk factors’ related to overall health and behaviors. This is not a perfect way to consider risk, but it’s a start, and far better than irrational fear.

After a rough first year or so, where kids born with health problems die off at a much faster rate than healthier kids, the base level risk of dying in America is very, very low for the next 40 years. When people this young do die, it tends to be freakish – murder, suicide, drug overdoses, accidents – or just stupid bad luck – cancer, say. The Usual Suspects listed above start taking over about age 50. By age 59 for men and 66 for women, your chance of death reaches about 1% per year – and it keeps going up from there, to about a 10% chance of dropping dead that year in your mid-80s, to 25% in your mid-90s – by which time, few if any of your friends in your age group will still be alive.

So, yes, now, let’s look at the damn virus: according to CDC data, there are no age groups under 85+ where the virus adds as much as 10% to your risk of dying, even based on their pessimistic, beyond dubious numbers.** Or, put another way, if you had a 1% chance of dying this year, the virus, at most, increases your risk to something less than a 1.1% chance of dying this year. So, think of the children: a healthy kid has something like a 0.001% or less chance of dying in a given year. COVID adds less than 0.0001% to that risk. It’s like crossing a street, or walking out the front door – sure, your risk increases, but by so little it would be insane NOT to cross the street (looking both ways) or walk out the door based solely on increased risk. If I’m very old, again, the additional risk is real (maybe) but in any event tiny – what is the difference between a 10% risk of dying and an 11% risk of dying this year if I’m 86? Should I stay locked in my house, away from my family and friends, because there’s a slightly lower chance I’ll die this year if I do? Since loneliness, fear, and depression ALSO add risk, how could it be worth it? Does it cancel out, such that the additional risk (if any) from COVID is less than or equal to the additional risk from loneliness, depression, and despair caused by being locked up like a criminal?
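
To make that ‘added risk’ arithmetic explicit, a minimal sketch using the post’s own round numbers rather than any official life table:

```python
def with_added_risk(baseline, addition):
    """Annual death risk after tacking on an extra, roughly independent hazard."""
    return baseline + addition

# A 1%-a-year person whose risk the virus raises by at most 10% of baseline:
print(f"{with_added_risk(0.01, 0.001):.3%}")        # 1.100%

# A healthy kid at ~0.001%/yr, with COVID adding under a tenth of that:
print(f"{with_added_risk(0.00001, 0.000001):.5%}")  # 0.00110%

# An 86-year-old at ~10%/yr, with the same proportional bump:
print(f"{with_added_risk(0.10, 0.01):.1%}")         # 11.0%
```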

Why is this obvious aspect of the lockups not discussed?

Steps taken to mitigate risk are insane if the base level of the risk those steps are trying to mitigate is not taken into account, and the cost – there is ALWAYS a cost! – of the steps themselves is ignored. It has to be hammered on: the base level risk of COVID to almost everybody is microscopic; the costs of mitigation are beyond measure.

Worrying about everything that might kill you, or worrying about risks you can do little about, is insane. Panic ceases once one looks at the evidence and attempts to follow the flawed logic of the panic mongers. (ex: if the vaccines work, it doesn’t matter to anyone but me if I get it; if it matters to other people that I get it, the vaccines don’t work. Verbal gymnastics don’t change this.)

* I have it on good authority – my little brothers, who are surfers – that the death by shark attack numbers are significantly understated. They point out that, every year, especially in Australia, a certain number of people out surfing just don’t come back. What happened to them? If a shark ate them and nobody saw it, they don’t show up in the shark death count. Plausible hearsay.

** What I mean, taking this from William Briggs: In 2020, while the virus ‘raged’, in no age group, excepting the very elderly, did as much as 10% of the deaths in the age group ‘involve’ COVID. So, if you were going to die in 2020, there was under a 10% chance it was going to ‘involve’ COVID. Note, as described above, how small – invisible at this scale, way, way under 1% – the risk of death from any cause whatsoever is for younger people. After kids’ first year, the death percentage isn’t big enough to even register until the 15-24 age band; the risk from COVID is invisible until the 45-54 age band, and hardly visible until the 65-74 band – at which point, people are starting to die of all the usual causes, which is the point of this post.

Aside: accepting these numbers at face value and eyeballing it, the overall death rate for people 85 and over is about 17%. Prior to COVID, the CDC’s reported historic death rate for this age range was about 14% (I saw both 13.8% and 14% reported.) Again eyeballing it, looks like the COVID death rate for this group is 3% – so it adds up. Except there pretty much must be additional deaths in this age range due to nursing home neglect, fear, deferred or skipped medical treatment, and delayed diagnoses of treatable problems (and perhaps suicide, murder, euthanasia, and drug overdoses). Where are those deaths hiding? We’ll probably never know.
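
Eyeballing that in code, with the approximate rates cited above (taken from this post, not recomputed from CDC tables):

```python
historic_rate = 0.14   # pre-COVID annual death rate, 85 and over (~13.8-14%)
covid_rate    = 0.03   # eyeballed COVID-involved death rate for the same group
observed_rate = 0.17   # eyeballed overall 2020 death rate for the same group

print(f"historic + COVID: {historic_rate + covid_rate:.0%}")                  # 17%
print(f"gap vs observed:  {observed_rate - historic_rate - covid_rate:.1%}")  # 0.0% - it adds up
```

Which is the puzzle: if it adds up exactly, the neglect-and-lockdown deaths have to be hiding somewhere inside those same numbers.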

Evidence versus Evidence

Last post, the distinction between a claim and evidence for that claim was drawn. What was not stated, but I hope was evident, is that claims are cheap. Any claim carries, or should carry, little to no weight in and of itself. Only slightly less evident, I hope, was that evidence is almost as cheap as claims. There will almost always be evidence of some sort for just about any claim anyone makes. We all know or have heard about some guy who did this and lived, or did that and died! We have dramatic evidence – real evidence! – this saves lives, and that will kill you! For many claims, it’s not the lack of evidence that is the problem, but the quality of the evidence.

To continue with the previous example: I claim, like just about anyone in Western History who bothered to have an opinion on it, that heavy things fall faster than light things. My evidence: just drop a good-sized rock and a feather at the same time from the same height. Well? Doesn’t the rock reach the ground faster? You want more evidence? Try a leaf and a bowling ball, a piece of paper and a hammer – see? My claim holds true across a good range of heavy and light objects. You’d have to be a real nitpicker to argue with my claim, based on all the evidence I’ve presented.

Try a one-pound rock and a 5-pound rock, you say? Why? Isn’t my evidence good enough?

Big Rock.
Little Rock.

A couple of points here: Why would you argue about the speed at which things fall? A theory about how fast things fall just doesn’t figure into our lives very much, if ever. So, for most of us almost all the time, the feather/rock evidence we started with seems perfectly reasonable – and it is evidence! If this stuff were obvious – and important – people would have worked it out a lot earlier than the 17th century. But it is neither obvious nor important to almost everyone almost all the time.

If more or less convincing evidence can generally be found to support any claim, it should be clear that evidence for the opposite claim can just as easily be found. We are stuck, often, deciding between sets of contradictory evidence.

A feather does really fall more slowly than your average rock. That’s real evidence. But two rocks, say a 1 pound and a 10 pound rock, hit the ground at about the same time. That’s real evidence, too! So, what is it? Clearly, some objects fall faster than others, except when they don’t.
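
Since we’re being nitpickers: the reconciliation of the two pieces of evidence is air resistance, and you can watch it happen in a toy simulation. This is a minimal sketch with invented masses and drag coefficients, integrating a fall with quadratic drag over a 2-meter drop:

```python
def fall_time(mass_kg, drag_coeff, height_m=2.0, dt=1e-4, g=9.81):
    """Euler-integrate a fall with quadratic air drag: m dv/dt = m g - c v^2."""
    v = y = t = 0.0
    while y < height_m:
        v += (g - (drag_coeff / mass_kg) * v * v) * dt
        y += v * dt
        t += dt
    return t

# Invented-but-plausible numbers: drag dominates the feather, barely touches rocks.
print(f"feather (3 g, draggy): {fall_time(0.003, 0.01):.2f} s")   # ~1.29 s
print(f"1 lb rock (0.45 kg):   {fall_time(0.45, 0.001):.2f} s")   # ~0.64 s
print(f"10 lb rock (4.5 kg):   {fall_time(4.5, 0.001):.2f} s")    # ~0.64 s
```

The feather floats down late; the two rocks land in a near tie. Both observations are real evidence – the question was never which one to believe, but what the two together are telling you.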

The first question: Are we really interested in the answer? Or are we going to roll our eyes in disgust at all this nitpicking, and believe, in Morpheus’s memorable phrase, whatever we want to believe? Most investigations by most people seem to stop here.

Second question, if we soldier on: are we asking the right questions? What is it we really want to know?

We’ll take up the quality of the questions next.

Some Saturday Links:

Start with something fun: this is Leonid & Friends, the band formed by Leonid Vorobyev, a musician from Russia, upon his retirement a few years ago. I’ve linked in the past to their insanely excellent Chicago covers. Leonid seems to know a large number of incredible session players. This is an original.

Aside: Ksenia, the lead singer, and Igor, the (insanely great) drummer, were the inspiration for two bits of flash fiction: Pig Farmer 1 and 2. One of the first and most profound ‘life is not fair’ moments for me was learning, as a child, that physical beauty and talent correlate pretty highly with intellectual talent. We’d love to believe in the dumb jock or airhead actress stereotypes – and some do exist – but the reality is not that fair: the high school quarterback and prom queen are more likely to be intellectually gifted than the typical high schooler.

I mention this because Ksenia not only has an angelic voice and looks like what Barbie would want to look like if she had better taste, but she also speaks and sings in a bunch of languages, like English, Mandarin, and Italian, and is otherwise insanely accomplished. She’s not a native English speaker, but you’d be hard pressed to tell that from this song. Life is truly not fair.

On a less fun note, here is someone demonstrating how to get around the algorithms:

At least the videos are still up.

William Briggs was part of a planned online conference:

Conservative Catholics are readying for their Truth Over Fear Summit that will begin on Friday, Apr. 30 and extend through the weekend.

The event is described as “a three-day online gathering of 40+ frontline doctors, scientists, attorneys, researchers, and journalists, who will share invaluable and eye-opening insights into the truth behind the headlines, Covid-19, the rushed vaccine, and the Great Reset.”

Once they got the conference going, Kartra, the service they used, shut them down. Unannounced.

Apparently, as they were doing the summit on Friday, the host pulled the plug. Says the organizer: “Kartra killed the event—live—during the Q&A with Dr Scott Jensen, who is running for Governor of Minnesota.” Boom, gone. “Now I have 42,000+ people texting and emailing about what happened.”

So, actual credentialed experts want to discuss issues that fall within their areas of expertise – and that’s not allowed. Briggs says it’s been rescheduled for next weekend, but the Kartra account linked to has been deactivated.

Now, back to writing…

The Manhole Cover: A Feel for Numbers

It’s possible to be fairly proficient in solving equations yet have little or no feel for what numbers mean. In fact, it seems to be fairly common. Not as common as having neither the proficiency nor the feel, but still pretty common. Like Plato musing on virtue, I’m not sure having a feel for what numbers mean is something that can be taught. It certainly can be developed, if it exists, but I suspect it’s a bit like color-blindness: one simply can’t explain the difference between green and red to someone who can’t see it. But, fools rush in:

(Disclaimer: I am so slight a math guy that I’d probably flunk anything fancier than a high-school algebra test if I had to take it right now, and I belong nowhere near people with a truly well-developed feel for numbers. But I have a little, and it has served me well, so here goes.)

Say I make manhole covers: circular chunks of metal heavy enough to drive a truck over.

To keep the numbers super simple, let’s say I get an order for a huge cover 1 meter across. Its diameter is 1 meter, in other words. Let’s say I want to know how big around – the circumference – my manhole is going to be, once I make it. (1)

So, this being me, I first look up the formula, just to make sure: C = πd. The circumference equals the diameter times π. All that’s left is to plug in some numbers.

d = 1

π = 3.14159….

So, here’s where the feel for numbers comes in: Since π is an irrational number with an infinite number of decimal places, and, assuming I’m doing this by hand, I’m going to need to decide how many decimal places to use.

In other words, for my purposes, are 5 decimal places (.14159) enough? Too many? Off the cuff, I’d probably go ahead and use 3.14159, knowing that that’s way overkill for my purposes. I strongly suspect 3.142 would be plenty close. I much more than suspect it, since the answer to my question is obvious upon inspection: the circumference is 3.14159… meters.

Let’s break it down:

  • The ‘3’ gives me 3 meters. Whatever error I’m introducing by leaving off all the decimal places is less than a meter – just looking at it, it’s 0.14159 meters, in fact. But that would still be about a 14-centimeter shortfall, which is kind of a lot….
  • So I could go with 3.1. That gives me a 3.1 meter circumference, off by a little more than 4 centimeters (4.159 centimeters = a little over 1.6 inches). OK, so maybe a little more accurate than that?
  • Using 3.14, I get 3.14 meters, which is off by 1.59… millimeters – I’ll be short about 1/16th of an inch, if I round to 2 decimal places. That’s plenty close for a freaking manhole cover.
  • If, for some crazy reason, I’m milling this manhole cover on modern milling equipment, then I might use 3.1416 (rounding up) and thus be off by about 0.0003 inches – 3/10,000ths of an inch. Modern milling equipment can easily do that. Why they’d want to in this case is unclear. (The sketch below tallies these errors.)
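
If you’d rather let the machine do the eyeballing, here’s the same error table as a few lines of Python – nothing assumed beyond the post’s 1-meter diameter:

```python
import math

DIAMETER_M = 1.0  # the 1-meter manhole cover from the example

for approx in (3.0, 3.1, 3.14, 3.142, 3.1416):
    error_mm = abs(math.pi - approx) * DIAMETER_M * 1000  # circumference error
    print(f"pi ~= {approx:<6} -> off by {error_mm:8.3f} mm")
```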

My instincts – my feel for the numbers or, more essentially, for the problem I’m trying to answer – were that maybe 3 decimal places would be more than plenty. Then, just doing it, I find 2 decimal places are more than enough for all practical purposes. I’m making a manhole cover, for heaven’s sake!

Note: All this math is not what I actually do. None of this is conscious. I just look at the problem I’m trying to address, look at the formulas involved, and it’s usually pretty clear how exact I’ll need to be. Can I be wrong? OF COURSE! My instincts have been embarrassingly wrong once or twice. But contrast that with the vast amount of time saved by having a feel for what the answer should look like. Besides, errors in feel tend to reveal themselves almost instantly once you start working on the problem.

Super-trivial example. A big part of getting this example is recognizing that the math is simple, with no place for the numbers to ‘blow up’. The circumference gets bigger in a direct (linear) way as the diameter gets bigger. That’s all. So, those trailing decimal places aren’t going to cause something unexpected to happen. 3.14159 is going to be plenty accurate for just about all real-world needs, and way overkill for almost all of them.

Working with financial models, one does sometimes run into cases where numbers out 4 or 5 or more decimal places really do matter, as well as cases where tiny changes cause the model to blow up – where discontinuities arise. As you change one input by a tiny amount, the outputs likewise change by a tiny amount, until, suddenly, they don’t anymore. In practice, what I sometimes saw was some tax or accounting threshold got tripped, different rules suddenly applied, and so the results changed dramatically. There are also gotchas in the math itself, where tiny changes will trigger a bifurcation in possible results, where more than one answer ‘solves’ the problem. With practice, one can get a good feel for even these sorts of issues, but more important, a feel for when you’ve passed into No Man’s Land, and your intuitions are no longer trustworthy.
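
A toy version of that threshold effect, with a completely made-up tax rule (hypothetical numbers, not any real tax code): the output tracks the input smoothly until a one-cent move flips which rule applies.

```python
def after_tax_income(gross):
    # Hypothetical rule: at or above 100,000, a higher flat rate kicks in.
    rate = 0.30 if gross >= 100_000 else 0.25
    return gross * (1 - rate)

for gross in (99_999.00, 99_999.99, 100_000.00, 100_000.01):
    print(f"{gross:>12,.2f} -> {after_tax_income(gross):>12,.2f}")
# One cent of extra gross income moves the result by about 5,000 - a discontinuity.
```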

Those are extreme cases, in this context. Just as the problems people have understanding science rarely involve highly technical issues but rather basic failures in expectations and logic, most errors in assessing math seem to be much, much simpler even than this manhole cover example. If you add a number in the hundreds to a number in the thousands, your answer cannot be in the billions. Something may happen a million times, but if there are trillions of occasions when it *might* happen, it could still be an unlikely event in any one case. And so on.
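
Both of those sanity checks fit in four lines:

```python
hundreds, thousands = 800, 7_000                  # arbitrary examples
print(hundreds + thousands)                       # 7800 - nowhere near billions

events, occasions = 1_000_000, 1_000_000_000_000  # a million hits, a trillion tries
print(f"chance on any one occasion: {events / occasions:.6%}")  # 0.000100%
```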

This, unfortunately, needs expansion. As time permits.

  1. In the real world, assuming this is a custom manhole cover and not a mere run-of-the-mill standard one, what I’d really do: make a 1/2 meter jig for my plasma torch, lay out the material, select a center point, and cut it out. Then, grab a grinder and clean it up. Or, better, slap the material on a CNC cutter, and push ‘go’. IOWs, I’m unlikely to care about the circumference, as it doesn’t figure into the process of making the cover. Sorry, just nerding out here…

Working

How could anyone fall for such obvious nonsense? This question, in various forms, some much less polite, has been nagging at us for decades now. Standard answers to particular incarnations of these questions have been formulated: Marxism is a revenge fantasy for people with daddy issues; years of government training produces mindless sheep by design; participation trophy culture teaches sticking to your group *is* the achievement; theories by which any personal lack of achievement or feelings of inadequacy are conclusively presumed to be somebody else’s fault appeal to many, especially grown children of divorce.

This morning, adding another divide: how you think of work. Up until about 1900, half or more of Americans lived on farms or at least in rural communities. On a farm, there is near-instant feedback on many of your efforts. Didn’t feed the chickens and gather the eggs? The results of that failure will soon come home to roost. Labors and the outcomes of those labors were spread across a range of timeframes: it might take an hour to eat the green beans you just picked; a month to see what you planted growing in the garden; a season to harvest the wheat; a couple years to finally plant the bottom land you spent a couple years clearing; and 5 years or more before that vineyard and orchard start producing in volume.

While farming isn’t unique in this regard – any real craftsmanship has similar effort and payoff timeframes – it was formative for many millions of Americans for 200+ years. Even if we never set foot on a farm, chances are we lived among relatives who did or used to, so that the farmer’s instincts about work were something we all, or almost all, absorbed to some degree at least.

A farmer knows:

  • Many things figure into the outcome of my efforts. Some I can control, some not.
  • The number 1 thing I can control is my efforts.
  • The number 2 thing I can control is my skill level.
  • Diligent application of effort and skill tremendously improve my chances of a good outcome.
  • No effort and no skill all but guarantee a bad outcome.
  • Try as I might, sometimes things don’t work out as planned.
  • Sometimes, you get lucky. Don’t count on it.

We’ve replaced the near-universal experience of farm life with the near-universal experience of compulsory graded classroom schooling. Farmers saw, moment by moment, year by year, the direct relationship between their effort and skill and the quality of their lives. Sure, the world was then as it is now, unpredictable – unfair, one might even say – such that the race doesn’t always go to the swift, and so on. But the general pattern was unmistakable: the industrious and skillful did better, in the long run, than the lazy and stupid, in what seemed like a pretty direct proportion to how industrious and skillful one was.

Things could and did often go wrong: the rains didn’t come, or came too much, or came at the wrong time; the horse pulled up lame; bugs ate the turnips; somebody got sick and died. Even the most industrious and skillful farmer could get wiped out by disasters out of his control. For centuries, in America at least, the most common attitude seems to have been: Stuff happens. Keep your head down, say your prayers, and keep working. Keeping on is what a man or a woman worthy of the name does. (1)

I have mentioned here the big bait and switch of public education. Reaching prominence in the late 19th century as championed by William Torrey Harris, and not finally ending until the 1960s under the influence of John Dewey, the sales pitch for compulsory public schools included the claim that kids – the smart ones, anyway – would need a serious education at least through high school. The key feature of this new educational standard was that Mom and Dad and the nice young lady teaching in the local one-room schoolhouse would not be able to deliver it. Nope, only highly trained and skilled teachers processed through the Normal Schools could teach all that Greek, Latin, Calculus, and Science little Eta and Ira were going to need to – work in a Ford factory? 16 years after Harris was outlining his ideal curriculum, Woodrow Wilson was telling the New York City School Teachers Association:

We want one class to have a liberal education. We want another class, a very much larger class of necessity, to forgo the privilege of a liberal education and fit themselves to perform specific difficult manual tasks.

So the idea that America needed standardized, highly-trained teachers in order to produce these excellent little Hegelians was ignored by the President of Princeton when talking to these highly trained teachers, in favor of producing plenty of obedient manual workers. Inside, this is how the higher-level drones talk; the rhetoric we little people were until recently subjected to still lightly echoed Harris. Now, of course, it’s Dewey (and Freire) all the way.

Harris thought What America Needed was a bunch of well-trained Hegelians to Move Us Forward as the Spirit Unfolded Itself Through History. He was not a very practical man. Dewey, a huge fan of the Russian Revolution and Marx, stood Harris’s Hegelianism on its head, and preached a kinder, gentler education jail that would leave students stupid and compliant. (2) That’s the model we’ve been implementing since the 1960s at least.

Initially, public schools that in fact aspired to Harris’s ideal level of 1–12 education were created; many Catholic parochial schools attempted to follow suit. High schools – a few, at least – were becoming prep schools for admission to Harvard. The small minority of kids who did successfully attend these schools got an education that makes a modern Master’s degree in most fields look like finger painting by comparison.

Farmers were convinced or mollified by the claim that these modern consolidated schools were teaching the sort of things a kid needed to learn for the brave new world they would be facing. Once the Depression and the Dust Bowl and the invention of the school bus wiped out the already-dying one-room schools, the one last public competitor to the One Best System For All that Fichte, Pestalozzi, Mann, etc. dreamed of imposing was removed. The foot came off the gas, although the momentum seems to have been enough to coast through the next 30 years while still maintaining the pretext that schools were intended to make everybody elite. (One of the reasons I love Have Space Suit Will Travel is Heinlein’s brutal takedown of Kip’s public high school – written in 1958. The mask had already slipped.)

My parents were born in 1917 and 1919; dad grew up on a farm in Oklahoma, mom among Czech farmers in East Texas – although her dad was a sheet metal guy. Both had that ‘just do it’ attitude about work. Dad, who started his own sheet metal fabrication company at age 45, would remind us kids that 10-12 hours a day in the shop were still far better than farming in Oklahoma. They were a part of that huge wave of country kids who moved to the city. (My parents moved to SoCal – thank you, Lord!). I, sadly, saw but did not experience the farmer’s work ethic and feedback loop. By farming standards, I’m incredibly lazy – yet considered some sort of high-energy output machine by some of my friends. Even a little taste, it seems, leaves its mark.

It’s no coincidence that the core employees at my dad’s shop were escaped hillbillies and immigrants from Mexico. Billy Joe and Delbert and Juan and Jose (who went by John and Little Joe, Jose being the smallest of the 3 Joes working there) shared my dad’s Just Do It farm boy approach.

Meanwhile, kids attending school succeeded by doing what they were told and regurgitating on command. When I was a kid in the 60s, it was still possible to achieve some limited objective success around the edges of school – sports did not yet hand out participation trophies; you could objectively win a pinewood derby. But, in general, there were even then no real objective measures of success within school. Indeed, real success was denigrated: we were supposed to learn to read, but, if you did, your reward was to sit in class bored out of your mind while every other kid learned to read. Clearly, ‘group cohesion’ trumped any actual achievement. Same with math, writing, EVERYTHING: one earned suspicion and soul-destroying boredom by actually promptly learning anything at school.

I have a bunch of hobbies which produce concrete results: I build stuff out of wood and bricks. As I type, I am surrounded by things I have made with my own two hands. Meanwhile, I was possibly the worst student you will ever meet. Wish I could say I was a rebel, but honestly, I was a passive-aggressive coward, constantly testing the limits of how little work one could do in school without getting into serious trouble (ans: very, very little). Unfortunately, this has led to my own underappreciation of mental work. Writing is just barely becoming ‘work’ to me.

But what if that’s all you’ve got? I’m thinking of several acquaintances from college who, in the unlikely event they were ever to do a physical project, would feel like brave adventurers on an anthropological expedition. Let’s go experience what it’s like for the little people! They saved their papers and projects from their school years as proud and admirable work products, proof that they are ‘accomplished’. Certainly, in the eyes of the school, anything else they accomplish outside school is a hobby, in no way comparable to their ‘achievements’ in school.

Here’s the distinction: I look at the dining room table I’m sitting at as I type, the brick pizza oven I built, the shed, playhouse, bookcases, fences I put up. I take a daily tour of the fruit trees I’ve planted and the garden I’ve put in. None of these things are masterpieces, some are borderline junk – but I don’t need anyone’s approval for their base existence. They speak for themselves; good, bad, or indifferent, I made them. The works of my hands, however humble, have given me more pleasure and satisfaction than any desk job or scholarly achievement. I’m primitive that way.

Meanwhile, how does a good student know they are a good student? In what sense can one be, objectively, a good student, and how does this sense line up with what it means to be a good student in the eyes of the schools? Do their papers and test scores speak for themselves?

When I look at the penultimate former president, the glorious Light Bringer, that toward which our age aspires and from which its self-image flows, I see someone whose measure of success is simply the approval of others – others who can’t help but disparage and despise those who disapprove of him. When I think of the people he grew up around – his academic commie mom and her commie parents and the sort of crowd they would hang with – I can see O getting patted on the head and told what a good, smart boy he is, at the same time he’s cycling through fathers and father figures who can hardly be troubled to stay in touch, and a mom who does her thing without any apparent regard for what her own son wants or needs.

Whenever I’ve been part of a voluntary work party at schools or church, a critical part of how successful it has been at getting any work done is how well organized it is: are there lists of clearly-defined tasks? Some method of assigning them? Somebody who can answer questions? Lacking this management structure, work days in my experience devolve into a bunch of people standing around and a few people working. As a people, it’s not just that we don’t seem to know what to do unless somebody tells us what to do; it’s that we don’t know, on a pretty deep level, if we’ve even done something unless we get that pat on the head, that gold star, that participation trophy.

  1. All that said, if we are to accept the results of the votes of people’s feet, farming sucks, at least compared to other options. Given the option, the children of farmers have voted overwhelmingly for city life, factory and office work, and an apartment in the city or house in the suburbs.
  2. I haven’t read anywhere of Dewey formalizing this goal – that perhaps had to wait for Freire, another huge fan of Marx and author of texts used in Ed Schools for 50 years now. Freire says that there is no point to an education that does not radicalize the students. Reading, writing, math are a distraction from the goal of overthrowing the System, man.

Review of the Introduction to a Book

No, really. I have a weird habit, at least for my household – I almost always read all the front matter in whatever books I read. (Sometimes, as in some philosophical works, I might read it after I’ve read the work itself, if I suspect it will bias my reading – but I’ll almost always read it.) My wife, who is as OK-read (well-read is maybe a stretch for me) as I am, pretty much never does this; neither do my kids, as far as I know. You guys? Skip to the good parts, or slog through the front matter first?

For my White Handled Blade series (yea, yea, gotta write Book 1 before you can have a series, I get it) I’m reading Sir Gawain and the Green Knight, a copy of which was in the stacks. Not sure how I had not run across Burton Raffel before – or, if I did, how he failed to make an impression – but this guy, a famous translator, is a character and a half. His Introduction is about as bare-knuckles an assessment of his ‘competitors’ as I’ve ever seen. Reminded me of the old saying in academia: never is the fight more brutal than when the stakes are really low.

To sum up: he’s not real impressed by the work of other translators and commentators of Green Knight. Here are some samples, from his 31 page introduction to a 75 page poem:

It is no defense of Tolkien, Gordon, and Davis, but most literary criticism of medieval poetry suffers from just this kind of “lengthy, mostly irrelevant” insensitivity to the poem as a poem.

p. 34

Sure. A little further, Raffel goes after scholarly critics as a group, giving examples of scholarly assertions contradicted by text within a few lines of the source of the initial assertions:

But the critics’ attention span is somehow limited by their scholarship, or alternatively by their desire to assert some interpretive claim.

p. 35

He also makes charming, look-how-smart-I-am observations, such as this, after his analysis of how the Poet portrays what is going on in Gawain’s mind and soul during his temptation by the Green Knight’s wife, and how it does not admit of the simple linear understanding we moderns might be tempted to impress on it:

The Gawain Poet plainly knows this, and just as plainly knows his Hegel-like perception of the antithesis concealed within the synthesis is the only sane way to see things. And he is phenomenally sane. He is, in fact, so powerful a literary mind that what could be a mere matter of philosophy, with a lesser writer, is transformed for him into a vital matter of literary technique.

p. 29

O, come on, Burtie, old boy, you’re just yanking chains! Don’t make me slap that smirk off your elderly (well, dead, now) face.

The whole Introduction provides the kind of background information that is the reason I read introductions, so that’s good, interspersed with patches that read like Raffel getting even with people or calling them idiots. He kindly allows of one scholar’s work that “…it is not all as bad as the passages I have cited.” At another scholar’s assertions, Raffel says: “I can gape: where has the man been?” Or that the “weird” analysis of another is revealed in a passage “which speaks, unfortunately, for itself.” Did somebody steal his lunch money, or something?

The Introduction was certainly entertaining, the poem itself is wonderful, and I appreciate Raffel’s guidance in taking it seriously as a masterpiece – hard to do with the rather less structured? Logical? bits of Arthuriana I’ve read so far. Maybe I’ll review it when I’m done.

Obvious, Sublime, Ridiculous

Roundup/update:

A. AI is fundamentally a model of how humans think. It has to be, because the only example of ‘intelligence’ with which we are familiar is human intelligence. (The same can be said of the concept of ‘artificial.’) As a model, AI is going to tell us what we tell it to tell us. It simply can’t do otherwise. People who understand how models really work understand this limitation – it is obvious.

Concern over AI getting too intelligent and deciding it doesn’t need us puny humans any more is misdirected. The idea that an independent meta-human intelligence will arise, Athena-like, as an emergent property from anything we can build is fantasy. Our idea of meta-intelligence is as limited as our idea of Superman: just as Superman is, fundamentally, a man, just stronger, faster, and incorporating better versions of human tech (laser eyeballs, flight), an AI is – must be! – imagined to be fundamentally human intelligence, only more so – faster, able to process more data at a pop, able to draw connections and conclusions farther and faster. And even this remains fantasy – we have no idea how all this works, but since it does in humans, it must work in our model! The dogma that the human mind simply is a machine demands it.

Putting these two ideas together and acknowledging the limitation inherent in them: What AI may eventually produce is a very fast, very large process that will – must! – be a model of intelligence and the world as the model builders imagine those things to be. AI will produce what its builders tell it to produce.

What we need to be concerned with, then, is not some imagined mysterious, emergent power of AI that no one can control or predict; what we need to be concerned with is what the builders of AI believe and want. That’s what AI will give us. It will give us nothing else. The surprise will be for the builders, as AI demonstrates what they, the builders, truly believe and want.

Leslie Nielsen? The AI running Robbie the Robot seems very human in this classic retelling of Shakespeare’s The Tempest.
How did Anne Francis never get cast as Catwoman? Where was I? Oh, yea, AI…

B. In traditional – by which I mean obsolete – warfare, an aircraft carrier is the bee’s knees: one modern carrier projects force like nobody’s business. Trouble is, those suckers are expensive: the USS Gerald R. Ford ran a sweet $13 billion to build. And, to make matters worse, a single cruise missile can sink one – Tomahawk cruise missiles, for example, only cost $1.9 million each. You could determine that you needed to launch 1,000 cruise missiles at the Gerald R. Ford to make sure one got through to sink it – and have spent less than 15% of the cost of the carrier to eliminate it. And there are other ways of taking out carriers, such as submarine attack, which are similarly cheaper than building one in the first place.

Knowing this, no carriers go gallivanting about unaccompanied. Carriers travel in carrier groups, which include destroyers, frigates, a guided missile cruiser, sometimes submarines – which, all in, will run you $20-$30 billion per group to build, and billions more per year to operate. The main goal of the carrier group is to keep the carrier from getting sunk. So, now, you’ve invested $20-$30 billion, plus billions more per year in operating costs, just to be able to project force along the world’s coasts.

If you wanted to sink a carrier, and had 1,000 cruise missiles at your disposal, and the carrier group was an astounding 99.9% effective in stopping those cruise missiles – you win. But it’s way worse than that:

“The exercise was called Millennium Challenge 2002,” Blake Stilwell wrote for We Are the Mighty.

It was designed by the Joint Forces Command over the course of two years. It had 13,500 participants, numerous live and simulated training sites, and was supposed to pit an Iran-like Middle Eastern country against the U.S. military, which would be fielding advanced technology it didn’t plan to implement until five years later.

The war game would begin with a forced-entry exercise that included the 82nd Airborne and the 1st Marine Division. When the blue forces issued a surrender ultimatum, Van Riper, commanding the red forces, turned them down. Since the Bush Doctrine of the period included preemptive strikes against perceived enemies, Van Riper knew the blue forces would be coming for him. And they did.

But the three-star general didn’t spend 41 years in the Marine Corps by being timid. As soon as the Navy was beyond the point of no return, he hit them and hit them hard. Missiles from land-based units, civilian boats, and low-flying planes tore through the fleet as explosive-ladened speedboats decimated the Navy using suicide tactics. His code to initiate the attack was a coded message sent from the minarets of mosques at the call to prayer.

In less than 10 minutes, the whole thing was over and Lt. Gen. Paul Van Riper was victorious.

Micah Zenko provided some context in a piece for War on the Rocks. “The impact of the [opposing force’s] ability to render a U.S. carrier battle group — the centerpiece of the U.S. Navy — militarily worthless stunned most of the MC ’02 participants.”

from National Interest, Oct 15, 2019

So, in a war game, a Marine general was given the resources of an Iran-equivalent power and told to take on the combined might of a large chunk of the US Navy – and, using the few missiles at his disposal, plus suicide speedboats and civilian boats and aircraft, took them out in 10 minutes.
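
The arithmetic behind both the sticker shock and the war game, as a sketch – the unit costs are the post’s figures, and treating each intercept as an independent 99.9% coin flip is an idealizing assumption:

```python
carrier_cost = 13e9    # USS Gerald R. Ford build cost, per the post
missile_cost = 1.9e6   # Tomahawk unit cost, per the post
salvo = 1_000

print(f"salvo cost vs carrier cost: {salvo * missile_cost / carrier_cost:.1%}")  # ~14.6%

p_intercept = 0.999    # astoundingly good per-missile defense
p_leak = 1 - p_intercept ** salvo
print(f"P(at least one gets through): {p_leak:.1%}")  # ~63.2% - the attacker still wins
```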

Lt. Gen Paul Van Riper. For real. Damn. My only issue with this: nowhere I can find listed among General Van Riper’s assets ‘armored battle goats’. Because – well, because. As hard as it is to imagine, he somehow won without them.

Um, oops. As Sun Tzu so aptly put it: to know your enemy, you must become your enemy.

No reason I’m thinking about this. What could possibly go wrong? I’m sure our current president, what with his razor sharp intellect and surrounded as he is by Top Men Humanoids, has this sort of thing completely under control, no matter who the enemy might turn out to be in this best of all possible worlds.


C. Been under the weather due to circumstances well within my control that I, nevertheless, failed to control. Something about making sure prescriptions got filled before health plans flipped. Dolly Parton once quipped: “It takes a lot of money to look this cheap.” Does it take a lot of brains to be this stupid? No, I think I just have a talent for it.

But much better now! Will get back to the writing soon. No, really! Haven’t totally neglected it, but not going gangbusters, either.

D. Looking like we might have an epic fruit season out in the front yard micro-orchard. This past winter, I was better about clean-up, trimming, fertilizing, and spraying copper fungicide. Also watering a bit more, as we only had 40% of average rainfall this season:

  • Fig tree has lots of breba figs on it
  • Cherry tree has several times as many cherries as last year
  • Pomegranate just starting to bloom, looking beautiful
  • Our latest additions, two blueberry bushes, seem to be doing well – one is covered in fruit and blossoms, the other has less but is growing vigorously
Blueberries.
  • 4-in-1 pear tree, devastated last season by that loathsome leaf curl fungus, is now looking pretty good, with way, way too much fruit setting – I’m going to need to thin by about 80%!
  • My two little peach trees are doing well. Last year, one caught the leaf curl from the pear tree next to it, and lost all its fruit and leaves, but recovered enough to put out enough leaves to survive – it actually looks good, and has a fair amount of fruit on it. The other peach, a dwarf variety, is insane:
This picture doesn’t even capture how much fruit is packed onto these little branches. I’m thinning as I go, need to take more than half of them off.
  • Apricots are doing very well, too

The nicest thing: the Minneola tree our late son Andrew grew from a seed as a child is, for the first time, covered in blossoms:

You can kind of see it.

This tree is over 15 years old. Last year was the best ever – about a dozen fruit. Now, if even 10% of the blossoms set fruit, we’re looking at many dozens. The fruit is good, nice and sweet.

Andrew wrote a poem about it (it was presumed to be an orange tree at the time):

My Orange Tree by the Wall
by Andrew Moore

My orange tree by the wall
For many a spring and fall
Has grown and grown and grown
And done nothing much else at all

But then in spring one day
I shout ‘hip hip hooray!’
For blossoms it shows me
And oranges it grows me
For many a long summer day

E. Further updates as events warrant.

A Few Threads

Returning to a topic discussed previously:

The unexamined acceptance of the inevitability of Progress as an obvious, unassailable fact is under discussion at Rotten Chestnuts. Starting with the Enlightenment, the notion that Change, in the form of Progress, is, so to speak, the only constant, took over polite society. So understood, Progress is not, in any rational sense, a conclusion. Progress can only be a framing device, a filter, a way to pre-process information.

It might seem odd that an age that produced wave after wave of increasingly insane skepticism about just about everything would accept and vigorously promote as obvious the notion that Progress is a positive force governing Human Development through History. Descartes claims to doubt everything except his own existence; Hume claims to doubt cause and effect; Kant throws out the entire idea anyone can know anything about objective reality (although he says he doesn’t – he says a lot of contradictory things); Fichte simply states that all reality is subjective; Hegel denies the law of non-contradiction and all logic while claiming to be ‘scientific’.

John C. Wright speaks of how unserious philosophy became starting with the Enlightenment. A Socrates might die for his philosophy; a St. Thomas Aquinas teaches that it is in fact necessary to be willing to die for a correct philosophy. Hume famously decides to go shoot some billiards when it all becomes too much. How would anyone from Descartes on know that dying for one’s philosophy is a good thing? Severian has a page dedicated to the worst argument in the world, of which there are many variations sharing the same skeleton. This argument boils down to: we cannot know anything about things in themselves.

Yet we are to assume universal Progress, except insofar as reactionaries of one flavor or another have temporarily turned back the clock on the wrong side of History.

Here’s the thing: the only area where it can be confidently asserted that humanity has steadily progressed over the last, say, 1,000 years, is technology. Technology is undoubtedly better today than it was 10 years ago; it was better 10 years ago than it was 20 years ago; and so on, back to maybe 900 AD in the West.

Everything else? People can and have made arguments in favor of the following examples, but – clear? Beyond dispute?

  • Government “progressed” from a peak of some semblance of liberal democracy to – Pol Pot? Stalin? Mao? That’s progress?
  • Art “progressed” from Rafael to Pollock? Let alone a crucifix in a jar of urine?
  • Architecture “progressed” from Gothic to Brutalism?

And so on. Sure, there are reasonable people who will argue that Van Gogh is an improvement on Bouguereau, but they’re basically arguing on taste alone. On every technical and aesthetic basis, Bouguereau is the superior artist (and I love Van Gogh!). There are people – damaged, sad people, for the most part – who will and have argued that Brutalist architecture is superior to Gothic. There is no aesthetic or technical basis for such a claim. Rather, it seems that Progress, acting as filter, simply demands that the products of modern minds are definitionally better than the products of less progressive minds.

So, one might imagine the great Enlightenment philosophies would start with technology as the basis for their claims. There is quite a bit of that early on, as where Francis Bacon says:

I am come in very truth leading to you Nature with all her children to bind her to your service and make her your slave. … [S]o may I succeed in my only earthly wish, namely to stretch the deplorably narrow limits of man’s dominion over the universe to their promised bounds.

Francis Bacon, The Masculine Birth of Time, ch. 1. (from Mike Flynn’s essay on the Masque of Science, which you all would be better off reading instead of this post.)

Bacon wants to put science – materialist science as he understood it – in the driver’s seat for pretty much all human activities. The distinction we sometimes make between science and technology seems less clear here. Nature was something to be conquered and put to use by man. In this sense, science – the study of nature in order to understand it – and technology – using that scientific knowledge to conquer and control nature – are separate only in concept: for Bacon, it would be pointless to talk of one independent of the other.

So: Bacon saw himself and other natural philosophers (scientists) as clearly progressing from his (weirdly caricatured?) Aristotle to the starting line of modern science. Bacon saw his efforts as the beginning of the true program of science – understanding nature so as to control it – with nothing but Progress from there on out indefinitely.

And progress was made – eventually. Bacon lived in the late 16th and early 17th century. Life expectancy in England was around 35 (dragged down by high infant and child mortality) in 1600. As a result of the Bacon-led scientific and technological revolution, life expectancy shot all the way up to around 40 – after a mere 200 years. (The population in England in 1600 is estimated to have been about 85% of what it had been during the high middle ages 250 years earlier, before plague, famine, and increasing political unrest cut it by around 60%. It nearly doubled from 1600 to 1800, to about 50% larger than it had been in 1290.)

Maybe this conquest of Nature thing and all the improvements to human life that would follow upon it wasn’t so obvious to the little people? Who seemed to be dying as readily as before, up until the late 1700s, at any rate? But it was very striking to the better off, who could not get over it. Still can’t. Of course, technological progress kicked in like crazy once the 19th century got going, and life expectancies began to rise, to around 50 by 1900 to around 80 by 2000. That’s progress anyone who prefers not to be dead can readily see.

Our self-appointed betters seemed to have extrapolated from technological improvements, and made the categorical error of thinking that the obvious progress in technology proved that other fields, such as politics and philosophy, must also have made similar progress. Hegel, who lived from 1770 to 1831 in what was at the time the most technologically advanced culture on earth, went so far as to write a book telling us that logic, as that term was understood by everyone else, had failed to progress and was therefore clearly insufficient. Logic had remained essentially unchanged since Aristotle, unlike all other fields (besides basic arithmetic and geometry, ethics, and writing – he doesn’t mention those, IIRC), and therefore, by that fact alone, was no longer valid.

Savor this: classic Aristotelian logic, the application of which was at the core of all the scientific and technological progress made since Bacon, needed to be rejected – OK, suspended in a dialectical synthesis, which, practically, means rejected – because, and solely because, it had not changed in 2500 years. The only unalloyed and inescapable support for the notion of Progress – technology – is to be rejected – in the name of Progress.

Hegel was aware that all technology and science depended on exactly the logic he had just discarded. He graciously allows that old-timey logic might be important and useful to the little people – mathematicians, scientists, technologists – but was certainly nothing a *real* philosopher need concern himself with. Law of non-contradiction? Out! Logical arguments? Beneath a real philosopher’s dignity. Only the calculated incoherence of Hegel and those wise and enlightened souls who, naturally, agreed with Hegel, need be considered.

From this it follows, naturally, that 2+2 can indeed equal 5, if such is required by *real* philosophers like Hegel. Motte and Bailey. Progress is obvious to everyone! You doubt our latest developments in Critical Theory mark the inexorable march of Progress? What? You want to go back to living in the Dark Ages, you moron?

Thus, a priori, any information that might cast a shadow on the notion that we all live right now in the Best of All Possible Worlds, until dawn tomorrow reveals an even better best, is right out. Only a reactionary Luddite would dare mention how all this Progress has some downsides, how it might even lead to something undesirable. Even worse are those (me, I hope) who reject and mock the very idea that Progress stands athwart the modern world, no feet of clay anywhere to be seen!

Enough!