Elite Certification: A Good Thing?

Pondering our certification culture. We certify everything from doctors and lawyers on one end to cosmeticians and astrologers on the other. All of this certification is putatively to protect us from ourselves, which, in itself, cannot but infantilize us. Certification also is supposed to enforce standards, such that, if I go to a certified accountant or licensed surgeon, I expect some basic standards to be met.

It should be clear that certification in itself tells us nothing about the desirability or wisdom of those standards. Both chiropractors and medical doctors are certified and licensed, yet they hold to often contradictory and antagonistic principles and practices – they can’t both be right, although they can certainly both be wrong. Certified astrologers are held to standards as well, one supposes, although one also supposes those standards have nothing to do with anything happening in the real world. But then, we Pisces tend to be skeptical…

Let us leave the fringe cases and turn to the strongest ones. One thing doctors and lawyers share historically is low public esteem.

Now there was a woman who had been suffering from hemorrhages for twelve years; and though she had spent all she had on physicians, no one could cure her. 

Luke 8:43-48

The unfortunate woman in Luke is not an exception. The poor could not afford doctors; the desperate rich were reliably cured of some or all of their wealth, if not their physical disorders. Where I live, there’s a park preserving the large house of one of the earliest settlers to the area. He had allegedly received some 19th century medical training back East, making him the closest thing to a doctor for many miles around, and so he became the go-to guy for health issues in that rough and tumble period. He charged 1 cowhide, upfront, before he’d look at a patient, and was not apparently very much inclined to pro-bono work. He ended up with a nice big house on a nice big ranch. I doubt he had a sterling reputation among the many.

Keep in mind that these were not modern people, who seem to believe the medical profession can and should save them from every sickness and danger. No, before the last 50 years or so, people seemed to understand that bad things happen, everyone dies, and doctors can be hoped to help, but there are no guarantees. It is only since the 1950s, for example, that going to a hospital when seriously ill would generally improve your chances of survival. Before that? Pretty much hit or miss. Before 1900, pretty much miss. When our not too distant ancestors heard of somebody undergoing treatment at a hospital and coming out alive, let alone cured, that was a sensation. When somebody went to a hospital and died, that was just life – especially since all but the rich wouldn’t even think of going until they were on the verge of death anyway.

Then, confirmation bias kicks in: the stories of cures at the hands of doctors are given great weight; the inevitable deaths are dismissed as just the way things go. The point here: it is only in modern times that being a doctor became a generally respected occupation. In the middle ages, surgery was something the local barber did; the distinction (if any) between medical care and witchcraft is a fairly modern thing, and, sadly, not clear to much of the population even now. Through most of history, an experienced doctor was a big help in setting bones and treating wounds. Check this out, for example. Otherwise? Big maybe. The general impression one gets when reading literature or history from anywhere: doctors are most often portrayed as money grubbing shysters.

The low esteem in which lawyers are even now held by the public hardly needs mentioning. It has always been thus. The sophists of the golden age of Greece were training up what we might call lawyers – masters of rhetoric and public speaking, who used their skills to gain power and manipulate people and institutions. Socrates and Plato loathed such men; I would imagine common citizens could be counted on to loathe them as well. (1)

Obviously, individual doctors and lawyers can be good people. I’m here describing what might be called a marketing problem: enough people have bad enough experiences with doctors and lawyers, historically, at least, that doctors and lawyers are held at least in suspicion, if not out and out distrust.

Enter certification and licensing. From a strictly business point of view, it is important for doctors and lawyers to calm public fears about their competence and trustworthiness. As late as the 1870s, few US doctors were licensed; as late as the 1930s, medical ‘diploma mills’ were still in operation. Gradually, doctors became one of the most highly regulated professions, if not the most. Today, a doctor must get a degree from a highly regulated med school, pass a state licensing requirement, and then pass boards in any specialties he’d like to practice.

It is amusing – to me, at least – to note that all these regulation and training requirements trail overall improvements in public health. In 1900, a man could expect to live about 49 years, on average, up about 10 years from 1860. While medical care may have improved over those 40 years, that period also corresponds to a massive move from the country to towns and cities. By 1900, about 50% of everybody no longer lived on farms. Farm work, especially when using animals as muscle, is very arduous and dangerous. Horses, cows, pigs can kill you. Having to perform the brutal physical labor to plant, plow, and harvest regardless of health takes a toll, a toll expressed in a much lower life expectancy. Life expectancy has increased in America as safer, less physically demanding work has replaced farming, and machines have replaced animals for farm work.

Medicine has been bringing up the rear on these trends, for the most part, for the last 150-200 years. Vaccines and antibiotics extended the lives of many millions, but would hardly make a difference if sufficient food, water, and sanitation were not also available. Heart and cancer treatment advances largely apply to the elderly, who are the majority of the sufferers and who simply weren’t there in comparable numbers 100 years ago. Medicine, like formal education, seems to be a result rather than a cause of increasing wealth.

Many people profoundly mistrust conventional medicine. (Note: I personally don’t so much mistrust modern medicine as I like to take a look at the evidence for myself. In general, I’m willing to go with what my doctors say I ought to do almost all the time. It’s not automatic, though.) That’s why homeopathy, chiropractic, and other practices have their millions of devoted followers. These are not stupid or unusually gullible people – the medical profession has earned their mistrust, and there’s plenty of anecdotal evidence to support these practices. (2) No science, as far as I can tell, but that matters little to people when somebody they know personally tells them of their wonderful experiences.

From a purely business point of view, the willingness of people to try all sorts of cures and to distrust doctors is a major problem to be solved – for the highly-trained doctors. If I’m going to spend years and a fortune getting through medical school, I’m going to need to convince people to pay me, and not that snake oil salesman! I must assume and defend my professional dignity, and find a way to denigrate the competition. Licensing creates the desired division: respectable, trained, competent doctors are *certified*; all others are frauds. That’s the marketing message, at least.

In a similar way, a lawyer wants to claim the aura of respect surrounding the never-went-to-law-school lawyer Abraham Lincoln, while at the same time embracing a licensing scheme designed to keep the likes of Honest Abe out of the profession. Both lawyers and doctors tend to be rather fiercely protective of their professional designation – doctors want to be called *Doctor*; lawyers insist on being treated with the respect presumed to be due to an *esquire*.

Of course, licensing is inevitably presented as something done, not to suppress competition and aid the professions in their quest for prestige and money, but to help and protect ‘the public’. The public is treated as a bunch of children, unable to look after themselves. While this may be true – that many people are gullible rubes – it’s not clear that a) lawyers and doctors are not equally likely to be gullible rubes themselves, and b) that the practice of licensing, especially when the state gets involved and is used to suppress competition, isn’t an ultimately irresistible temptation to abuse. In other words, I’m assuming doctors, lawyers, and other high-end professionals remain of the same species as the rest of us, subject to the same temptations and failings.

I expect that many, if not most, people would by now be horrified: I’m suggesting we might be better off without licensing requirements for doctors and lawyers? Am I a madman? First off, what I would suggest is separation of professions and state: the guilds can do what they want, as far as creating all sorts of merit badges and participation trophies – and the public gets to decide how much weight to give them. If an individual wants to hire only doctors who have all the approvals of the guild, or wants to hire a certified lawyer, or would rather base his decision on whether or not that lawyer has a track record with the issues that made him want to hire a lawyer in the first place – OK. Newsflash: this is what people are doing anyway. On the doctor side, there are many homeopaths and chiropractors doing solid business; Whole Earth panders to those who think probiotics and organic food are going to heal them. Lawyers get hired by reputation or recommendation.

I repeat that I’m using lawyers and doctors as examples here, because they represent the most elite certified professions. This argument applies even more so to the more pointlessly certified. If you got the state out of the certification business, and instead let the guilds develop their own practices however they like but unenforced by the state, then people would be treated like grown-ups who can make up their own minds, rather than children who need to be protected from themselves.

The underlying problem here is the inversion of cause and effect: a world increasingly set up on the assumption that we need to be protected from ourselves creates children who never grow up. Before this eternal infantilization can be changed, we must stop reinforcing it. It is good to remember that people remain people – a situation no amount of certification can change. If we need protection from ourselves, so would doctors and lawyers. Quis custodiet ipsos custodes?

Appendix (ha!) – While searching around for some materials on this topic, I came across this article from Stanford:

Licensing boom: In 1950, 73 occupations required licenses in one or more states. By 1970 that number had grown to more than 500. | Reuters/Athit Perawongmetha

from the caption to a picture accompanying the article

It’s illegal to practice medicine without a license, and that piece of paper is exceedingly hard to come by. Would-be doctors face more than a decade of training and must pass rigorous board exams. Thanks to that high bar and the steep up-front ante, there are almost no quacks in American medicine today. That’s a comforting thought when you’re sick and need to see an unfamiliar physician.

So, naturally, we take it for granted that licensing requirements — now common in skilled professions, including law, architecture, and accounting — exist to protect consumers. Indeed, that’s more or less what Stanford Graduate School of Business professor Jonathan Berk assumed when he began a theoretical study of licensing and certification in the labor market.

Instead, he and coauthor Jules van Binsbergen of the University of Pennsylvania found exactly the opposite. As they report in a new working paper, “Regulation of Charlatans in High-Skill Professions,” their model concludes that licenses enrich the incumbent providers of a service and hurt consumers — not sometimes or in certain scenarios, but every time.

Now, to be sure, if any barber could hang up a shingle and call themself a doctor, and you unwisely decided that would be a good option for hernia surgery, you might wish there’d been more stringent regulations in place. What the analysis says is that consumers as a whole are worse off under licensing — the gains to those who benefit are far outweighed by the burden on the vast majority, who don’t.

“This result was as much a surprise to me as it is to anybody,” says Berk, the A.P. Giannini Professor of Finance. “To be honest, this is not the paper we set out to write.”

  1. It’s telling that Plato, in his Academy, would filter out candidates for his highest training – training for the gold-souled, the would-be philosopher-kings – by math skills: he believed mastery of math was solid evidence of real intellect. He attempted to filter out the glib posers, in other words – who would be perfect pupils for the sophists. I’ve gotten to know a bunch of lawyers over the years; exactly 2 were good at math. Both came to law later in life – one was in fact an elite mathematician, the other an engineer who got into patent law. Others ranged from ‘not exactly terrified of math’ to ‘cringy math-phobes’. Plato might be amused. (aside: how would I come to know their level of comfort with math? See: my career.)
  2. My oldest sister, with a master’s in chemistry and a JD, and a career chasing patents for a Big Pharma company, saw her chiropractor regularly. Whether there’s anything to the theory, there’s a lot to be said for the power of human concern and human touch. Her visits with her chiropractor were one of the few regular, positive interpersonal experiences she, house-ridden with health problems, had. Unfortunately, as she was dying and we were settling the estate, we discovered he was a major shyster. But that’s another story.

5 Levels of Risk Analysis: We’re All Going to Die

Ever watch those YouTube videos where somebody will take a topic or piece of music and go through X levels of complexity, from simplest to most sophisticated? Malaguena, as played by a beginner, intermediate, advanced, pro? Or donuts as made by a newbie, an experienced amateur, and a chef? How about we do that for risk of death?

We’ll leave off the Level 0 analysis, by far the most common – basically, I don’t understand risk and therefore will dismiss risk analysis as unimportant nit-picking, and simply take whatever steps everybody else in my social group is taking that are alleged to reduce whatever risk my social group decides needs reducing. (1) We will instead start with rational risk analysis.

Note: Even though the approaches proposed are, except for the first and last levels, seriously flawed when considering an individual’s risk, it is sobering that, even as flawed as they are, any attempt to understand risk of death dispassionately in our current state is an improvement. Onward:

How To Understand Your Risk of Death – 5 Levels:

Level I – Everybody dies. You are at 100% risk of death. No getting around it. Your death, my death, anybody’s death, is as certain as taxes. When, not if.

Level II – Risk of dying this year, as a member of the human race: somewhat less than 1%, or 1 in a hundred. This means that, across the population of the planet as a whole, something less than 1 out of 100 people walking the earth or born during a given year is going to be dead before the year is out. You can see this in, for example, UN death rate data, or just by applying common sense: by age 100, just about everybody is dead; therefore, the annual death rate for everybody taken together could plausibly be a smidge under 1%. (It’s more complicated than that, to be sure, but in a very simple, homogeneous population, getting neither bigger nor smaller, nor older nor younger year over year, it would be roughly right.)

Pro Tip: when you see very precise numbers – not a reasonable-ish claim like ‘about 0.9%’ but a remarkably accurate-sounding number like ‘0.8762%’ – for estimates based on huge, difficult-to-count numbers, like world population, the klaxons of your BS detectors, if working properly, should be firing on 11. You get a death rate by dividing raw numbers of deaths by a population estimate. It would be surprising, given the people, methods, and motivations involved, if world population estimates were accurate within, say, a billion people. It’s not easy counting up a population – look at how involved the US Census is. Death counts seem much simpler – but are they? In, say, South Sudan or Mongolia or the Amazon rain forests? People don’t just die off-grid, as it were? Most other nations cannot or do not put the sort of effort into it the US does, and often have reasons for fudging. Three or four decimal places of accuracy might be plausible in the US, maybe, but the world? No. All those decimal places are a pretty good indicator that somebody is trying to snow you, to keep you from thinking about how much uncertainty surrounds their numbers.

Level III – Risk of dying this year, as an American: your (and my) chances of dying this year simply by dint of being American is, according to the UN, about 0.8977%. Just about 9 out of every thousand of us in the US are expected to die in 2021. Let’s blow those numbers out to see some totals: using the last census count of something like 332 million Americans, looks like right about 3 million Americans are expected to die in 2021 in the normal course of things (See: Level I). That does not include any effects of COVID, by the way.
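Blowing those numbers out is one multiplication, which a few lines of Python make concrete. The rate and population are the figures quoted above; the 2% population-uncertainty band at the end is my own illustrative assumption, in the spirit of the Pro Tip:

```python
# Level III back-of-the-envelope: expected US deaths in a normal year.
rate = 0.008977           # UN crude death rate for the US (~0.8977%)
population = 332_000_000  # rough 2020 census count

expected_deaths = rate * population
print(f"Expected deaths: {expected_deaths:,.0f}")  # ~3 million

# Even a modest, assumed 2% uncertainty in the population estimate
# swamps the third and fourth decimal places of that precise rate.
spread = 0.02
print(f"Range: {expected_deaths * (1 - spread):,.0f}"
      f" to {expected_deaths * (1 + spread):,.0f}")
```

The range spans tens of thousands of deaths either way, which is why the four decimal places on the rate are for show.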

Level IV – Risk of dying by age: The astute observer will have noticed that death does not come at random, usually, but rather that some people are a lot more likely to drop dead than others over any short period of time, a year, say. Age is a huge part of this, such an astute observer will observe: young people tend to think they are immortal, which is a fairly reasonable or at least understandable position, given how few of their contemporaries die in a given year. They may know that one kid who died in a car wreck, or some poor kid who killed herself, but these seem pretty special cases – *I* drive good! I don’t want to kill myself! Old people, on the other hand, see their contemporaries increasingly dropping as they age, until it finally gets them.

And the data, as collected by insurance companies for a couple centuries now, bears this out. If you make it out of infancy and don’t have a crippling disease, you are all but guaranteed to make it to 50 – over 95% of American women and 92% of American men survive to age 50. It’s only about age 56 for men and 63 for women that the chance of death per year approaches the UN overall estimated US death rate of 0.8977%.
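The mechanics behind such tables can be sketched in a few lines. The Gompertz-style per-age death probabilities below are made-up placeholders, not SSA or insurance-company figures; the point is how survival-to-50 and the crossover age fall out of a table of one-year death probabilities:

```python
import math

# Illustrative per-age one-year death probabilities (qx).
# THESE ARE INVENTED NUMBERS for demonstration, not real data.
def qx(age):
    return min(1.0, 0.0002 * math.exp(0.075 * age))

# Survival to 50 is the product of one-year survival probabilities.
survival_to_50 = 1.0
for age in range(50):
    survival_to_50 *= 1.0 - qx(age)
print(f"Illustrative survival to 50: {survival_to_50:.1%}")

# First age where the one-year risk crosses the overall 0.8977% rate.
crossover = next(a for a in range(121) if qx(a) > 0.008977)
print(f"Illustrative crossover age: {crossover}")
```

Even with fake inputs, the shape is the same as the real tables: risk is tiny and flat-ish for decades, then compounds until the crossover, after which each year is riskier than the population-wide average.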

The obvious starting point, then, for any understanding of your risk of death is: how old are you? If you’re under 50, in general, it’s very low, but then starts slowly climbing until, finally, everybody is dead – around age 110 or so. We all seem to know this and take it for granted, except when we don’t.

Level V – How healthy are you? Here’s the bottom line. Are you healthy? Then you have very little annual risk of death above the background ‘stuff happens’ level. That is not a very high level, but is never zero. Let’s look at an actuarial table, for example, this one based off the Social Security website (relevant numbers included at the bottom of this post). Remember, this is the chance of death lumping all people of the same age together regardless of health. In reality, your or my or anybody’s chance of dying in a particular year has a lot more to do with overall health first of all, and risky personal behaviors second, than age directly.

Most people get sick and die, usually, but not always, when they’re old. But it is not age, per se, that kills them. Old age eventually leads to the human body wearing out and shutting down, on the one hand, and an increase in diseases as the body weakens and cannot fight them off so well, on the other. It’s perfectly useful to say people die of old age when we are describing the outcome of this shutting down and weakening. Clearly, however, being old – being 80 or 70 or 65 – isn’t the cause of death, as plenty of people those ages are vigorous. Age is a proxy for health, useful as a generalization, not so useful in individual cases.


A. Life is a bowl of cherries. Really:

Three-in-one cherry tree, from the front yard orchard. Yes, they could be riper, but the birds are eating them as soon as they get really red. Plus, while the Bings should be almost black, the other two varieties don’t get much redder than those above. And they taste good.

A young lady we’ve known for years came by every day to feed the cat and water the gardens. She did a good job. While we were gone, the cherries hit their stride. It’s only one tree, so we’ll only get a few bowls worth per season – but fun. Next up: apricots and peaches, probably end of the month.

B. Back from the Epic Wedding Trip. 7 days, 6 nights, 4 states not counting airports and home. Some pics:

The restored and Catholicized chapel. Our son’s wedding mass is the first to have taken place in this lovely building.
The sanctuary. Much of the renovation had to do with creating a proper sanctuary, where Catholic altar and tabernacle replace Protestant pulpit and organ. The Latin is from the life of St. Thomas Aquinas, who set his works before the tabernacle and offered them to Christ crucified. The image of Christ on the cross said: “You have written well of Me, Thomas. What would you desire as a reward?” “Only You, Lord,” Thomas responded.
This is the student center at Thomas Aquinas College New England. I don’t know that the picture captures this vibe, but I just wanted to grab a book, find a corner, and read as soon as I walked in. Cozy and scholarly at the same time.

C. In New Hampshire, the spell of the magic mask talisman has been suspended – one can go about bare-faced and walk up to people, and the gods, we have been assured, will not be offended; cross the state line into Massachusetts or Vermont, however, and the wrath of the gods will descend upon any who dare sally forth with undiapered visage.

For now. Our betters are pumping the brakes, mixing it up, because, as any animal trainer will tell you, being predictable with your rewards does not get as eager a compliance as keeping the animal guessing. To add to the hilarity: when the New Hampshire folks decided to remove restrictions, they didn’t just announce: “OK, nobody’s dying of the Coof anymore, so go ahead and take off your masks and feel free to walk up to people and shake hands.” Nope, that would be too easy. Instead, it was *scheduled* for Monday, May 31. As in:

[Image: owner balancing a treat on the dog’s head]

D. Speaking of terrified, scientifically illiterate rabbits doing as they’re told, I’ve got a massive post to drop in the next day or two about analyzing risk. Sometimes, I think I’ve been uniquely prepared for the COVID hysteria:

  • worked in the actuarial department of a major life insurance company, picked up some basic knowledge of how risk is measured;
  • worked as an underwriter and underwriting analyst for a few years, so I know how the pros apply those risk models;
  • used and helped design mathematical models for 25 years, and taught people how to use and understand them (I can literally say: I wrote the book (well, a fat pamphlet) on a couple fancy models used by thousands of people to do fancy financing).
  • analyzed and cleaned up data for these models so that it was useful. Unless you’ve had to do this sort of clean up on real-world data, you simply have no idea how much sheer judgement goes into what gets measured and how. E.g., financial reporting systems are about as well defined, well-tested, and well funded as any data systems anywhere. Every company has one or more, with trained professionals inputting data, and have been doing this for decades. Yet, a data dump of the raw inputs is chaotic, unclear, and confusing. The question I had: what cash flows took place when? Surprisingly hard to answer! Correcting entries are ubiquitous, and often raise their own questions. And so on.
  • read a bunch of medical studies. When our kids were babies, I, like every other new parent in America at the time, was constantly ordered and shamed to not let the baby sleep in our bed with us. But I knew that this practice, called a family bed, was common everywhere else in the world. So I searched around, found the studies, and read them. Insane. Bad methodology, dubious data, poor analysis, no criticisms and answers (meaning: a study should address the obvious criticisms and answer them – it’s called science.) Just out and out junk. Yet – and here’s the real eye opener – a protocol had been developed from these two junk studies, and every freaking pediatrician in America was pushing the no family bed nonsense. It’s Science! It’s the medical consensus! Also read a few studies on salt and blood pressure, and was likewise unimpressed. Then noted how nobody did studies on drug interactions until it was clear such interactions were killing people – who’s going to pay for such endless studies? I reached the conclusion, since backed up by all the failed attempts at replication, that medical studies are mostly – useless? Wildly overconfident? Wildly over cautious? Not to be taken at face value?

With that background, and an amateur’s love of the scientific method, I was not buying the claims of pandemic, the outputs of models, the cleanliness of the data, and the ‘logic’ for panic and lockdowns. Looking into it, it was puke-level idiocy. And yet, here we are.

E. Briggs captures a good bit of what I’m trying to say in my upcoming post on risk analysis in this week’s COVID post:

Many people sent me this Lancet note about the difference between relative and absolute risk reduction. I’ve warned us many times to use absolute numbers (in any situation, not just this), because relative numbers always exaggerate (unless one is keenly aware of the absolutes).

Here’s an example. Suppose the conditional (on certain accepted evidence) risk of getting a dread disease is 0.001, or 0.1%. A drug or vexxine is developed and it is discovered (in update evidence) the risk of getting the disease is now 0.0001, or 0.01%.

The absolute risk reduction (ARR; conditional on the given evidence) is 0.001 – 0.0001 = 0.0009, or 0.09%.

The relative risk is a ratio of the two risks, and the risk reduction ratio is 1 minus this, or 1 – 0.0001/0.001 = 0.9, or 90%.

That relative 90% reduction (RRR) sounds much more marketable than the actual 0.09% reduction; indeed, it sounds 1,000 times better!

Here from the Lancet piece are some numbers using published results, recalling, as the authors do, that everything is conditional on the evidence, which is always changing.

Johnson & Johnson – RRR 67%, ARR 1.2%

For instance, the CDC says only 300 kids 0-17 died with or of coronadoom (a terrific argument kids don’t need to be vexxed). Population of this age group is about 65 million. We don’t know how many of this group were infected or exposed, but you can see that differences between vaccinated and unvaccinated kids would be very small.

Read the whole thing. I only dare write anything on something the esteemable Briggs has already written on because even this level of math is off-putting to some people. I focus on the narrative part – why is it that huge reductions in risk might be meaningless, when the underlying risk is originally very small, as in the COVID risk to kids 17 and under. When pestered by a friend about why I’m not getting the vaccine, I replied: I will not take experimental drugs to lower my risk of death from COVID from something like 0.01% to 0.005%. She immediately changed to the ‘protect others’ tack, so I let it drop.
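For anyone who finds even this level of math off-putting, the whole contrast fits in a few lines. A minimal sketch using the risk numbers from Briggs’s example above (0.1% baseline, 0.01% after treatment):

```python
# Absolute vs. relative risk reduction, per the example quoted above.
baseline_risk = 0.001   # 0.1% conditional risk of the dread disease
treated_risk  = 0.0001  # 0.01% risk after the drug/vaccine

arr = baseline_risk - treated_risk      # absolute risk reduction
rrr = 1 - treated_risk / baseline_risk  # relative risk reduction

print(f"ARR: {arr:.2%}")  # 0.09%
print(f"RRR: {rrr:.0%}")  # 90% -- the marketable number
```

Swap in a tiny baseline risk and the same thing happens: the relative number stays impressive while the absolute benefit shrinks toward nothing, which is the whole point about the kids.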

Alas! If information mattered, we wouldn’t be in the state we’re in.

F. And then there’s this. And this. I tend to go data=>analysis=>political speculation, or perhaps claims=>evidence=>reasons/explanations=>politics. Therefore, I have only really lightly touched on the politics/corruption/coup aspects of the Coronadoom – because I foolishly keep expecting people to care about the truth of the claims first. Yet ‘truth of the claims’ is nowhere to be found in the thought processes of the many, who instead substitute ‘whatever belief maintains my good standing in my group.’ Most people seem to go my social group’s position=>politics. Don’t ask why you need to raise your hand and get permission to go to the bathroom – JUST DO IT, DAMMIT! That sort of training, where group position is paramount and approval is always contingent on mindless obedience, is a large part of what got us to this point.


Our ordinary beliefs are adopted without any methodical examination. But it is the aim, and it is characteristic, of a rational mind to distinguish degrees of certainty, and to hold each judgment with the degree of confidence that it deserves, considering the evidence for and against it. It takes a long time, and much self-discipline, to make some progress toward rationality; for there are many causes of belief that are not good grounds for it—have no value as evidence. Evidence consists of (1) observation; (2) reasoning checked by observation and by logical principles; (3) memory—often inaccurate; (4) testimony—often untrustworthy, but indispensable, since all we learn from books or from other men is taken on testimony; (5) the agreement of all our results. On the other hand, belief is caused by many influences that are not evidence at all: such are (1) desire, which makes us believe in whatever serves our purpose; fear and suspicion, which (paradoxically) make us believe in whatever seems dangerous; (2) habit, which resists whatever disturbs our prejudices; (3) vanity, which delights to think oneself always right and consistent and disowns fallibility; (4) imitativeness, suggestibility, fashion, which carry us along with the crowd. All these, and nobler things, such as love and fidelity, fix our attention upon whatever seems to support our prejudices, and prevent our attending to any facts or arguments that threaten to overthrow them.

Carveth Read, Logic.

Evidence and the Right Questions

When we last left off, we were discussing claims and evidence. Now let’s talk about the quality of claims, evidence, and the relationship of claims and evidence. This is probably Part 1 – topic spiral potential: high.

To cut to the chase: a reasonable, useful claim is specific, expressed in unambiguous terms, and subject to logical and real-world contradiction.

Thus, reasonable, useful evidence addresses specific claims, according to the rules of logic.

This all may seem pedantic nonsense. If so, you will find real science is all about pedantic nonsense. Ask a scientist a simple question: what is the boiling point of water? and, if he is answering as a scientist, you will get:

  • discussions about what a state change is, energy thresholds, margins of error, and limits of observation;
  • a laundry list of conditions that affect the observed boiling point of water: air pressure, purity of the water;
  • THEN he might say: 100C, GIVEN all the definitions, conditions, and caveats listed above.

The boiling point of water is about as simple a scientific question as one can ask. See Millikan’s classic oil drop experiment (my favorite, and the fanciest experiment I’ve ever personally done) for something a little more complicated. To calculate the charge of an electron, Millikan and Fletcher had to define, develop and measure a whole bunch of things, e.g., the size and mass of aerosol oil droplets and the viscosity of air (which changes with temperature and pressure). They needed to design and build a device that 1) created tiny oil droplets; 2) generated electrons in such a way that some of them would stick to those oil droplets; 3) provided a consistent, measurable way to observe the oil droplets thus created; 4) had an electric field of known strength that they could turn on and off at will. THEN you spend hundreds of hours (in addition to the hundreds you spent coming up with the experimental concepts and building and perfecting the device) risking blindness to gather thousands of observations.

Millikan did all that, and a ton of math, then got to say that the charge of the electron is 1.5924(17)×10⁻¹⁹ C (1) and collect his Nobel Prize.
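The arithmetic at the heart of the experiment is simple once all those painstakingly measured quantities are in hand. A hedged sketch of the classic reduction – Stokes’ law gives the droplet radius from its free-fall speed, then a force balance with the field on gives the charge. All the input numbers below are invented but physically plausible; they are NOT Millikan’s data:

```python
from math import pi, sqrt

# Illustrative inputs -- NOT Millikan's actual measurements.
eta    = 1.82e-5   # viscosity of air, Pa*s (itself temperature-dependent!)
rho    = 870.0     # oil density minus air density, kg/m^3
g      = 9.8       # m/s^2
E      = 1.0e5     # electric field between the plates, V/m
v_fall = 1.04e-4   # droplet's terminal velocity, field off, m/s
v_rise = 3.6e-5    # droplet's rise velocity, field on, m/s

# Free fall: weight balances Stokes drag -> solve for droplet radius.
r = sqrt(9 * eta * v_fall / (2 * rho * g))

# Field on: qE balances weight plus drag -> q = 6*pi*eta*r*(v_fall + v_rise)/E
q = sqrt(1) * 6 * pi * eta * r * (v_fall + v_rise) / E

print(q)              # a few times 1.6e-19 C
print(q / 1.602e-19)  # ~3 elementary charges on this droplet
```

Repeat that for thousands of droplets, notice every q is an integer multiple of the same smallest value, and you have the charge of the electron – plus a Nobel Prize, if you did it first.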

In the real world, few people understand the question: what is the charge of an electron? let alone feel any need to know the answer.

A scientific claim is a claim that answers a scientific question. (I’m a regular Obvious Oscar today!) If the question itself does not go through the refining and defining required to hammer it into scientific shape, cleaning up as much as possible all ambiguities and establishing the limits and conditions, then the pseudo-scientific claim that science has answered such a question is, and must be, wrong.

The above is a round-about way of addressing the nature of science as discussed here for a decade or more. Science, as John C. Wright points out often, is the study of the metrical properties of physical objects. If the question does not concern the measurement of the properties of something you can see, hold in your hand, smell, taste, hear – then it’s not a scientific question. Note: this does not mean your question is unimportant or wrong, merely that you’re not going to be able to use science to answer it. Most of life’s really important questions – should I ask her to marry me? what is the right thing to do? how should I spend my life? and so on – are not science questions. We have to come up with other ways to answer them.

It should be clear at this point that scientific evidence must be weighed by how well, if at all, it addresses a well-formed scientific question. Badly formed or categorically wrong questions cannot, as in, CANNOT be answered scientifically. Science will not tell me if I should ask this particular woman to marry me; science has nothing to say about the proper course of action for any human decision. Getting an ought from an is is difficult, if, indeed, the impossible can be called difficult. (That’s an Aristotle joke, there.)

Thus, people are making a categorical error when they claim to be ‘following the science’ when they do or promote actions. There’s always more to the question. Ex: if someone is injected with these chemicals, they will die. Therefore, IF we don’t want a particular someone to die, we should not inject them with these chemicals. So, do we want them to die? Are they an innocent child, or a serial killer of innocent children on death row? Could have different answers. WHETHER the person OUGHT to die is not a question science can answer.

To be convincing or even relevant scientifically, evidence requires a good, clean scientific question to be run up against. Take an example I’ve used before: I once saw a report that a certain migratory butterfly population had decreased 87.3% (say; the numbers are for illustration only). The obvious scientific question this ‘evidence’ would be addressing is: are there fewer butterflies of this particular type in one specified time period as opposed to another? Simply putting the question in that format should suggest the conditions and definitions needed in order to evaluate evidence, if any:

  1. How is the counting of butterflies being done?
  2. Where?
  3. When?
  4. How is the accuracy of the counting assured? (This means, in Feynman’s classic formulation, that the makers of the claim/presenters of the evidence are required by the honesty implicit in the pursuit of science to list any possible ways they can think of that their conclusions, methods, or data could be wrong.)

In turn, these questions intended to clear up and make scientific the more general question do, themselves, raise questions. And here’s the point of this exercise: if the study or report authors or claimants cannot show that they have done the thought-smithing needed to define and clarify the question they claim to be addressing, then, put bluntly, it’s not science. In the above example, at least, the ‘researchers’ would need to assure us that:

  1. Where they are looking for the butterflies is where the butterflies are – namely, that they didn’t simply take another route, or take the usual route at a different time. In other words, that their count is in fact a count that includes all the relevant butterflies.
  2. How they counted those thousands of butterflies is at all accurate.

And so on.

In the real world, the speciousness of almost all claims made in the name of Science! is not even this subtle. But I think it important to get a grip on what scientific claims, questions, and evidence ought to look like.

  1. Of course, he was ‘wrong’:

In a commencement address given at the California Institute of Technology (Caltech) in 1974 (and reprinted in Surely You’re Joking, Mr. Feynman! in 1985 as well as in The Pleasure of Finding Things Out in 1999), physicist Richard Feynman noted:

We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It’s a little bit off because he had the incorrect value for the viscosity of air. It’s interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan’s, and the next one’s a little bit bigger than that, and the next one’s a little bit bigger than that, until finally they settle down to a number which is higher.
Why didn’t they discover the new number was higher right away? It’s a thing that scientists are ashamed of—this history—because it’s apparent that people did things like this: When they got a number that was too high above Millikan’s, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that …

Your Own Lying Eyes

So, having correctly identified the COVID overreaction as fraud in March 2020, I have not submitted to lockup, nor worn a mask except when needed to gain entrance to stores where I need to shop or to keep Karen from shutting down our church, nor observed the ‘social distancing’ rules; instead, I and mine have actively sought out occasions to fraternize with people who similarly refuse to be cowed. It tends to be only a few times a month that we’ve hung out like normal people with normal people, but we’re trying.

So, I’ve noticed a couple things. Of course – duh – the people in these groups of normal people acting normally are not dying any faster or more dramatically than anyone else. If the propaganda were true, there would have to be a bunch of deaths in at least one of these groups, where many dozens of people over 60 gather regularly (I’m being vague here, for obvious reasons). I mean, we’re talking 80 year olds here, fraternizing with the other reactionaries of all ages, including smiles, hand shakes and – oh the humanity! – hugs. Over and over, week after week.

And none of them have died, and I’m pretty sure I’d have heard about it if someone had. Nobody’s been hospitalized. To all appearances, the elderly in this group are if anything more healthy than is typical of people their age.

Yet this is not evidence anyone on the terrified bunny side of the issue would admit. As unlikely as they are to acknowledge the cherry picking being done in the name of horrifying the rabbits, they are that likely to insist that this example is cherry picking on the ‘ain’t no plague’ side, that, if anything, it’s the fault of people acting normal that ‘we’ haven’t ‘defeated’ the virus yet. People refusing to be cowed into following totalitarian fantasy instructions unsupported by logic or evidence are somehow asymptomatically transferring the disease to others who then dutifully and in perfect accord with the panic die in droves, off-camera. Since we’re absolutely, dogmatically certain people are dying, and it’s clear the people immediately in front of us aren’t (at least, aren’t any more than any other year), then there must be people we never see dying someplace we haven’t been – nursing homes, for example, which were never overrun with visitors even pre-COVID, and are completely devoid of visitors now.

COVID deaths are also miraculously immune to that eternal bane of logic and science: confirmation bias. Even to suggest that confirmation bias needs to be guarded against gets one labeled a ‘denier’. The rules for filling out death certificates, which DO NOT mandate a positive test result for a COVID diagnosis, but rather encourage a COVID diagnosis if any two of the classic symptoms were present in the deceased, suggest, to put it mildly, a strong risk of confirmation bias. Since those symptoms include fever, aches, and breathing trouble, anyone who dies while showing evidence of a cold, a flu, an allergy attack, or a bout of asthma is almost guaranteed to get classified as a COVID death. It is otherwise impossible to rationally explain how, according to WHO data, no one anywhere in the world has died of the flu since March, 2020. (I heard a poor simple soul suggest that maybe the masks, lockups, and social distancing worked against the flu, even if they didn’t against the ‘Rona. In other words, this innocent was willing to accept that masks, distancing, and lockups worked against one virus but not against another that is exactly the same size and uses exactly the same transmission vectors. I didn’t even try to straighten him out.)

I know one man who had a younger, allegedly otherwise healthy relative die of COVID – 10,000 miles away. Not somebody he knew well, not somebody he’d seen in years, but somebody who was a real person to him – of course, I’m sympathetic, and said a prayer for the repose of her soul and comfort for her family. But, again – 10,000 miles away, on the ragged edges of Western medicine and of systematic reporting of numbers of any kind. Maybe this poor woman is the one-in-100,000-or-more healthy younger person whom the Kung Flu kills. More likely, it’s a case of evil telephone – people are looking for COVID deaths, and so, by the time the story has been relayed a couple of times, they will find them.

But that’s it, as far as my personal experience goes. A few friends and acquaintances have caught and recovered from it, with no more trouble than a typical flu. Does no one remember from the distant past of lo two years ago, when, every year without fail, we or somebody we knew caught the flu and just had a hard time shaking it? We’d get sick, feel kind of better, try to return to normal, then get hammered again? And how it would be a month or two before we finally felt 100%? Or even the common cold that took 2 weeks to shake and left us weak? No? Some other planet, then? But none of the people I know, even a few ‘high risk’ types in their 70s and 80s, has had any more than a ‘bad flu’ experience with COVID. Most shook it off faster than a typical flu – 3 days, maybe, with one ‘I’m not feeling right’ day followed by an ‘I’m pretty sick’ day followed by ‘feeling better but weak’ day. Of course, if you were already dying of something else, like the majority of nursing home patients, even this might kill you – because, if you are in a nursing home, SOMETHING IS GOING TO KILL YOU sooner rather than later.

No one I know has died of this disease; the deaths I’ve heard of from friends have all been elderly and sickly – and there are only 3 total of those. To say an elderly, sickly person died of anything specific apart from being elderly and sickly is perverse. People get old and die – if we’re lucky, we each will get old and die. But in the current environment, it is tacitly assumed every old person would otherwise be immortal if the plague didn’t get them. I, like every sane person ever, assume a sickly old person is going to die sooner rather than later, barring a miracle. Nursing homes are full of such people.

But back to evidence near versus evidence far. I’ve heard COVID is raging now in India. Looked it up – nope. But that won’t stop the headline writers and politicians from telling us it is. Very handy to have the latest deadly outbreak on the other side of the planet, in a nation to which any standards of methodical reporting are a bit of a British novelty, and certainly subject to more ingrained local practices. If that’s not clear: numbers coming out of India are suspect, to say the least; but what numbers have come out suggest the current ‘raging’ outbreak is still vastly smaller on a per-capita basis than in any of the panic leaders in Europe – and those numbers make no allowance for confirmation bias.

So: There are those who have hardly stepped outside since March 2020. All they have, with slight apologies to Don Henley, is the word of

  • the bubble-headed bleach blond who comes on at 5.
  • Tell you all about the COVID with a gleam in her eye.
  • It’s interesting when people die – give us Dirty Fauci

For those who, like the hypochondriac who forgets not to use the arm he says is crippled, go out often but imagine they are locking down, the lack of dead bodies in the street goes unnoticed.

Who are you going to believe, the ‘experts’ or your own lying eyes?

Pre-‘Rona. And one of the greatest guitar solos ever recorded, to boot!

Non-Scientists with Science Degrees mad at Scientists with no Science Degrees

A writer is someone who writes, right? A piano player is someone who plays the piano, a painter someone who paints. And so on. So, a scientist is someone who, well, sciences. More precisely, a scientist is someone who tries to understand the material world by applying (roughly) a Baconian approach: all theories are generated by rigorous logic with constant and inescapable reference to observations made in the real world; all theories are tested against objective reality and rejected if they fail to conform; where appropriate, structured experiments are used to tease out needed observations; no effort is spared to escape confirmation bias. Something like that.

Science used to be a little like Christianity, in the sense that ‘by their fruits you shall know them’ – Ben Franklin and Michael Faraday, to take two well-known examples, were great scientists because of their fruitful application of scientific principles. That neither had any formal training, let alone formal certification, in science was and is irrelevant.

Of course, if you want to be a nuclear physicist or a genetic engineer, or to work in any number of other highly technical fields, you will almost certainly need to get into a university program, at least to get access to the equipment used. It’s not so much the formal education, even less the formal certification, that matters – it’s the access to the experts and the tools they use. An Einstein or a Feynman – any true creative expert – was self-taught by all accounts, BUT also had extensive opportunities to rub elbows with other geniuses, with whom he could talk and to whom he could show his work. Insofar as formal education provides for these things, it is not at all to be denigrated. I am here only urging one not to mistake the container for the contents.

In the above senses, I am a very slight (and truly humble, even if it may not come off that way) scientist. I confine myself almost exclusively to checking whether the basic rules of science and logic have been followed, most specifically the rules against overstating what the evidence will support and ignoring confirmation bias. Whatever slight technical skills I have are confined to model building and data analysis.

So, it turns out, I am the enemy. Because I don’t reflexively submit to the teachings of the formally certified ‘scientists’, I’m promoting “unorthodox science online.” Here’s a link to William Briggs’ write-up on an MIT study by *certified* ‘science’ critters attesting to the badthink of us troublemakers. Also, reader Billy Jack sent me this study last week, when it first came out.

The horror that somebody *not certified by the Academy* would independently apply the rules of science and thus dispute the *consensus* of said Academy is something up with which these folks will not put!

The truly chilling part: that this ‘study’ has not been roundly condemned and laughed off the stage by the real scientists at MIT – which, at least historically, has been home to plenty of them. But, follow the money – where does MIT’s funding come from?

Lysenkoism: not just a bad idea, it’s the LAW.


Digressing slightly from the last two posts on claims and evidence:

A few basic points, which failure to grasp condemns one to live in mindless fear (or mindless fearlessness, but that seems less a problem these days):

  1. Everybody has a 100% chance of dying within the next 100 years or so.
  2. Most of us, in the West, get old and weak and die starting around age 50. No, really. By age 79, half of us are dead. By 90, almost all of us are dead. By 120, all of us are.
  3. There are risks you can reduce, and risks you can’t: never swim in the ocean, and your chances of getting eaten by a shark go from microscopic to even more microscopic (but never go away entirely). Get old enough – and you die no matter what precautions you take.
  4. It’s boring, hard to avoid things that kill almost all of us: heart attack, stroke, renal failure, cancer, pneumonia. Try as you might, live as healthy as you can, if you live long enough, it’s one of those things, or something equally mundane, that will likely kill you.
  5. Then there are the rarer but still common stuff, things that kill young and old alike. Chief among these are suicide, murder, and accidents including semi-accidents like drug overdoses. A lot of people die in cars; a lot die getting routine medical care. People choke to death. People die amusing themselves – hang gliding, horseback riding, canoeing are all comparatively dangerous, as are playing football and riding a bicycle.
  6. Each of us is at some unavoidable risk of dying from anything that might kill us that is not a logical impossibility: not only might you get hit by lightning (about a 0.000078%/yr chance) or eaten by a shark (about 4 people per year worldwide*), but you could be killed by a meteorite (under a 1 in 10B chance) or crushed by having an elephant dropped on you (not going to look it up, you get the point).
  7. The risk space aliens will kill you is unknown, but might be real, because it is not a logical impossibility. Same goes for werewolf or vampire or Godzilla – they might exist, therefore, they might kill you.

Nobody I know goes around wearing a metal helmet to improve his chances of surviving getting hit by a meteorite; nobody wears shark repellant in Iowa because they might – might, tiny but non-zero chance – get attacked by a shark out in a cornfield. Somewhere in between meteorite death, which I think we can all agree is pointless to worry about, and, I don’t know, developing a meth habit, which I also think we can agree ought to be avoided, there’s some point where prudence would suggest taking reasonable, *proportionate* steps to avoid or minimize risks. Taking a shower is risky; a rubber bath mat reduces that risk to what most sane people consider acceptable. Riding a horse is risky, but it is also fun (so I am told), so people who like to ride horses try to be careful but do it anyway.

Something will kill each of us eventually. Yet we take what we think are prudent risks all the time, because we feel we need to or simply because we want to. We drive to the airport – way more dangerous than flying in a plane – because we want to go somewhere. We deem what we want as worth the risk. As Bilbo says: It’s a dangerous business, going out your door. It’s always a question of where we each draw that line – and where we place that line is almost never rational. Let’s take a stab at making it more rational.

In the modern West, your chances of dying in any given year before you’re 50 are pretty small, but start rising rapidly after that. Put another way: it’s a shock, generally, when a 25 year old dies; it’s often a relief when an 85 year old dies. Human bodies are amazingly tough, so tough that they can put up with tons of abuse – e.g., way too much fat, alcohol, drugs – for about 50 years. (See a connection, there? No?)

It is customary – ask your life insurance agent – to consider, first of all, age when trying to understand our risk of death. Then, in the context of age, we consider other ‘risk factors’ related to overall health and behaviors. This is not a perfect way to consider risk, but it’s a start, and far better than irrational fear.

After a rough first year or so, where kids born with health problems die off at a much faster rate than healthier kids, the base level risk of dying in America is very, very low for the next 40 years. When people this young do die, it tends to be freakish – murder, suicide, drug overdoses, accidents – or just stupid bad luck – cancer, say. The Usual Suspects listed above start taking over around 50. By age 59 for men and 66 for women, your chance of death reaches about 1% per year – and it keeps going up from there, to about a 10% chance of dropping dead that year in your mid-80s, to 25% in your mid-90s – by which time, few if any of your friends in your age group will still be alive.
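The age pattern just described is roughly exponential – actuaries capture it with the Gompertz law, under which the annual hazard of death doubles about every eight years of adult life. A rough sketch; the two constants below are my own eyeballed fit to the figures quoted above, not official life-table values:

```python
from math import exp

# Gompertz-style hazard: annual death risk grows exponentially with age.
# A0 and B are eyeballed so that risk is ~1% around age 60 and ~10% in the
# mid-80s, matching the figures in the text -- illustrative, not a life table.
A0, AGE0 = 0.01, 60   # ~1% annual risk at age 60
B = 0.085             # risk doubles roughly every 8 years (ln 2 / 0.085)

def annual_death_risk(age: float) -> float:
    return A0 * exp(B * (age - AGE0))

for age in (25, 60, 85, 95):
    print(age, round(annual_death_risk(age), 4))
```

The same curve that makes a 25-year-old’s death shocking (a small fraction of a percent per year) makes an 85-year-old’s unsurprising (around one chance in ten, every year).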

So, yes, now, let’s look at the damn virus: according to CDC data, there are no age groups under 85+ where the virus adds as much as 10% to your risk of dying, even based on their pessimistic, beyond dubious numbers.** Or, put another way, if you had a 1% chance of dying this year, the virus, at most, increases your risk to something less than a 1.1% chance of dying this year. So, think of the children: a healthy kid has something like a 0.001% or less chance of dying in a given year. COVID adds less than 0.0001% to that risk. It’s like crossing a street, or walking out the front door – sure, your risk increases, but by so little it would be insane NOT to cross the street (looking both ways) or walk out the door based solely on increased risk. If I’m very old, again, the additional risk is real (maybe) but in any event tiny – what is the difference between a 10% risk of dying and an 11% risk of dying this year if I’m 86? Should I stay locked in my house, away from my family and friends, because there’s a slightly lower chance I’ll die this year if I do? Since loneliness, fear, and depression ALSO add risk, how could it be worth it? Does it cancel out, such that the additional risk (if any) from COVID is less than or equal to the additional risk from loneliness, depression, and despair caused by being locked up like a criminal?
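The relative-versus-absolute distinction in that paragraph is just multiplication, and worth making explicit. A sketch, using the illustrative baseline figures from the text rather than actual CDC values:

```python
def risk_with_bump(baseline: float, relative_increase: float = 0.10) -> float:
    """Absolute annual death risk after a *relative* increase.

    A 10% relative increase on a 1% baseline yields 1.1% -- not 11%.
    Baselines here are the text's illustrative numbers, not CDC data.
    """
    return baseline * (1 + relative_increase)

print(risk_with_bump(0.01))      # 1% baseline -> 0.011, i.e. 1.1%
print(risk_with_bump(0.00001))   # healthy kid's ~0.001% -> still ~0.001%
print(risk_with_bump(0.10))      # 86-year-old's ~10% -> 0.11, i.e. 11%
```

The headline-friendly trick is to report the 10% relative figure while letting the reader hear an absolute one; run the multiplication and the spell breaks.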

Why is this obvious aspect of the lockups not discussed?

Steps taken to mitigate risk are insane if the base level of the risk those steps are trying to mitigate is not taken into account, and the cost – there is ALWAYS a cost! – of the steps themselves is ignored. It has to be hammered on: the base level risk of COVID to almost everybody is microscopic; the costs of mitigation are beyond measure.

Worrying about everything that might kill you, or worrying about risks you can do little about, is insane. Panic ceases once one looks at the evidence and attempts to follow the flawed logic of the panic mongers. (ex: if the vaccines work, it doesn’t matter to anyone but me if I get it; if it matters to other people that I get it, the vaccines don’t work. Verbal gymnastics don’t change this.)

* I have it on good authority – my little brothers, who are surfers – that the death-by-shark-attack numbers are significantly understated. They point out that, every year, especially in Australia, a certain number of people out surfing just don’t come back. What happened to them? If a shark ate them and nobody saw it, they don’t show up in the shark death count. Plausible hearsay.

** What I mean, taking this from William Briggs: In 2020, while the virus ‘raged’, in no age group, excepting the very elderly, did as much as 10% of the deaths in the age group ‘involve’ COVID. So, if you were going to die in 2020, there was under a 10% chance it was going to ‘involve’ COVID. Note, as described above, how small – invisible at this scale, way, way under 1% – the risk of death from any cause whatsoever is for younger people. After kids’ first year, the death percentage isn’t big enough to even register until age 15-24; the risk from COVID is invisible until the 45-54 age band, and hardly visible until the 65-74 band – at which point, people are starting to die of all the usual causes, which is the point of this post.

Aside: accepting these numbers at face value and eyeballing it, the overall death rate for people 85 and over is about 17%. Prior to COVID, the CDC’s reported historic death rate for this age range was about 14% (I saw both 13.8% and 14% reported.) Again eyeballing it, looks like the COVID death rate for this group is 3% – so it adds up. Except there pretty much must be additional deaths in this age range due to nursing home neglect, fear, deferred or skipped medical treatment, and delayed diagnoses of treatable problems (and perhaps suicide, murder, euthanasia, and drug overdoses). Where are those deaths hiding? We’ll probably never know.

Evidence versus Evidence

Last post, the distinction between a claim and evidence for that claim was drawn. What was not stated, but I hope was evident, is that claims are cheap. Any claim carries, or should carry, little to no weight in and of itself. Only slightly less evident, I hope, was that evidence is almost as cheap as claims. There will almost always be evidence of some sort for just about any claim anyone makes. We all know or have heard about some guy who did this and lived, or did that and died! We have dramatic evidence – real evidence! – this saves lives, and that will kill you! For many claims, it’s not the lack of evidence that is the problem, but the quality of the evidence.

To continue with the previous example: I claim, like just about anyone in Western History who bothered to have an opinion on it, that heavy things fall faster than light things. My evidence: just drop a good-sized rock and a feather at the same time from the same height. Well? Doesn’t the rock reach the ground faster? You want more evidence? Try a leaf and a bowling ball, a piece of paper and a hammer – see? My claim holds true across a good range of heavy and light objects. You’d have to be a real nitpicker to argue with my claim, based on all the evidence I’ve presented.

Try a one-pound rock and a 5-pound rock, you say? Why? Isn’t my evidence good enough?

Big Rock.
Little Rock.

A couple of points here: Why would you argue about the speed at which things fall? A theory about how fast things fall just doesn’t figure into our lives very much, if ever. So, for most of us almost all the time, the feather/rock evidence we started with seems perfectly reasonable – and it is evidence! If this stuff were obvious – and important – people would have worked it out a lot earlier than the 17th century. But it is neither obvious nor important to almost everyone almost all the time.

If more or less convincing evidence can generally be found to support any claim, it should be clear that evidence for the opposite claim can just as easily be found. We are stuck, often, deciding between sets of contradictory evidence.

A feather does really fall more slowly than your average rock. That’s real evidence. But two rocks, say a 1 pound and a 10 pound rock, hit the ground at about the same time. That’s real evidence, too! So, what is it? Clearly, some objects fall faster than others, except when they don’t.
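The two-rocks observation can even be checked numerically: with air drag included, fall time does depend on mass in principle, but for dense objects the dependence is negligible. A toy simulation, with made-up but plausible object parameters:

```python
# Toy fall-time simulator with quadratic air drag.
# All object parameters (masses, areas, drag coefficients) are illustrative.
G, RHO_AIR = 9.8, 1.2  # gravity (m/s^2), air density (kg/m^3)

def fall_time(mass_kg: float, area_m2: float, cd: float,
              height_m: float = 10.0) -> float:
    """Seconds to fall height_m, integrated with a simple Euler step."""
    dt, t, v, y = 1e-4, 0.0, 0.0, height_m
    while y > 0:
        drag = 0.5 * RHO_AIR * cd * area_m2 * v * v
        v += (G - drag / mass_kg) * dt
        y -= v * dt
        t += dt
    return t

t_small   = fall_time(0.45, 0.003, 0.47)   # ~1 lb rock
t_big     = fall_time(4.5,  0.013, 0.47)   # ~10 lb rock
t_feather = fall_time(0.001, 0.005, 1.3)   # a feather: light, draggy

print(t_small, t_big, t_feather)
```

The two rocks land within a few thousandths of a second of each other; the feather takes several times longer. Both observations are real evidence – the trick is framing a question sharp enough to reconcile them, which is what Galileo did and Aristotle didn’t.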

The first question: Are we really interested in the answer? Or are we going to roll our eyes in disgust at all this nitpicking and believe, in Morpheus’s memorable phrase, whatever we want to believe? Most investigations by most people seem to stop here.

Second question, if we soldier on: are we asking the right questions? What is it we really want to know?

We’ll take up the quality of the questions next.