Update on the Rise of Covid and the Unholy Rage for Compliance

A few days ago, David Warren published the following two quotations:

Consider these two quotations, found on the Internet, and lately, by me:

“It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgment of trusted physicians or authoritative medical guidelines. I take no pleasure in this conclusion, which I reached slowly and reluctantly over my two decades as editor of the New England Journal of Medicine.”

“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness.”

The first comment was from Marcia Angell, in 2009: she provides her credentials up front. The second, quite recent, is from Richard Horton, editor of the Lancet. These are two of the most prestigious medical journals in the world. Elsewhere, I have seen, attributed to peer-reviewed articles in general, estimates that four in five are quite worthless.

This failure of institutionalized science, this descent into darkness, is common knowledge among the tiny fraction of the population that is scientifically literate. Our only comment: Horton’s “perhaps half” should be “nearly all”. The merely procedural issues of “small sample sizes, tiny effects, invalid exploratory analyses” are dwarfed by, and more important, caused by, the last two items on the list: “flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance.” It’s not (necessarily) that the credentialed scientists don’t know how to do science; it’s that they have no motivation to do anything other than get the results their funders require – that’s the “conflict of interest” mentioned above. In plain English, scientists, operating in the more or less plausible fringes where those small sample sizes and tiny effects live, simply do whatever they need to do to keep the funding flowing.

If those scientists can’t wiggle out of it through small sample sizes and tiny effects, or cannot abuse statistical analysis to get what they want, they simply lie. It’s fraud all the way down. Few are the fields where this funding blackmail isn’t in play, and few are the scientists who can defy it without destroying their careers.

I’ve been struggling and failing to come up with a short, useful definition of ‘scientific literacy’. I can list characteristics, but an overarching concept under which fall all such characteristics has evaded me. Here are some samples of the thought processes of the scientifically literate:

  1. Is this the sort of question that can be investigated scientifically even in theory? Is it the study of some metrical properties of physical objects, in other words?
  2. Can the approach being used yield results anyone should trust?
  3. Is the procedural and analytic logic solid?
  4. Is the population (of people, data, or fruit flies, etc.) a reasonable population to study for the question at hand? Any evidence of cherry-picking the population?
  5. Are problems with the data or approach acknowledged and dealt with convincingly, or are obvious problems ignored?

And so on. Let’s acknowledge one big handicap we, the scientifically literate, have when attempting to share our criticisms of the latest cargo cult science: when you are scientifically literate, you are also careful and circumspect. Is it possible to measure human happiness by means of measurable physical characteristics? *Almost* certainly not – but those of us with lots of experience with scientific claims are going to spend that second or two trying to imagine some way such a question might be addressed scientifically. Our opponents, meanwhile, unencumbered by any knowledge and the caution such knowledge brings, are instantly dogmatic about their claims. They present our willingness to consider (however briefly) the possibility that maybe we’re missing something as weakness in our arguments.

Examples of questions which fail the first test: any of those ‘which country in the world is happiest?’ surveys regularly vomited forth, or virtually all psychological studies. In the first example, happiness is supposedly measured by administering a survey – an ‘instrument’, as the researchers would call it – to a bunch of people from different nations and across different cultures. Problem number 1 is self-reporting, where the only question you can possibly answer is: what were people willing to say on this particular survey question on this particular day and time? Utterly subjective. But this is dwarfed by the fundamental philosophic impossibility of representing happiness as a number: on a scale of 1 to 5, how happy are you? I’m cumquat happy at the moment. It’s no less idiotic to use numbers to force-rank one’s happiness (however that happiness is defined) than to use a random fruit scale. On a scale of passion fruit to bananas, just exactly how happy are you?
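To make the fruit-scale point concrete, here is a toy sketch (Python, with numbers I made up on the spot): two groups give ordered answers to the same 1-to-5 happiness question, and which group comes out ‘happier’ flips depending on which arbitrary – but equally order-preserving – numbers you bolt onto those answers.

```python
# Toy illustration, invented numbers only: the same ordered survey
# answers, under two different (but both order-preserving) numeric
# codings, rank the two groups in opposite order.

group_a = [3, 3, 3, 5, 5]   # hypothetical answers on a 1-to-5 scale
group_b = [4, 4, 4, 4, 4]

codings = {
    "evenly spaced": {1: 1, 2: 2, 3: 3, 4: 4, 5: 5},
    "top-heavy":     {1: 1, 2: 2, 3: 3, 4: 4, 5: 10},  # count 'very happy' for more
}

for name, code in codings.items():
    mean_a = sum(code[x] for x in group_a) / len(group_a)
    mean_b = sum(code[x] for x in group_b) / len(group_b)
    winner = "A" if mean_a > mean_b else "B"
    print(f"{name}: A = {mean_a:.1f}, B = {mean_b:.1f} -> group {winner} is 'happier'")
```

Under the evenly spaced coding, group B ‘wins’; under the top-heavy coding, group A does – and nothing about the answers themselves changed, only the arbitrary numbers attached to them.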

Psychological studies generally fail for similar reasons. Again, the scientifically literate person will perhaps spend some time contemplating how a given psychological study *might* work in some alternate universe, but the actual paper before him will not ease those concerns. This or that study might have some value, if certain issues were addressed – but they are invariably not addressed.

For an example of the second item, the first of the infamous ’70 studies’ that proved masking slowed the spread of the Coof virus was a meta-analysis of data across several states that more or less vigorously enforced masking mandates. So the scientifically literate first wonder 1) how the particular states were chosen; 2) under what assumptions it is appropriate to compare practices and results across these states; and 3) how the differences in population – age, health, comorbidities – across the states were accounted for. And many more, but these questions are sufficient to shoot down any such study, unless the researchers can provide very solid analysis and data to allay the above concerns.

Short answer: they did none of that. There was no meaningful discussion of why these states were an appropriate, representative sample of states, no explanation of how exactly results across fairly vast and varied areas are to be meaningfully compared, no discussion of relative levels of compliance with and enforcement of masking mandates, no allowance for confounding influences, and no convincing allowance for the differences in the underlying populations.
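To make concrete why that last omission matters, here is a toy sketch in which every number is invented: the masking policy is given exactly zero effect, yet the crude state-level rates make it look like masks cut infections nearly in half, simply because the two hypothetical states have different mixes of low-risk and high-risk residents.

```python
# Invented numbers for illustration only. Within each risk stratum the
# infection rate is identical in both states -- i.e. the true effect of
# the masking policy is exactly zero.
strata_rates = {"low_risk": 0.02, "high_risk": 0.10}

# The population mix differs: the "masked" state skews low-risk.
state_a_mix = {"low_risk": 0.8, "high_risk": 0.2}   # hypothetical masked state
state_b_mix = {"low_risk": 0.4, "high_risk": 0.6}   # hypothetical unmasked state

def crude_rate(mix):
    """Overall infection rate observed with no adjustment for population mix."""
    return sum(mix[s] * strata_rates[s] for s in strata_rates)

rate_a, rate_b = crude_rate(state_a_mix), crude_rate(state_b_mix)
print(f"'masked' state crude rate:   {rate_a:.1%}")               # 3.6%
print(f"'unmasked' state crude rate: {rate_b:.1%}")               # 6.8%
print(f"apparent 'mask effect':      {1 - rate_a / rate_b:.0%}")  # ~47% 'reduction', all artifact
```

Nothing in that sketch is about masks at all; the entire apparent effect is the age and health mix of the populations – exactly the sort of thing the study never convincingly accounted for.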

Instead, a small number – 1.5%, if I’m remembering correctly – was put forth as somehow (I suspect magic) representing the effectiveness of masking versus not masking. In other words, given the level of uncertainty in all the underlying measures, masks made no difference – a 1.5% difference doesn’t even rise above the noise, in context.
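For what ‘doesn’t rise above the noise’ means here, a quick sketch with entirely hypothetical numbers – twenty states, roughly 30% state-to-state spread in case rates for reasons having nothing to do with policy, and a masking effect of exactly zero: split those states into two arbitrary groups and the groups routinely differ by far more than 1.5% from chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 20 states, each with a per-capita case rate that
# varies for reasons unrelated to policy (~30% between-state spread),
# and a masking effect of exactly zero.
n_states, n_trials = 20, 10_000
gaps = []
for _ in range(n_trials):
    rates = rng.lognormal(mean=np.log(0.05), sigma=0.3, size=n_states)
    group_a, group_b = rates[:n_states // 2], rates[n_states // 2:]
    # Relative gap between two arbitrary groups of states
    gaps.append(abs(group_a.mean() - group_b.mean()) / group_b.mean())

gaps = np.array(gaps)
print(f"typical (median) gap between two arbitrary groups: {np.median(gaps):.1%}")
print(f"share of trials where the gap exceeds 1.5%:         {np.mean(gaps > 0.015):.0%}")
```

Under those assumptions, a gap several times larger than 1.5% shows up between two groups of states that differ in nothing but the luck of the draw – which is what it means to say the reported number is lost in the noise.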

For the third point, we could reuse the above example – that meta-analysis should have been rejected, or indeed laughed out of the room. The list of problems that would need to be addressed in order to get any meaningful results from this approach is a mile long. No way a scientifically literate person accepts this.

Same with 4 and 5 – how comfortable are we with the data set (a few East Coast and Southern states, if I’m remembering correctly)? And any such meta-analysis should begin with, or at least detail right up front, how the researchers plan to address the obvious problems with the differences in population, enforcement and compliance, measurement methodologies, and so on. But they didn’t. Until we – the scientifically literate – are satisfied that such issues are being addressed: do not pass go, do not collect $100.

I, like the editor of the New England Journal of Medicine, slowly came to the conclusion that there is a problem with science. I now conclude that there is little to no chance of any given study being repeatable, careful, and rigorous enough to be considered ‘science’ in the sense that any honest man would think he is compelled to give at least conditional assent to its conclusions.

Yet, movements are afoot to use ‘science’ of the quality of the above examples, and the ever more deranged interpretations of this cargo cult ‘data’ by the True Believers and frauds, to mask us up and lock us down again.

Compliance is the goal, with the corollary of identifying the non-compliant.

We must not comply.

Author: Joseph Moore


8 thoughts on “Update on the Rise of Covid and the Unholy Rage for Compliance”

    1. That’s an excellent write-up. In TOS, where evil was allowed to be evil and confronted with violence as appropriate, victory could be cheered – we were happy the bad guys lost. I admit I did not pick up on what might be called the cowboy ethic of Voyager – that ain’t right, and if I don’t stop it, it don’t get stopped. Your description is making me rethink that.

    1. Very good. Not coming up with anything better at the moment.

      Saw a clip where an Australian comedian got way too real: he started off saying how he regretted getting the jab, and segued into saying something to the effect of: I thought I was one of those people who would have resisted the Nazis, but, clearly, I’m not. I did whatever they told me to do, just like most Germans in the 1930s.

      Way, way too on point.

  1. Mrs. Robbo has said emphatically that if St. Marie of the Blessed Educational Method reimposes mandatory vaccines, testing, and/or “distance learning,” she’s going to quit.

  2. Was driving into downtown Lansing in the early 20-aughts and spied the son of a college friend waiting for a bus, so I picked him up. Heading to his classes at the community college. How are classes? Social science is kinda lame, he says, half the ‘findings’ of social science are obvious. Yeah, I told him. So are the other half.
