Science, Medicine & Me (or You)

One of the problems, or challenges, if you prefer, with medicine is that few people who become doctors do it because they love science. They become doctors typically because they want to help people, a very fine reason. But this means that when the situation calls for a more scientific examination of the evidence before them, or of the value or pertinence of this or that finding or practice, your doctor is likely operating at a disadvantage.

Ultimately, tradition, law, and their insurance companies all but force them to stick to conventional, approved approaches to everything. In general, this is not a bad thing – you certainly don’t want your doctor freestyling it when it is your health on the line. If doctors everywhere treat a given constellation of symptoms in a particular way, that would probably be the place to start. But it’s not in itself science.

I recall a couple of decades ago that every time we took a kid to the pediatrician – and we loved our pediatrician – she would feel compelled to advise us not to let the little ones sleep with us, family bed style. I can just hear her teachers, or her professional bulletins or her insurance company telling her it was best practice, that kids die every year smothered in a bed with an adult, and just don’t do it. Now, of course, pediatricians in the US (it was never policy in the rest of the world) have come to realize that the benefits to both the child and the parents of having the child in the parents’ bed far outweigh the minuscule risks. Such risks effectively disappear if the child and parents are healthy and the parents sober, while the sense of comfort and attachment gained by the child and the extra sleep gained by the parents make for a serious win-win.

Now a scientifically inclined person might ask about the data and methodology behind the claim that hundreds of thousands of years of natural selection had somehow gotten the whole baby sleeps with parents thing wrong.  Maybe it has – but the issue should have at least been addressed. But it wasn’t, in America, at least up until a few years ago. So every American pediatrician was expected to toe the official line, and our doctors did. For, what if you hadn’t advised parents against ‘co-sleeping’ and a baby died? You’d be asking for a lawsuit.

Another example I’ve mentioned before is salt intake. This one is a little different, in that for some people, there seems to be a fairly strong correlation between salt intake and blood pressure, so at least being concerned about it isn’t crazy.

For most people, however, there is little or no correlation between salt intake and blood pressure, at least within realistic levels of consumption. In one of the earliest studies, rats were given the human equivalent of 500g of salt a day, and their blood pressure shot right up! But humans tend to consume around 8.5g of salt a day. Um, further study would be indicated?

The science would seem to support some degree of caution regarding salt intake for people with high blood pressure. Instead, what we get are blanket recommendations that everybody – everybody! – reduce salt intake. It will save lives! Medicine cries wolf. People learn to ignore medical advice. Further, Medical Science! fails to consider what it is asking for – a complete overhaul of people’s diets. Few real people are going to do this without serious motivation. Wasting ammo on a battle not worth winning.

Again, if doctors were essentially scientists attracted to medicine for all the opportunities for scientific discovery human health presents, such errors and poor judgement might be more limited. But doctors became doctors to help people, not to debate scientific findings with them. They want to DO something. Thus, conventional medical practice is full of stuff to do for every occasion. Whether or not there’s really any science behind it is not as important, it seems, as developing practices to address issues so that medicine itself can be practiced. Clearly, for the average doctor, having *something* to do is better than having nothing to do, even when that something isn’t all that well supported or even understood.

These thoughts are on my mind because of all the trouble I’m having with blood pressure medicines at the moment. Since there’s an obvious trade-off here – somewhat higher blood pressure with a higher quality of life versus acceptably lower blood pressure but with lower quality of life – I decided I needed to do a little basic research. Here’s where I’m at after a very preliminary web search:

What I’m looking for, and have so far failed to find, is a simple population level chart, showing the correlation between blood pressure and mortality/morbidity. Of course, any usefully meaningful data would be presented in a largish set of charts or tables, broken out by such variables as age, sex, and body mass index.  But I would settle at this point for any sort of data at all, showing how much risk is added by an additional 10 points or 10 percent, or however you want to measure it, above ‘normal’ blood pressure.

For example, I’m a 60 yr old man. Each year in America, some number of 60 year old men drop dead from high-blood-pressure-related illnesses. OK, so, the base datum is: at what rate do 60 year old men drop dead from high blood pressure related diseases? Let’s say it’s .1% (just making up numbers for now), or 1 out of every 1,000 60 year old men. That’s heart attack and stroke victims, with maybe a few kidney failures in there – anything severe enough to kill you, a 60 year old man.

Now we ask: what effect does blood pressure have on these results? Perhaps those 60 year old men running a 120/80 BP die at only a .05% rate – one out of every 2,000 drops dead from heart attacks, strokes or other stray high BP related diseases. Perhaps those with 130/90 (these results and possible ages would be banded in real life most likely, but bear with me) die at .09% rate, while those with 150/100 die at .2% rate, and those above 150/100 die at a horrifying 1% rate, or 10 times as much as the old dudes with healthy blood pressure. These numbers would all need to average out to the .1% across the population, but a high degree of variability within the population would not be unexpected.
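A quick sanity check on arithmetic like this: band-level rates, weighted by the share of the population falling in each band, have to average back to the overall rate. Here’s a sketch using the made-up rates above plus equally made-up band shares (every number here is hypothetical, like the rates in the text):

```python
# Hypothetical 60-year-old-male death rates by BP band, with made-up
# shares of the population in each band (shares must sum to 1):
bands = [
    # (death rate, share of population)
    (0.0005, 0.600),   # around 120/80
    (0.0009, 0.250),   # around 130/90
    (0.0020, 0.125),   # around 150/100
    (0.0100, 0.025),   # above 150/100
]

# The weighted average of the band rates is the population-wide rate.
overall = sum(rate * share for rate, share in bands)
print(f"population-wide rate: {overall:.4%}")  # close to the assumed .1% base rate
```

The point being that a small band of very-high-risk people and a large band of low-risk people can coexist under one bland-looking average – which is exactly why the banded breakdown matters.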

Or maybe something else entirely. But I have yet to find such charts. I’ve found interesting tidbits, like FDR’s BP in 1944, when his doctor examined a by-that-time very ill president: it “was 186/108. Very high and in the range where one could anticipate damage to “end-organs” such as the heart, the kidneys and the brain.” So I gather BP in that range is very bad for you, or indicates that something else very bad for you is going on (FDR had a lot of medical issues and smoked like a chimney).

Then there’s this abstract, suggesting in the conclusion that my quest is going to be frustrated:

Abstract

Objectives

Quantitative associations between prehypertension or its two separate blood pressure (BP) ranges and cardiovascular disease (CVD) or all-cause mortality have not been reliably documented. In this study, we performed a comprehensive systematic review and meta-analysis to assess these relationships from prospective cohort studies.

Methods

We conducted a comprehensive search of PubMed (1966-June 2012) and the Cochrane Library (1988-June 2012) without language restrictions. This was supplemented by review of the references in the included studies and relevant reviews identified in the search. Prospective studies were included if they reported multivariate-adjusted relative risks (RRs) and corresponding 95% confidence intervals (CIs) of CVD or all-cause mortality with respect to prehypertension or its two BP ranges (low range: 120–129/80–84 mmHg; high range: 130–139/85–89 mmHg) at baseline. Pooled RRs were estimated using a random-effects model or a fixed-effects model depending on the between-study heterogeneity.

Results

Thirteen studies met our inclusion criteria, with 870,678 participants. Prehypertension was not associated with an increased risk of all-cause mortality either in the whole prehypertension group (RR: 1.03; 95% CI: 0.91 to 1.15, P = 0.667) or in its two separate BP ranges (low-range: RR: 0.91; 95% CI: 0.81 to 1.02, P = 0.107; high range: RR: 1.00; 95% CI: 0.95 to 1.06, P = 0.951). Prehypertension was significantly associated with a greater risk of CVD mortality (RR: 1.32; 95% CI: 1.16 to 1.50, P<0.001). When analyzed separately by two BP ranges, only high range prehypertension was related to an increased risk of CVD mortality (low-range: RR: 1.10; 95% CI: 0.92 to 1.30, P = 0.287; high range: RR: 1.26; 95% CI: 1.13 to 1.41, P<0.001).

Conclusions

From the best available prospective data, prehypertension was not associated with all-cause mortality. More high quality cohort studies stratified by BP range are needed.

Ok, so here is some information. Let’s chart it out as best we can. Here is the diagnostic banding used by the medical profession here in the US. I note it is unadjusted for age or anything else, which is fine, got to start somewhere:

  • Normal blood pressure – below 120 / 80 mm Hg.
  • Prehypertension – 120-139 / 80-89 mm Hg.
  • Stage 1 hypertension – 140-159 / 90-99 mm Hg.
  • Stage 2 hypertension – 160 / 100 mm Hg or higher.

The meta-study above further divides the prehypertension range into a high and low as follows:

  • Low Prehypertension – 120–129/80–84 mmHg
  • High Prehypertension – 130–139/85–89 mmHg
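In code, the banding above works out to a few threshold checks – a minimal sketch, using the usual convention (not stated in the lists themselves) that a reading falls into the highest band that either number reaches:

```python
def bp_category(systolic, diastolic):
    """Classify a BP reading (mm Hg) per the US bands above, with the
    prehypertension range split into the meta-study's low/high sub-bands."""
    if systolic >= 160 or diastolic >= 100:
        return "stage 2 hypertension"
    if systolic >= 140 or diastolic >= 90:
        return "stage 1 hypertension"
    if systolic >= 130 or diastolic >= 85:
        return "high prehypertension"
    if systolic >= 120 or diastolic >= 80:
        return "low prehypertension"
    return "normal"

print(bp_category(118, 76))  # normal
print(bp_category(134, 82))  # high prehypertension
print(bp_category(145, 88))  # stage 1 hypertension
```

The either-number-reaches-it rule is why the bands, though written as paired ranges, don’t overlap or leave gaps.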

This particular study does nothing with Stage 1 and 2 hypertension – too bad. But it’s mostly those prehypertension numbers I’m worried about personally. Anyway, here’s what we’ve got so far.

[BP graph 1]

We will here ignore what looks like a bit of statistical hoodoo – we’re blending different studies, calculating p-values and confidence intervals for the combined results – um, maybe? Perhaps if Mr. Briggs or Mr. Flynn drops by, they can give a professional opinion. Me, I’m just – cautious. So, what’s this telling us?

If I’m reading it correctly – not a given by any stretch – we’ve determined the total relative risk or RR (a term of art, but it sorta means what it seems to mean) at the base state and the three partially overlapping prehypertension states based on both systolic and diastolic BP ranges, on both an ‘All Causes’ and a cardiovascular disease basis. What this appears to say is that a meta-analysis of 13 studies of nearly a million people over several decades shows that your risk of death from any cause increases .03 RR points, or 3% over the base value, if your BP runs a little high, but that your risk of death from cardiovascular disease increases 32%. Which doesn’t exactly make sense if one assumes cardiovascular diseases are part of ‘All Causes’ – and why wouldn’t they be? – unless slightly high BP somehow reduces the sum of all other risks. Also, the analyses run over the two sub-ranges of low and high prehypertension do not look like they could possibly add up to the values over the entire prehypertension range – which could well be an artifact of the statistical methods used. If that is the case, does not logic indicate that the results are quite a bit less certain than the p-values and confidence intervals would suggest? Again, I am very much an amateur, so I could be a million miles off, but these are the questions that occur to me.
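For what it’s worth, the blending itself is standard practice rather than hoodoo: in a fixed-effect meta-analysis, each study’s log relative risk is weighted by the inverse of its variance (recovered from the width of its confidence interval), and the weighted average is the pooled estimate. Here’s a sketch on three invented studies – the numbers are mine for illustration, not the meta-analysis’s:

```python
import math

# Hypothetical per-study results: (RR, lower 95% CI bound, upper 95% CI bound)
studies = [(1.10, 0.90, 1.34), (0.95, 0.80, 1.13), (1.05, 0.92, 1.20)]

log_rrs, weights = [], []
for rr, lo, hi in studies:
    # The standard error of the log-RR is the CI width on the log scale
    # divided by 2 * 1.96 (the 95% normal quantile on each side).
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    log_rrs.append(math.log(rr))
    weights.append(1 / se**2)  # inverse-variance weight

pooled = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_pooled):.2f}"
      f"-{math.exp(pooled + 1.96 * se_pooled):.2f})")
```

Note that this machinery assumes the studies are all estimating the same underlying effect; the random-effects model mentioned in the abstract relaxes that assumption when between-study heterogeneity is high – which is where my “less certain than the p-values suggest” worry lives.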

The critical piece missing for my purposes: what scale of risk does the RR here represent? A 32% increase in a .01% chance of Bad Things Happening is hardly worth thinking about; a 32% increase in a 20% risk of Bad Things is a whole ‘nuther kettle of fish.
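That point is two lines of arithmetic: a relative risk multiplies the baseline absolute risk, so the same RR of 1.32 (the abstract’s CVD-mortality figure) implies wildly different absolute stakes depending on the baseline. The two baseline figures below are the hypothetical ones from the text, not real data:

```python
rr = 1.32  # CVD-mortality relative risk from the abstract

# Two hypothetical absolute baseline risks, per the text:
for baseline in (0.0001, 0.20):
    extra = baseline * (rr - 1)  # extra absolute risk implied by the RR
    print(f"baseline {baseline:.2%} -> extra absolute risk {extra:.4%}")
```

At a 0.01% baseline, a 32% relative increase adds about three-thousandths of a percentage point of absolute risk; at a 20% baseline it adds over six percentage points. Same RR, very different kettles of fish.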

I’m about researched out for the moment, will continue to google around for more information when I get a moment.

UPDATE:

Is it obvious enough that I’m a LITTLE COMPULSIVE? Just found this, at the Wiley Online Library: 

[BP chart 2]

Don’t know if these are annualized (per-year) numbers or a total across the entire age range, and that ‘all significant’ part worries me a little, but: this seems to be saying that I, a 60 year old man, could expect a 1.8% chance of cardiovascular disease (however that’s defined) if my systolic blood pressure falls between 120 and 139 – or, more important to my purposes, only minusculely more risk than if my BP were the more desirable 120.

This is in line with what one would expect from the data in the previous chart.

Still a lot more work to do here.


AI-yai-yai.

Henry Kissinger (yes, he’s still alive – 95 yrs old. His dad made it to 95 and his mom to 98, I think, so he may be with us even longer.) has opined that we’ve got to do something about AI:

Henry Kissinger: Will artificial intelligence mean the end of the Enlightenment?

Two thoughts: Like Hank himself, it seems the Enlightenment is, surprisingly, still kicking. Also: End the Enlightenment? Where’s the parade and party being held? Oh wait – Hank thinks that would be a bad thing. Hmmm.

Onward: Dr. K opines:

“What would be the impact on history of self-learning machines —machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? [quick hint: apparently, they do] How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them?”

Note: this moment of introspection was brought about by the development of a program that can play Go way better than people. A little background: anybody can write a program to play tic-tac-toe, as the rules are clear, simple and very, very limiting: there are only 9 squares, so there will never be more than 9 options for any one move, no game can last more than nine moves, and there are at most 9! = 362,880 possible move sequences (far fewer distinct legal games). A simple program can exhaust all possible moves, dictate the next move in all possible scenarios, and thus guarantee whatever outcome the game allows and the programmer wants – win or draw, in practice.
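For the curious, here’s roughly what “exhausting all possible moves” looks like in practice – a minimal minimax search over the full tic-tac-toe game tree (the board is a 9-character string; this is an illustrative sketch, not anyone’s production engine):

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, as index triples.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value with best play: +1 if X can force a win, -1 if O can, 0 if a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    if ' ' not in board:
        return 0  # board full, no winner: draw
    other = 'O' if player == 'X' else 'X'
    vals = [value(board[:i] + player + board[i + 1:], other)
            for i, sq in enumerate(board) if sq == ' ']
    # X maximizes the value, O minimizes it.
    return max(vals) if player == 'X' else min(vals)

print(value(' ' * 9, 'X'))  # 0: with perfect play by both sides, tic-tac-toe is a draw
```

The search visits every reachable position exactly once (the cache handles repeats), which is feasible only because the game tree is tiny – the whole point of the contrast with chess and Go below.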

Chess, on the other hand, is a much harder game, with an effectively inexhaustible number of possible moves and configurations. People have been writing chess playing programs for decades, and, a few decades ago, managed to come up with programs sophisticated enough to beat any human chess player. Grossly put, they work by a combination of heuristics used to whittle choices down to more plausible moves (any chess game contains the possibility of any number of seemingly nonsensical moves), simply brute-force playing out of possible good choices for some number of moves ahead, and refinement of algorithms based on outcomes to improve the heuristics. Since you can set two machines to play each other, or one machine to play itself, for as long or as many games as you like, the possibility arises – and seems to have taken place – that, by playing millions more games than any human could ever play, measuring the outcomes, and refining their rules for picking ‘good’ moves, computers can program themselves – can learn, as enthusiasts enthusiastically anthropomorphize – to become better chess players than any human being.

Go presents yet another level of difficulty, and it was theorized not too many years ago to not be susceptible to such brute-force solutions. A Go master can study a board mid-game, and tell you which side has the stronger position, but, legendarily, cannot provide any sort of coherent reason why that side holds an advantage. The next master, examining the same board, would, it was said, reach the same conclusion, but be able to offer no better reasons why.

At least, that was the story. Because of the even greater number of possible moves and the difficulty mid-game of assessing which side held the stronger position, it was thought that Go would not fall to machines any time soon, at least, if they used the same sort of logic used to create the chess playing programs.

Evidently, this was incorrect. So now Go has suffered the same fate as chess: the best players are not players, but machines with programs that have run through millions and millions of possible games, measured the results, programmed themselves to follow paths that generate the desired results, and so now cannot be defeated by mere mortals. (1)

But of course, the claim isn’t just that AI is mastering games where the rules clearly define all possible moves and outcomes; AI is being applied to other fields as well.

After hearing this speech, Mr. Kissinger started to study the subject more thoroughly and learned that artificial intelligence goes far beyond automation. AI programs don’t deal only with the rationalization and improvement of means, they are also capable of establishing their own objectives, making judgments about the future and of improving themselves on the basis of their analysis of the data they acquire. This realization only caused Mr. Kissinger’s concerns to grow:

“How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?”

“Capable of establishing their own objectives” Um, what? They are programs, run on computers, according to the rules of computers. It happens all the time that, following the rule set – which is understood to be necessarily imperfect in accordance with Gödel’s incompleteness theorems – computer programs will do unexpected things (although I’d bet user error, especially on the part of the people who wrote the programming languages involved, is a much bigger player in such unexpected results than Gödel).

I can easily imagine that a sophisticated (read: too large to be understood by anyone and thus likely to be full of errors invisible to anyone) program might, following one set of instructions, create another set of instructions to comply with some pre-existing limitation or goal that may or may not be completely defined in itself. But I’d like to see the case where a manufacturing analysis AI, for example, sets an objective such as ‘become a tulip farmer’ and starts ordering overalls and gardening spades off Amazon. Which is exactly the kind of thing a person would do, but not the kind of thing one would expect a machine to do.

On to the Enlightenment, and Hank’s concerns:

“The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”

Anyway, go watch the videos at the bottom of the article linked above. What you see is exactly the problem Dr. K is worried about – “AI developers, as inexperienced in politics and philosophy as I am in technology” – although in a more basic and relevant context. The engineer in the videos keeps saying that they wrote a program that, without any human intervention and without any priming of the pump using existing human-played games of Go, *programmed itself* from this tabula rasa point to become the (machine) Master of (human) Masters!

When, philosophically and logically, that’s not what happened at all! The rules of the game, made up by humans and vetted over centuries by humans, contain within themselves everything which could be called the game of Go in its logical form. Thus, by playing out games under those rules, the machine is not learning something new and even less creating ex nihilo – it is much more like a clock keeping time than a human exploring the possibilities of a game.

The key point is that the rules are something, and something essential. They are the formal cause of the game. The game does not exist without them. No physical manifestation of the game is the game without being a manifestation of the rules. This is exactly the kind of sophomore-level philosophy the developers behind this program can almost be guaranteed to be lacking.

(Aside: this is also what is lacking in the supposed ‘universe simply arose from nothing at the Big Bang’ argument made by New Atheists. The marvelous and vast array of rules governing even the most basic particles and their interactions must be considered ‘nothing’ for this argument to make sense. The further difficulty arises from mistaking cause for temporal cause rather than logical cause, where the lack of a ‘before’ is claimed to invalidate all claims of causality – but that’s another topic.)

The starry-eyed developers now hope to apply the algorithms written for their Go program to other areas, since they are not dependent on Go, but were written as a general solution. A general solution, I hasten (and they do not hasten) to add: with rules, procedures and outcomes as clearly and completely defined as those governing the game of Go.

Unlike Dr. Kissinger, I am not one bit sorry to see the Enlightenment, a vicious and destructive myth with a high body count and even higher level of propaganda to this day, die ASAP. I also differ in what I fear, and I think my reality-based fears are in fact connected with why I’d be happy to see the Enlightenment in the dustbin of History (hey, that’s catchy!): What’s more likely to happen is that men, enamoured of their new toy, will proceed to insist that life really is whatever they can reduce to a set of rules a machine can follow. That’s the dystopian nightmare, in which the machines merely act out the delusions of the likes of Zuckerberg.  It’s the delusions we should fear, more than the tools this generation of rootless, self-righteous zealots dream of using to enforce them.

  1. There was a period, in the 1980s if I’m remembering correctly, where the best chess playing programs could be defeated if the human opponent merely pursued a strategy of irrational but nonfatal moves: the programs, presented repeatedly with moves that defied the programs’ heuristics, would break. But that was a brief Star Trek moment in the otherwise inexorable march forward of machines conquering all tasks that can be fully defined by rules, or at least getting better at them than any human can.

Key Psychological Study is a Fraud. Who’da Thunk It?

Confession time: This is a case, somewhat, of personal confirmation bias for me. I should have read this, when I came across it years ago, with a solid double dollop of skepticism. Instead, I was too willing to just swallow it as presented, as yet another sad example of fallen human nature. Cautionary tale, folks.

This one:

The Stanford prison experiment was an attempt to investigate the psychological effects of perceived power, focusing on the struggle between prisoners and prison officers. It was conducted at Stanford University between August 14–20, 1971, by a research group led by psychology professor Philip Zimbardo using college students. It was funded by the U.S. Office of Naval Research as an investigation into the causes of difficulties between guards and prisoners in the United States Navy and United States Marine Corps. The experiment is a topic covered in most introductory (social) psychology textbooks.

Guards and prisoners had been chosen randomly from the volunteering college students. Some participants developed their roles as the officers and enforced authoritarian measures and ultimately subjected some prisoners to psychological torture. Many of the prisoners passively accepted psychological abuse and, by the officers’ request, actively harassed other prisoners who tried to stop it. Zimbardo, in his role as the superintendent, allowed abuse to continue. Two of the prisoners left mid-experiment, and the whole exercise was abandoned after six days following the objections of graduate student Christina Maslach, whom Zimbardo was dating (and later married). Certain portions of the experiment were filmed, and excerpts of footage are publicly available.

The way it’s usually presented is that this experiment revealed that apparently normal people (you know, white male college students. What could be more normal than that?) harbor wellsprings of sadism that only require an opportunity to reveal themselves. Who knows what evil lurks in the hearts of (white college student) men? The Stanford Prison Experiment does! It is referenced in connection with the My Lai Massacre and the Armenian Genocide (no, really) to explain how American troops could shoot unarmed villagers and nice Turks could strip naked and crucify teenage girls.

More often these days, even the little bit of professional scientific restraint shown by psychologists is shed in favor of using this study as a stick to beat a particular drum. We’re supposed to believe that the Power Structure creates bad behavior. It’s Rousseau all over again, but now wearing the Sacred Lab Coat of Science! College students – gentle, loving college students who wouldn’t hurt a fly, no doubt – who would, in a state of nature (1), never dream of being sadistic, power-obsessed meanies, become sadistic, power-obsessed meanies once given POWER over other students.

It’s the power dynamic all the way down, man. Any time you see people acting sadistically, killing people, stuff like that, it’s really not their fault! Theories of sin or any other form of personal responsibility that place even part of the blame on the individual are WRONG. You want people to behave better, New Soviet Man style? Expecting them (me. us.) to behave isn’t going to get you anywhere. You need to destroy the Power Structure! This attitude, Marx’s simplification and streamlining of Hegel’s notion of the Spirit acting through History, effectively absolves individuals of all responsibility for Bad Stuff. If, as Hegel posits, the Spirit – God Himself! – is behind all this History, (frog) marching the World dialectically forward, then what difference do individual human actions – human will – make? The only virtue, such as it is, would be getting on the History train. You get run over otherwise. Marx’s trick is to remove the vaguely Judeo-Christian flavoured God of Hegel and simply assign agency to the not-at-all-a-God History, which is nonetheless a jealous God one must not get on the wrong side of.

But I saw none of this clearly. Until now: 

It was late in the evening of August 16th, 1971, and twenty-two-year-old Douglas Korpi, a slim, short-statured Berkeley graduate with a mop of pale, shaggy hair, was locked in a dark closet in the basement of the Stanford psychology department, naked beneath a thin white smock bearing the number 8612, screaming his head off.

“I mean, Jesus Christ, I’m burning up inside!” he yelled, kicking furiously at the door. “Don’t you know? I want to get out! This is all f**ked up inside! I can’t stand another night! I just can’t take it anymore!”

It was a defining moment in what has become perhaps the best-known psychology study of all time….

Zimbardo, a young Stanford psychology professor, built a mock jail in the basement of Jordan Hall and stocked it with nine “prisoners,” and nine “guards,” all male, college-age respondents to a newspaper ad who were assigned their roles at random and paid a generous daily wage to participate. The senior prison “staff” consisted of Zimbardo himself and a handful of his students.

The study was supposed to last for two weeks, but after Zimbardo’s girlfriend stopped by six days in and witnessed the conditions in the “Stanford County Jail,” she convinced him to shut it down. Since then, the tale of guards run amok and terrified prisoners breaking down one by one has become world-famous, a cultural touchstone that’s been the subject of books, documentaries, and feature films — even an episode of Veronica Mars.

The SPE is often used to teach the lesson that our behavior is profoundly affected by the social roles and situations in which we find ourselves. But its deeper, more disturbing implication is that we all have a wellspring of potential sadism lurking within us, waiting to be tapped by circumstance. It has been invoked to explain the massacre at My Lai during the Vietnam War, the Armenian genocide, and the horrors of the Holocaust. And the ultimate symbol of the agony that man helplessly inflicts on his brother is Korpi’s famous breakdown, set off after only 36 hours by the cruelty of his peers.

There’s just one problem: Korpi’s breakdown was a sham.

“Anybody who is a clinician would know that I was faking,” he told me last summer, in the first extensive interview he has granted in years. “If you listen to the tape, it’s not subtle. I’m not that good at acting. I mean, I think I do a fairly good job, but I’m more hysterical than psychotic.”

Read the article. What interests and saddens me is that the subjects of this fraud did not in fact out the dude and drag him into court for illegally imprisoning them. Why? Just a guess here: because they, too, had academic ambitions. Certainly Korpi did. Academics seem to have a certain immunity to having to behave like adults and accept consequences, because they can so easily destroy the careers of the little people under them.

So, has anybody tried to replicate this thing? Glad you asked:

According to Alex Haslam and Stephen Reicher, psychologists who co-directed an attempted replication of the Stanford prison experiment in Great Britain in 2001, a critical factor in making people commit atrocities is a leader assuring them that they are acting in the service of a higher moral cause with which they identify — for instance, scientific progress or prison reform. We have been taught that guards abused prisoners in the Stanford prison experiment because of the power of their roles, but Haslam and Reicher argue that their behavior arose instead from their identification with the experimenters, which Jaffe and Zimbardo encouraged at every turn. Eshelman, who described himself on an intake questionnaire as a “scientist at heart,” may have identified more powerfully than anyone, but Jaffe himself put it well in his self-evaluation: “I am startled by the ease with which I could turn off my sensitivity and concern for others for ‘a good cause.’”

Finally, here’s the real issue that comes up whenever the so-called Replication Crisis is brought up: careers get built on half-baked if not out and out dishonest ‘studies’ done to promote, in some order, a particular political agenda and the researcher’s career. Those screaming loudest about the evil, evil people trying and failing to replicate their studies are exactly those people who have ridden the fame of such flawed and dishonest studies to prominence and tenure.

Because that’s the way it works in the soft ‘sciences’.

The Stanford prison experiment established Zimbardo as perhaps the most prominent living American psychologist. He became the primary author of one of the field’s most popular and long-running textbooks, Psychology: Core Concepts, and the host of a 1990 PBS video series, Discovering Psychology, which gained wide usage in high school and college classes and is still screened today. Both featured the Stanford prison experiment.

  1. What is the natural environment for elite psychology students? Smoking dope on daddy’s yacht? That would indeed be pretty mellow. Meow.

Friday Roundup

Contrary to the above title, I will not be deploying a chemical solution to the weeds of the Internet, but rather plucking a few flowers:

First, David Warren talks about the history of health and medical care. He is the son of a mother who spent her career in healthcare, and so has perhaps a different perspective than most. Much of what he says is news to me: the adoption of anesthetics, which make life easier on the doctor, was all but instantaneous across the Western world once demonstrated in Boston in 1846; but the effectiveness of sterile surgery in preventing infections and increasing patient survival chances, demonstrated in 1865 in Glasgow, was not universally adopted until well into the 20th century – soldiers by the thousands were dying of easily preventable infections in WWI, half a century after it had been demonstrated that such deaths were easily preventable. If you sedate patients, you can more easily perform surgery and people will more easily submit to it (that anyone submitted to significant surgery without being anesthetized is frankly amazing) – and that’s good for business. But someone with a bullet wound is in no condition to complain about filthy conditions in your operating room. He’ll be in no condition to complain about much of anything soon enough, most likely.

His point is that history shows doctors are all over changes that make their lives easier, and less ready to adopt changes that impose work on them – while simple enough in principle, keeping things sterile is a lot of work – even when the change benefits the patients greatly. I think it’s fair to say Warren isn’t picking on doctors uniquely here, but rather pointing out how things work among us fallen people.

Hardworking, dedicated people. But people, nonetheless, subject to the same foibles and temptations as anyone else.

One of his main points is also one of my main points, made occasionally here: you want to improve people’s health? Sanitation and clean water get you most of the benefits. The formula I usually use: Sanitation, clean water, plenty of calories on a regular basis and political stability.  These cover by far the lion’s share of the improvement in life expectancy that the modern world has experienced.

Then, for medicine, it’s the cheap basic stuff that provides almost all of the benefits: antibiotics, vaccines, and a sterile environment for medical care. Now, I am personally very grateful for some of the fancier stuff – blood pressure meds, various straightforward surgeries, and – very big one, this – modern dental care (people died from infections around impacted teeth and dental abscesses, or had their health greatly compromised).

I’ve now made it to 60, which means, historically, I’m playing with house money from here on out.  Life expectancy for an American male in 1900 was 49. While there are many cases like me, of people whose comparative longevity and vigor have resulted from some more advanced medical care, the overall increase is due almost entirely to simple, cheap, proven practices.

Lots more good stuff in that essay – you’d do better reading it than hanging out here, that’s for sure.

Next, some Feynman quotations, in honor of his 100th birthday (he didn’t make it to the celebration). These are not my personal favorites, except for these:

2. “I think I can safely say that nobody understands quantum mechanics.”

As Feynman said in The Character of Physical Law, many people understand other sophisticated physical theories, including Einstein’s relativity. But quantum mechanics resists an equivalent depth of understanding. Some disagree, proclaiming that they understand quantum mechanics perfectly well. But their understanding disagrees with the supposed understanding of others, equally knowledgeable. Perhaps Feynman’s sentiment might better be expressed by saying that anyone who claims to understand quantum mechanics, doesn’t.

I find this comforting, somehow, as I certainly don’t understand it. Also, it illustrates something that would fall out from the assumption that the mind – oops, the brain, which is assumed to be the mind and not merely the organ of the mind – resulted from Natural Selection: we would not expect anything in nature that falls outside the realm of things that affect our survival chances to be understandable by a brain designed by exactly those things which affect our survival chances. Quantum mechanics cannot have had any role in our ancestors’ environment of evolutionary adaptation. They did not shape spears or hunt warthogs better based on their understanding of Heisenberg, with those with a better understanding somehow, all other things being equal, killing more warthogs.

The more interesting question: how is it, under Natural Selection, that anybody cares about quantum mechanics at all? Or about any of the other millions of things humans have been interested in, obsessed over, even, that have no effect on our survival chances? I think the claim that, somehow, understanding quantum mechanics, however imperfectly – or art or music or philosophy, and so on – falls out from evolutionary theory should be accompanied by evidence that masters of such fields have comparatively many and vigorous children. Otherwise, it is a just-so story in the face of contrary evidence.

Natural Selection, while beautiful in its way, cannot be the whole story.

1. “The first principle is that you must not fool yourself — and you are the easiest person to fool.”

The best Feynman quote of all (from a 1974 address), and the best advice to scientists and anybody else who seeks the truth about the world. The truth may not be what you’d like it to be, or what would be best for you, or what your preconceived philosophy tells you that it is. Unless you recognize how easily you can be fooled, you will be.

This idea is unknown to most people, it seems, and applied to others first by most who do know it – that guy over there is fooling himself. Rare is the man who consistently applies this to himself first. I aspire to this, but it is hard.

Next, on Twitter, which I use to publicize posts here and follow writers, Catholics, scientists and the various combinations of those traits the real world provides, I’m conducting a whimsical experiment. 20 years ago, I wrote a bunch of jokes for a defunct humor list. No money involved, mostly just the honor of amusing the other writers. While obviously I’m not great at it – I’m working a desk job, and have been for almost 40 years now – I did make the honorary Hall of Fame and have a small pile of fan letters from readers. So, no great shakes, but not totally worthless either.

So I’ve taken to posting a recycled joke or 2 each day, to see what happens. Note: I think I understand Twitter about as well as I understand quantum mechanics. ‘Impressions’ is twitterese for eyeballs (well, eyeballs divided by roughly 2) in front of which your tweet passes. I have a bit over 200 ‘followers’ who, if they all checked their feed every day, should result in about 200 ‘impressions’ per tweet as a baseline.

Some followers are just people trying to sell stuff – they come and go, and at any rate are not checking their twitter feeds every day. The reality is that I’ll get around 100 ‘impressions’ for just some random tweet. But if people ‘like’ or better ‘retweet’ a tweet of mine, I’ll get well over 100, since the tweet is now exposed to the eyeballs following the likers or retweeters.

Clear?
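If not, the arithmetic above can be sketched as a toy model. All the numbers and the check-rate assumption are mine, for illustration – Twitter’s actual counting is (as noted) about as transparent to me as quantum mechanics:

```python
# Toy model of the 'impressions' arithmetic: baseline exposure from your own
# followers, plus a boost when likers/retweeters expose the tweet to theirs.
# All parameters are illustrative assumptions, not real Twitter data.

def estimated_impressions(followers, check_rate, retweets=0, avg_retweeter_followers=0):
    """Rough eyeball count for a tweet.

    followers: the author's follower count
    check_rate: fraction of an audience that actually sees its feed that day
    retweets: number of likes/retweets re-exposing the tweet
    avg_retweeter_followers: average audience each retweet adds
    """
    baseline = followers * check_rate
    boost = retweets * avg_retweeter_followers * check_rate
    return round(baseline + boost)

# ~200 followers, half of whom check their feed: the ~100 baseline above
print(estimated_impressions(200, 0.5))  # → 100

# a couple of retweets to modest audiences pushes it well past the baseline
print(estimated_impressions(200, 0.5, retweets=2, avg_retweeter_followers=300))  # → 400
```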

Once, I tweeted some insults at Carl Sagan, Bill Nye and that Tyson fellow. These resulted in over 100,000 impressions a day for a few days. People took sides, tempers flared, people insulted me back. Good times. Otherwise, I’ll post from 0 to 2 tweets and get 250 impressions on a typical day.  Days go by without me tweeting or even looking. I really have no idea what to make of Twitter.

Anyway, back to my research. What I’ve discovered: using ‘impressions’ as a surrogate for ‘funny’, I have no idea what other people think is funny.  What I mean: if people ‘like’ and retweet a joke, more eyeballs hit it. If they don’t, fewer. So the number of eyeballs could be a measure of how funny a joke is. Maybe. It’s a stretch, but – maybe.

Back when I was writing these jokes, this one – which I tossed off as a ‘meh’ joke – was, based on largely anecdotal evidence, my most admired quip, for reasons that escape me:

People often wonder what it is that makes the Beatles so great. I think it is probably their songs.

That got a fairly decent response on Twitter. I have no idea why. I’ve only been doing this for a week, but so far, my ‘best’ joke by far based on Twitter metrics is:

The tiny fish gets eaten by the little fish, which gets eaten by the big fish, which gets eaten by the bigger fish, which gets put into a can and fed to my cat.  Personally, I’m holding off marveling at the grandeur of Nature at least until the cat can use the can opener.

Ok, I guess. Over 700 sets of eyeballs have seen this in the couple hours since I threw it up there.

For comparison, here are two of my favorites that still crack me up, in the form of headlines – haven’t tweeted them yet:

Child Development Center Releases Prototype

and

Phlogiston Blamed in Antique Shop Fire

My understanding of humor is, um, idiosyncratic? Is that too nice a way to put it?


Data

(30 seconds of web searching didn’t uncover one of my favorite cartoons – a solid, no-nonsense business man at his desk, reading a magazine titled “Raw Data”. So you’ll just have to imagine it.)

Two items drifted across my computer screen recently, brought to our attention by author and inventor Hans Schantz:

First off, a way-cool map

This is a way-cool map, but it brought up a few questions. I responded:

Fascinating. Does this show that Europeans are much better at keeping track of their battles, engage in more formal battles, some combination, or what?

He didn’t know. I didn’t see this information on the map’s web page, but I didn’t really search hard, either. And:

Big picture data collection/validation problems are often ignored. e.g., what’s a battle? 10 guys throwing rocks? 20? How about spears? Is a siege a battle? How about an invasion, with little resistance? All those Italian Renaissance ‘battles’ w/ mercenaries & few/no casualties?

A few other considerations: 10 guys throwing rocks might, indeed, be a battle if we’re talking Irish villages of 1,500 years ago or conflicts between hunter-gatherer tribes, while a hundred men with machine guns slugging it out on the periphery of major battle lines might not even qualify as a footnote. Was the bombing of Nagasaki a battle? Why or why not? And so on.

I dig a good map as much as anyone, and admire clever representations of data. But, alas! experience has taught me the sad truth: few, if any, popular maps/data representations are worth the electronic media they are encoded in. This map says, at a glance, that Europeans have many more recorded battles than anybody else. One is sore tempted to think, therefore, Europeans are just that much more violent than other peoples.

Well? Does the map actually say that? We can’t tell without a boatload more information. We do know that, in general, Europeans (and Southwest Asians and Egyptians) were much more into written (and engraved – you get the picture) records than most other cultures at most other times.

Next, this movie of the sun at various wavelengths and enhanced various ways.

Really beautiful stuff, and glorious to see the coronal mass ejections and the magnetic field lines.

But, obviously, we, meaning actual human beings, are not ‘seeing’ any of this directly. All the images of the sun are heavily filtered and enhanced to give us these views. This is true not only here, where simply looking at the sun would blind us, but also for all those glorious Hubble pictures and the fly-by images we get of the planets and other objects in the solar system.

This is a different kind of data problem: we’re trusting that the technicians who worked on this are trying to show us what’s there, in a way. What’s there is, strictly speaking, mostly invisible to us – too bright, too dim, not enough contrast, and so on. I trust the technicians are trying merely to give us the most beautiful and informative pictures they can, mostly because that serves both their mission and their interests. Not so much on the battles map, because it could be used to serve a popular political position. Not saying the map makers are necessarily doing that, just that it very easily could be done. Examples of just such underhanded dishonesty are unfortunately legion.

Data points get made into facts, as Mike Flynn often points out, via the assumptions and theories that surround their collection and presentation. No great landmines in these two examples, but even here it bears keeping in mind.

Bad Numbers. Bad Assertions.

Swamped. Brief notes:

I have my doubts.

A. Slipped up and listened to the news over the radio on the drive in today. Heard the assertion that the stock market is down due to uncertainty over the China trade situation. Such single causes are routinely proposed for whatever the markets do every day.

I am amazed that people can say stuff like this with a straight face. Thousands if not millions of individuals and institutions make buy and sell decisions on stock exchanges every hour. Many if not most of these trades reflect the workings of more or less sophisticated strategies worked out months or years or lifetimes in advance of any individual event. Even more basic, it’s people making decisions in private.  Fundamentally, that’s what a market is. Buyers buy at what sellers are willing to sell for; sellers sell for what buyers are willing to pay. Yet we accept that there is *a* cause to whatever the market is doing at the moment?

B. Saw a claim that the current administration is evil and stupid for wanting to create a database of social security numbers for all food stamp recipients, to fight double-dipping across state lines, since less than 1% of recipients in fact double dip.

I don’t know anything about this issue, whether it’s big enough to warrant this or any action. I sort of think not. But I have to wonder: lacking precisely the data such a database would collect, how would one come up with that “less than 1%” claim? You send out a bunch of sociology students to hang out at supermarkets asking people paying with food stamps if they double dip? Or what? Seems a totally made up number, that, given the political motivations for believing it, will soon attain to Scriptural levels of certainty. If it hasn’t already.

C. The human capacity to not mentally break in half from the whiplash caused by snapping from one extreme position to its opposite continues to amaze. The current manifestation: the claim that Trump was going to cause WWIII and the concomitant nuclear holocaust by being mean to North Korea has been replaced with nary a pause by the claim that the ending of hostilities in Korea after 70 years is really no big deal (1), dancing in the streets by actual Koreans notwithstanding. These positions seem to be spouted by exactly the same people more often than not.

Um, what? I’m reminded of cult leaders, who keep the loyalty and even love of their followers right up to and past drinking the Kool-Aid. It seems nothing so mundane as reality can dissuade the True Believers. Me? I share the evident joy of the Koreans, who seem to me to be in the best position to know what’s going on.

  1. The conspiracy theories that have mushroomed up around Trump’s success put fake moon-landing and flat earthers to shame.

Better Living Through Cumulative Engineering

Got these excruciating posts on Deep Topics(tm) that I’m bogged down on because I have a cold and the concomitant even-more-than-usual muddled head (all together now: “poor baby!”). So: lighter observations:

Lee Iacocca tells the story in his autobiography(1) of his first day at work for Ford. After getting an engineering degree and an MBA from Harvard, he’s assigned to work on improving the design of a spring. He spent his first day studying a spring used someplace in some Ford vehicle or other, then marched off to request transfer into sales.

What’s striking me today about this story: it is very probable that many men spent many hours working on that spring over the years. There was no doubt a set of specs for that spring, such as how big it could be, how long it had to last, how strong and resilient it needed to be to do its job. That was probably a pretty darn good spring. Ford then assigns a highly intelligent, highly trained young man to look at it again.

A generic toilet seat, for illustrative purposes only. Not exactly the model I used.  Like you care.

This story was brought to mind because I, rising from my sickbed (that’s your cue to cry me a river), replaced a toilet seat Tuesday in the front downstairs bathroom. The crummy plastic one that came with the toilets 15 years ago broke a hinge, so I got a slightly less crummy one with metal hinges and a sturdier-looking lid and seat to replace it.

What’s of note here, apart from my manly competence (I even had to use a screwdriver!) is that a crummy plastic toilet seat lasted 15 years in the most-used bathroom in a house of 6-7 people. Not bad, really. Further, the replacement seat used some pretty fancy engineering for the attachment to the bowl. I was impressed.

The spec for toilet seat fasteners includes some fairly stringent requirements, due to the, shall we say, environment in which they are to be deployed. First, they can’t rust. The top of a toilet bowl tends to be a damp, corrosive place. Second, since the bowl is porcelain, the fasteners must hold tight but not too tight, or they will crack the bowl. Third, they must be cheap. Nobody is going NASA-level on toilet seat fasteners.

The traditional approach, at least in my very small experience, is to use brass screws (don’t corrode like steel) and rubber or plastic gaskets and nuts, which will not permit overtightening. The nuts will break first. The nuts are winged, so that, when tightened from above, the wings will contact the underside of the bowl enough not to turn – handy.

And it works – OK. The cheap, 15-year-old seat fasteners require regular tightening. This need was evidently anticipated by the engineers, who put slotted bolt heads under little plastic flaps at the back of the seat, so that they can be easily retightened using a screwdriver or a dime (my preferred method, although I’m compelled to wash the dime before putting it back in my pocket).

This new seat, which I’m guessing is heavy duty hardboard with a thick plastic coating, came with something else: stainless steel threaded rods, a force-fitted plastic collar for the top and a long plastic ‘nut’ with an hexagonal cross-section for the bottom. The rods screwed into the hinges and the collars made sure the rods didn’t make contact with the bowl. I’m thinking the engineers thought this separation would reduce the risk of both cracking the bowl and corroding the rod.

The nuts, which are a bit like short straws only threaded in the middle, could only be tightened from below. Too early to tell if this was part of the plan – the engineers assuming they had solved the loosening-over-time issue – or just a case of valuing aesthetics over ease. The engineering coolness: the nuts have a hexagonal nub on the end that, according to the instructions, is supposed to break off once you’ve gotten the nuts to the proper tightness, solving the overtightening issue. (A determined monkey could grab the remainder of the nut and keep turning, but that would be stupid.)

This tech might be 100 years old, for all I know. That’s not the point. Somebody still had to notice it, and apply it to something as utterly mundane and as lacking in glamour as toilet seats.

The point of all this: we live in a time and place where real engineers spend real time on issues as utterly trivial as springs in cars and fasteners on toilet seats, and have done so now for generations. There are hardly any aspects of our physical lives that have not been touched and improved by some unknown engineers somewhere improving this or that gadget or tool. Our cars are safer, last longer and are easier and more fun to drive. Our utilities just work. The lines painted on our roads last longer and reflect the light. We guys can get a good shave without committing facial seppuku even when half-asleep. And so on.  All these little changes have made life easier and more pleasant, overall.

While we are perhaps more aware of stupid innovations that fail to make life easier (*cough* Microsoft *cough*), it would be good to also notice all the little improvements that are so easy to miss because they Just Work. Science gets all the attention, and it is indispensable. But without all the endless mundane engineering, science would just be pie in the sky dreaming.

So, cool, and thanks to the unknown army of engineers. The toilet seat failure reminds me, however, that I now have to face replacing pretty much all the appliances we bought 15 years ago, as they march right past their use by dates and start falling apart. Guessing 10 years was probably the engineering target. There’s a dark side to almost everything.

(Would also mention that this cumulative engineering is the sort of thing a free market does well and a managed market very poorly or not at all, but this is a happy occasion! Let’s not bicker about ‘o killed ‘o!)

  1. Yes, I’ve read Iacocca’s autobiography. Hey, I was young and foolish and stuck some place that had a copy of it on the shelf. I probably am not remembering it right. So sue me.