AI, AI, Oh.

Old MacBezos Had a Server Farm…

Free-associating there, a little. Pardon me.

Seems AI is on a lot of people’s minds these days. I, along with many, have my doubts:

My opinion: there are a lot of physical processes well suited to the very fancy automation that today is called AI. Such AI could put most underwriters, investment analysts, and hardware designers out of a job, like telegraph agents and buggy whip makers before them. I also think there’s an awful lot of the ‘we’re almost there!’ noise surrounding AI that has surrounded commercial nuclear fusion for my entire life – it’s always just around the corner, it’s always just a few technical details that need working out.

But it’s still not here. Both commercial nuclear fusion and AI, in the manner I am talking about, may come, and may even come soon. But I’m not holding my breath.

And this is not the sort of strong AI – you know, the Commander Data kind of AI – that gets the human-rights-for-robots discussions going. For philosophical reasons, I have my doubts human beings can create intellect (other than in the old-fashioned baby-making way), no matter how much emergent-properties handwavium is applied. Onward:

Here is the esteemed William Briggs, Statistician to the Stars, taking a shot at the “burgeoning digital afterlife industry”. Some geniuses have decided to one-up the standard Las Vegas psychic lounge routine by training computers to do it. In that routine, a combination of research (“hot readings”) and clever dialogue (“cold readings”) lets a performer give the gullible the impression he is a mind reader.

Hot readings are cheating. Cons peek in wallets, purses, and now on the Internet, and note relevant facts, such as addresses, birthdays, and various other bits of personal information. Cold readings are when the con probes the mark, trying many different lines of inquiry—“I see the letter ‘M’”—which rely on the mark providing relevant feedback. “I had a pet duck when I was four named Missy?” “That’s it! Missy misses you from Duck Heaven.” “You can see!”

You might not believe it, but cold reading is shockingly effective. I have used it many times in practicing mentalism (mental magic), all under the guise of “scientific psychological theory.” People want to believe in psychics, and they want to believe in science maybe even more.

Briggs notes that this is a form of the Turing Test, and points to a wonderful 1990 interview of Mortimer Adler by William F. Buckley, wherein they discuss the notions of intellect, brain, and human thought. Well worth the 10 minutes to watch.

In Machine Learning Disability, esteemed writer and theologian Brian Niemeier recounts, first, a story much like the one I reference in my tweet above: how an algorithm trained to do one thing – identify hit songs across many media in near real time – generates a hilarious false positive when an old pirated and memed clip goes viral.

Then it gets all serious. All this Big Data science you’ve been hearing of, and upon which the Google, Facebook, and Amazon fortunes are built, is very, very iffy – no better than the Billboard algorithms that generated the false positive. Less obvious is that people are now using Big Data science to prove all sorts of things. In my gimlet-eyed take, doing research on giant datasets is a great way to bury your assumptions and biases so that they’re very hard to find. This, on top of the errors built into the sampling, the methodology, and the algorithms themselves – errors upon errors upon errors.

As Niemeier points out, just having huge amounts of data is no guarantee you are doing good science; in fact, it multiplies the opportunities to get it wrong. Briggs points out in his essay how easily people are fooled, and how doggedly they’ll stick to their beliefs even in the face of contrary evidence. Put these things together, and it’s pretty scary out there.

I’m always amazed that people who have worked around computers fall for any of this. Every geek with a shred of self-awareness (not a given by any means) has multiple stories about programs and hardware doing stupid things: how no one could have possibly imagined a user doing X, and so (best case) X crashes the system or (worst case) X propagates and goes unnoticed for years until the error is subtle, ingrained, and permanent. Depending on the error, this could be bad. Big Data is a perfect environment for this latter result.

John C. Wright also gets in on the AI kerfuffle, referencing the Briggs post and adding his own inimitable comments.

Finally, Dust, a YouTube channel featuring science fiction short films, recently had an “AI Week” where the shorts were all based on AI themes. One film took a machine learning tool, fed it a bunch of sci-fi classics and not-so-classics, and had it write a script, following the procedure used by short film competitions. And then they shot the film. The results are always painful, but occasionally painfully funny. The actors should get Oscar nominations in the new Lucas Memorial Best Straight Faces When Saying Really Stupid Dialogue category.


Wet Enough for You? Philip Marlowe Edition

From the L.A. Times: Why L.A. is having such a wet winter after years of drought conditions. (Warning: they’ll let you look at their site for a while, then cut you off like a barkeep when closing time approaches.) Haven’t looked at the article yet, but I’ll fall off my chair if the answer doesn’t contain global warming/climate change.

But I have some ideas of my own. Historical data on seasonal rainfall totals for Los Angeles over the last 140+ years is readily available on the web. I took that data, and did a little light analysis.

Average seasonal rainfall in L.A. is 14.70″. 60% of the time, rainfall is below average; 40% above. Percentage of seasons with:

  • less than 75% of average rain: 32.62%
  • between 75% and 125% of average: 39.01%
  • more than 125% of average: 28.37%

“Normal” rainfall covers a pretty wide range, one would reasonably suppose. Getting a lot or a little seems somewhat more likely than getting somewhere around average. This fits with my experience growing up in L.A. (18 year sample size, use with caution.)
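
If you want to check the arithmetic, the whole analysis fits in a few lines. A minimal sketch in Python (the short rainfall list is a stand-in; fill it with the ~141 seasonal totals from the historical tables):

```python
# Minimal sketch of the "light analysis" above. The rainfall list is a
# stand-in: load the ~141 seasonal totals (in inches) from the web.
rainfall = [4.79, 19.00, 9.65, 8.52, 6.08]  # ...one entry per season

avg = sum(rainfall) / len(rainfall)
below = sum(1 for r in rainfall if r < avg) / len(rainfall)

low  = sum(1 for r in rainfall if r < 0.75 * avg) / len(rainfall)
mid  = sum(1 for r in rainfall if 0.75 * avg <= r <= 1.25 * avg) / len(rainfall)
high = sum(1 for r in rainfall if r > 1.25 * avg) / len(rainfall)

print(f"average: {avg:.2f} in")
print(f"below average: {below:.0%}, above: {1 - below:.0%}")
print(f"under 75% of avg: {low:.2%}, 75%-125%: {mid:.2%}, over 125%: {high:.2%}")
```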

The last 20 years look like:

Season (July 1-June 30) | Total Rainfall, Inches | Variance from Avg, Inches
2017-2018 | 4.79 | -9.91
2016-2017 | 19.00 | 4.30
2015-2016 | 9.65 | -5.05
2014-2015 | 8.52 | -6.18
2013-2014 | 6.08 | -8.62
2012-2013 | 5.85 | -8.85
2011-2012 | 8.69 | -6.01
2010-2011 | 20.20 | 5.50
2009-2010 | 16.36 | 1.66
2008-2009 | 9.08 | -5.62
2007-2008 | 13.53 | -1.17
2006-2007 | 3.21 | -11.49
2005-2006 | 13.19 | -1.51
2004-2005 | 37.25 | 22.55
2003-2004 | 9.24 | -5.46
2002-2003 | 16.49 | 1.79
2001-2002 | 4.42 | -10.28
2000-2001 | 17.94 | 3.24
1999-2000 | 11.57 | -3.13
1998-1999 | 9.09 | -5.61

14 years out of 20 (70%) are under average; 6 above. Those 5 under-average years in a row (2011-2012 through 2015-2016) stand out, as does the stretch from 2005-2006 to 2015-2016, with 9 years out of 11 under. (That 37.25-inch season in 2004-2005, 22.55 inches over average, also stands out – very wet by L.A. standards.)

Wow, that does look bad. So does this stretch, with 7 out of 8 under:

Season (July 1-June 30) | Total Rainfall, Inches | Variance from Avg, Inches
1924-1925 | 7.38 | -7.32
1923-1924 | 6.67 | -8.03
1922-1923 | 9.59 | -5.11
1921-1922 | 19.66 | 4.96
1920-1921 | 13.71 | -0.99
1919-1920 | 12.52 | -2.18
1918-1919 | 8.58 | -6.12
1917-1918 | 13.86 | -0.84

And this one, with 10 out of 11:

Season (July 1-June 30) | Total Rainfall, Inches | Variance from Avg, Inches
1954-1955 | 11.94 | -2.76
1953-1954 | 11.99 | -2.71
1952-1953 | 9.46 | -5.24
1951-1952 | 26.21 | 11.51
1950-1951 | 8.21 | -6.49
1949-1950 | 10.60 | -4.10
1948-1949 | 7.99 | -6.71
1947-1948 | 7.22 | -7.48
1946-1947 | 12.61 | -2.09
1945-1946 | 12.13 | -2.57
1944-1945 | 11.58 | -3.12

Or this, with 6 out of 7:

Season (July 1-June 30) | Total Rainfall, Inches | Variance from Avg, Inches
1964-1965 | 13.69 | -1.01
1963-1964 | 7.93 | -6.77
1962-1963 | 8.38 | -6.32
1961-1962 | 18.79 | 4.09
1960-1961 | 4.85 | -9.85
1959-1960 | 8.18 | -6.52
1958-1959 | 5.58 | -9.12

This last cherry-picked selection is also like the most recent years in that annual rainfall is not just under average, but way under: 5 of its 6 below-average years came in more than 6″ under. In the recent sample, 5 of the 7 years prior to this one were more than 6″ under, and one more was over 5″ under.

How often does L.A. get rainfall 6″ or more under average? About 22% of the time. So, hardly unusual, and, given a big enough sample (evidently not a very big one), you would expect to find the sorts of patterns we see here even if every year’s rainfall were a completely independent event from the preceding year or years – which it would be foolish to assume. It would make at least as much sense to think there are big multi-year, multi-decade, multi-century and so on cycles – cycles that would take much larger samples of seasonal rainfall to detect. And those cycles could very well interact – cycles within cycles.
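
To make the “big enough sample” point concrete, here is a toy simulation, entirely my own illustration: treat each season as an independent draw with a 22% chance of coming in 6″ or more under average, and count how often a 141-year record contains a streak of three or more such very dry years.

```python
import random

# Toy model: each season is an independent draw with a 22% chance of
# landing 6" or more under average (the figure from the post). How often
# does a 141-year record then contain a run of 3+ very dry years?
P_VERY_DRY, YEARS, TRIALS = 0.22, 141, 10_000

def longest_run(flags):
    best = cur = 0
    for dry in flags:
        cur = cur + 1 if dry else 0
        best = max(best, cur)
    return best

hits = sum(
    longest_run([random.random() < P_VERY_DRY for _ in range(YEARS)]) >= 3
    for _ in range(TRIALS)
)
print(f"records with a 3+ year very-dry streak: {hits / TRIALS:.0%}")
```

Run something like this and such streaks show up in roughly two-thirds of the simulated records: multi-year dry spells are what independence looks like, not evidence against it.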

Problem is, I’ve got 141 years of data, so I can’t say. I suspect nobody can. Given the poorly understood cycles in the oceans and sun, and the effect of the moon on the oceans and atmosphere – all of which it would be reasonable to assume affect weather and rainfall – we’re far from discovering the causes of the little patterns cherry-picking the data might present to us. Those patterns only tell us that rainfall seems to come in stretches, where one dry year is often followed by one or two or even four or five more dry years. And sometimes not.

L.A. also gets stretches such as this:

Season (July 1-June 30) | Total Rainfall, Inches | Variance from Avg, Inches
1943-1944 | 19.21 | 4.51
1942-1943 | 19.17 | 4.47
1941-1942 | 11.18 | -3.52
1940-1941 | 32.76 | 18.06
1939-1940 | 18.96 | 4.26
1938-1939 | 13.06 | -1.64
1937-1938 | 23.43 | 8.73
1936-1937 | 22.41 | 7.71
1935-1936 | 12.07 | -2.63
1934-1935 | 21.66 | 6.96

Not only are 7 out of 10 years wetter than average, the 3 years under average are only a little short. This would help explain why it is so often raining in Raymond Chandler stories set in L.A. – this sample of years overlaps most of his masterpieces.

It could be raining outside – hard to tell, and I don’t remember. Just work with me here, OK?

The L.A. Times sees something in this data-based Rorschach test; I see nothing much. Let’s see what the article says:

Nothing. The headline writer, the editor, and the reporter evidently don’t talk to each other, as the article as published makes no attempt to answer, or even address, the question implied in the headline. It’s just a glorified weather report cobbled together from interviews conducted over the last several months. Conclusion: things seem OK, water-system-wise, for now, but keep some panic on slow simmer, just in case. Something like that.

Oh, well. You win some, you lose some. That *thunk* you hear is me falling out of my chair.

A Cultivated Mind

Just kidding! I think!

Here I wrote about how I’m trying to help this admirably curious young man, for whom I am RCIA sponsor, on his intellectual journey. I’m no Socrates, but I do know a thing or two that this young man is not going to pick up at school, and that would be helpful to him and, frankly, to the world. Any effort to get a little educated and shine a little light into the surrounding darkness seems a good thing to me.

I figure I’ll give him a single page every week or so when I see him, with the offer to talk it over whenever he’s available. Below is the content of the second page; you can see the first in the post linked above. We started off with a description of Truth and Knowledge. I figure the idea of a cultivated mind might be good next. We’ll wrap it up with a page on the Good and one on the Beautiful, and see where it goes from there.

Any thoughts/corrections appreciated.

A Cultivated Mind

A cultivated mind can consider an idea without accepting it.

What is meant by a “cultivated mind”?

Like a cultivated field:

  • Is meant for things to be planted and grown in it
  • Is weeded of bad habits and bad ideas
  • Is cared for daily

A cultivated mind

  • is what a civilized and educated man strives to have.
  • is not snobby or elitist.
  • is what is required to honestly face the world.
  • is open to new ideas, but considers them rationally before accepting them.

How do you cultivate your mind?

Reexamine the ideas you find most attractive:

  • Have you accepted them because you like them, or because you examined them and believe them true?

Carefully review all popular ideas:

  • Have you accepted them because to reject them might make you unpopular?
  • Have you really examined them before accepting them?

Double your efforts to be fair when considering ideas you do not like:

  • Can you restate the idea in terms that people who accept it would recognize and agree with? If not, you are not able to truly consider the idea.

NOTE 1: To engage ideas, listen to and read what people who hold those ideas say, especially when you don’t like them or already disagree. You must hear and understand what the idea really is before you can consider it. This takes discipline and time.

NOTE 2: This is a life-long project, always subject to revision. Guard against overcertainty; avoid exaggeration. Do not pretend to know what you do not know. Acknowledge that some things are difficult, and can only be known partially.

Follow the Dominican maxim: “Seldom affirm, never deny, always distinguish.”


“Forgive him, but as you can see, he has no brain.” “Turns out you don’t need one. Totally overrated!”

 


How’s the Weather? 2018/2019 Update

In a recent post here you could almost hear the disappointment in the climate scientists’ words as they recounted the terrible truth: that, despite what the models were saying would happen, snowpack in the mountains of the western U.S. had not declined at all over the last 35 years. This got me thinking about the weather, as weather over time equals climate. So I looked into the history of the Sierra snowpack. Interesting stuff.

From a September 2015 article from the LA Times

This chart accompanies a September 14th, 2015 article in the LA Times: Sierra Nevada snowpack is much worse than thought: a 500-year low.

When California Gov. Jerry Brown stood in a snowless Sierra Nevada meadow on April 1 and ordered unprecedented drought restrictions, it was the first time in 75 years that the area had lacked any sign of spring snow.

Now researchers say this year’s record-low snowpack may be far more historic — and ominous — than previously realized.

A couple of commendable things stand out from this chart. First, it is a very pleasant surprise to see the data sources acknowledged. From 1930 on, people took direct measurements of the snowpack. The way they do it today is two-fold: they stick a long, hollow, calibrated pole into the snow until they hit dirt, and simply read the depth off the numbers on the side of the pole. The snow tends to stick inside the pole, which they can then weigh to see how much water is in the snow. They take these measurements in the same places on the same dates over the years, to get as close to an apples-to-apples comparison as they can. Very elegant and scientifilicious.
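
The arithmetic behind that weighing step is simple. A sketch, with made-up numbers (the tube diameter and core weight here are illustrative, not the surveyors’ actual equipment specs): the core’s weight, divided by the weight of an equal-footprint column of water, gives the inches of water locked up in the snowpack.

```python
import math

# Illustrative snow-survey arithmetic: convert the weight of a snow core
# to inches of water (the "snow water equivalent"). Tube diameter and
# core weight are made-up numbers, not real survey equipment specs.
WATER_DENSITY = 62.4  # lb per cubic foot

def snow_water_equivalent_in(core_weight_lb, tube_diameter_in):
    """Inches of water represented by a snow core of the given weight."""
    area_sqft = math.pi * (tube_diameter_in / 12 / 2) ** 2
    water_column_ft = core_weight_lb / (WATER_DENSITY * area_sqft)
    return water_column_ft * 12

# e.g. a 2 lb core pulled with a 1.5" diameter tube:
print(f"{snow_water_equivalent_in(2.0, 1.5):.1f} inches of water")  # ~31 in
```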

They also have many automated stations that measure such things in a fancy automatic way. I assume they did it the first way back in 1930, and added the fancy way over time as the tech became available. Either way, we’re looking at actual snow more or less directly.

Today’s results from the automated system. From the California Data Exchange System.

Prior to 1930, there was no standard way of doing this, and, I’d suppose, prior to the early 1800s at the earliest, nobody really thought much about doing it. Instead, modern researchers looked at tree rings to get a ballpark idea.

I have some confidence in their proxy method simply because it passes the eye test: in that first chart, the patterns and extremes in the proxies look pretty much exactly like the patterns and extremes measured more directly over the past 85 years. But that’s just a gut feel; maybe there’s some unconscious forcing going on, some understatement of uncertainty, or some other factor making the pre-1930 estimates less like the post-1930 measurements. But it’s good solid science to own up to the different nature of the numbers. We’re not doing an apples-to-apples comparison, even if it looks pretty reasonable.

The second thing to commend the Times on: they included this chart, even though it does not in fact support the panic-mongering in the headline. It would have been very easy to leave it out, and avoid the admittedly small chance readers might notice that, while the claim that the 2015 snowpack was the lowest in 500 years might conceivably be true, similarly very low snowpacks have been a pretty regular occurrence over those same 500 years. Further, readers might notice that those very low years have been soon followed by some really high years, without exception.

Ominous, we are told. What did happen? The 2015-2016 snowpack was around average, 2016-2017 was near-record deep, 2017-2018 again around average. So far, the 2018-2019 season, as the chart from the automated system shows, is at 128% of the season-to-date average. What the chart doesn’t show: a huge storm is rolling in later this week, forecast to drop 5 to 8 feet of additional snow. This should put us well above the April 1 average, which date is around the usual maximum snowpack, with 7 more weeks to go. Even without additional snow, this will be a good year. If we get a few more storms between now and April 1, it could be a very good year.

And I will predict, with high confidence, that, over the next 10 years, we’ll have one or two or maybe even 3 years well below average. Because, lacking a cause to change it, that’s been the pattern for centuries.

Just as the climate researchers mentioned in the previous post were disappointed Nature failed to comply with their models, the panic-mongering of the Times 3.5 years ago has also proven inaccurate. In both cases, without even looking it up, we know what kind of answer we will be given: this is an inexplicable aberration! It will get hotter and drier! Eventually! Or it won’t, for reasons, none of which shall entail admitting our models are wrong.

It’s a truism in weather forecasting that simply predicting tomorrow’s weather will be a lot like today’s is a really accurate method. If those researchers from the last post and the Times had simply looked at their own data and predicted future snowpacks would be a lot like past ones, they’d have been pretty accurate, too.

Still waiting for the next mega-storm season, like 1861-1862. I should hope it never happens, as it would wipe out much of California’s water infrastructure and flood out millions of people. But, if it’s going to happen anyway, I’d just as soon get to see it. Or is that too morbid?

Great Flood of 1862. Via Wikipedia.

Feser and the Galileo Trap

Galileo showing the Doge of Venice how to use a telescope.

Edward Feser here tackles the irrationality on daily display via the Covington Catholic affair, and references a more detailed description of skepticism gone crazy:

As I have argued elsewhere, the attraction of political narratives that posit vast unseen conspiracies derives in part from the general tendency in modern intellectual life reflexively to suppose that “nothing is as it seems,” that reality is radically different from or even contrary to what common sense supposes it to be. This is a misinterpretation and overgeneralization of certain cases in the history of modern science where common sense turned out to be wrong, and when applied to moral and social issues it yields variations on the “hermeneutics of suspicion” associated with thinkers like Nietzsche and Marx.

Readers of this blog may recognize in Feser’s discussion above what I refer to as the Galileo Trap: the tendency, or perhaps pathology, that rejects all common experience in order to embrace complex, difficult explanations that contradict it. In Galileo’s case, it happens that all common experience tells you the world is stationary. It sure does not look or feel like we are moving at all. That the planet “really” is spinning at 1,000 miles an hour and whipping through space even faster proves, somehow, that all those gullible rubes relying on their lying eyes are wrong! Similar situations arise with relativity and motion in general, where the accepted science does not square with simple understanding based on common experience.

Historically, science sometimes presents explanations that, by accurately accommodating more esoteric observations, make common observations much more complicated to understand. Galileo notably failed to explain how life on the surface of a spinning globe spiraling through space could appear so bucolic. By offering a more elegant explanation of the motion of other planets, he made understanding the apparent and easily observed immobility of this one something requiring a complex account. But Galileo proved to be (more or less) correct; over the course of the next couple centuries, theories were developed and accepted that accounted for the apparent discrepancies between common appearance and reality.

We see an arrow arc through the air, slow, and fall; we see a feather fall more slowly than a rock. Somehow, we think Aristotle was stupid for failing to discover and apply Newton’s laws. While those laws wonderfully explain the extraordinarily hard-to-see motion of the planets, they also require the introduction of a number of other factors to explain a falling leaf you can see out the kitchen window.

Thus, because in a few critical areas of hard science – or, as we say here, simply science – useful, elegant, and more general explanations sometimes make common experiences harder to understand, it has become common to believe it is a feature of the universe that what’s *really* going on contradicts any simple understanding. Rather than the default position being ‘stick with the simple explanation unless forced by evidence to move off it,’ the general attitude seems to be that the real explanation is always hidden and contradicts appearances. This boils down to the belief that we cannot trust any common, simple, direct explanations. We cannot trust tradition or authority, which tend to formulate and pass on common-sense explanations, even and especially in science!

Such pessimism, as Feser calls it, is bad enough in science. It is the disaster he describes in politics and culture. Simply, it matters if you expect hidden, subtle explanations and reject common experience. You become an easy mark for conspiracy theories.

I’ve commented here on how Hegel classifies the world into enlightened people who agree with him and the ignorant, unwashed masses who don’t. He establishes, in other words, a cool kids’ club. Oh sure, some of the little people need logic and math and other such crutches, but the pure speculative philosophers epitomized by Hegel have transcended such weakness. Marx and Freud make effusive and near-exclusive use of this approach as well. Today’s ‘woke’ population is this same idea mass-produced for general consumption.

Since at least Luther in the West, the rhetorical tool of accusing your opponent of being unenlightened, evil or both in lieu of addressing the argument itself has come to dominate public discourse.

A clue to the real attraction of conspiracy theories, I would suggest, lies in the rhetoric of theorists themselves, which is filled with self-congratulatory descriptions of those who accept such theories as “willing to think,” “educated,” “independent-minded,” and so forth, and with invective against the “uninformed” and “unthinking” “sheeple” who “blindly follow authority.” The world of the conspiracy theorist is Manichean: either you are intelligent, well-informed, and honest, and therefore question all authority and received opinion; or you accept what popular opinion or an authority says and therefore must be stupid, dishonest, and ignorant. There is no third option.

Feser traces the roots:

Crude as this dichotomy is, anyone familiar with the intellectual and cultural history of the last several hundred years might hear in it at least an echo of the rhetoric of the Enlightenment, and of much of the philosophical and political thought that has followed in its wake. The core of the Enlightenment narrative – you might call it the “official story” – is that the Western world languished for centuries in a superstitious and authoritarian darkness, in thrall to a corrupt and power-hungry Church which stifled free inquiry. Then came Science, whose brave practitioners “spoke truth to power,” liberating us from the dead hand of ecclesiastical authority and exposing the falsity of its outmoded dogmas. Ever since, all has been progress, freedom, smiles and good cheer.

If being enlightened, having raised one’s consciousness, or being woke meant anything positive, it would mean coming to grips with the appalling stupidity of the “official story”. It’s also amusing that science itself is now under attack: it’s a social construct of the hegemony, used to oppress us, you see. Thus the snake eats its tail: this radical skepticism owes its appeal to the rare valid cases where science showed common experience to be misleading, and yet now it attacks the very science which is its only non-neurotic basis.

Science! Strikes Again: Saving the Theory from the Data

An amusing headline: For 35 years, the Pacific Ocean has largely spared West’s mountain snow from effects of global warming. A “study” by “scientists” is used to explain what, in a saner world, might simply be stated as follows: “Western mountain snowpack shows no evidence of global warming over the last 35 years.”

In the article, we learn that models predict that snowpack in Washington’s portion of the Cascade Range, for example, should have fallen by 2% to 44% over the last 35 years, but in fact it has shown no significant decline. Now a crass, narrow-minded person, clearly not in the cool kids’ club, might leap to the conclusion that the data here contradict the model, and therefore – you’re sitting down, right? – the model is wrong. The whole purpose of, and the entire source of validation for, a model is its predictions. You build a model hoping to capture some aspect of the real world. You use this model to make concrete, measurable predictions that can be checked against the real world, to see if your model is useful. If the facts don’t match the predictions, you throw out the model and start over. This is called science.
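
The whole validation loop fits in a few lines. A schematic sketch (the 2%-44% range is from the article; treating “no significant decline” as an observed 0% is my simplification):

```python
# Schematic of the validation loop described above, not any actual
# climate model: compare the predicted range against the observation.
predicted_decline = (0.02, 0.44)  # model: snowpack should have fallen 2%-44%
observed_decline = 0.0            # measurement: no significant decline

low, high = predicted_decline
if low <= observed_decline <= high:
    print("observation falls inside the predicted range: model survives, for now")
else:
    print("prediction falsified: throw out the model and start over")
```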

Here, instead, the study invokes a cause not in the model. We know this cause was not in the model since, if it were in the model, the model would have presumably produced useful predictions.

“There were a lot of discussions within the department of the surprising stability of the western U.S. snowpack, because it went against the predictions,” said co-author Cristian Proistosescu, a postdoctoral researcher at the UW’s Joint Institute for the Study of the Atmosphere and Ocean.

The discussion did not, evidently, include the obvious conclusion required by basic science: our model is wrong. Nope, this inescapable conclusion is masked behind an appeal to additional causes. Natural variations in the Pacific Ocean kept the snowpack stable, it is asserted.

Stop right here: if your model needs to appeal to factors outside itself, factors not built into the model, your model is wrong. Call it incomplete if you want, but the short English word for the state where the model does not provide useful, validated predictions is ‘wrong’. Throw it out. Build a new model that includes the newly-discovered (!) causes, if you want, make some more predictions, and see what happens. But clinging to a model that’s been proven wrong by real-world data is pathetic, and patently anti-science.

It’s not just the Western U.S. mountains that fail to validate those models. It’s not like the hundreds of different climate models floating around have some sort of sterling track record otherwise, so that we’d lose predictive power if we just tossed them all. No, they all predict that the earth would be much warmer now than it actually is. The Arctic would be ice free by 2000 2013 2016 2050. (Pro-tip: always make your predictions take place out beyond your funding cycle, to mitigate the slim chance people will remember you made them by the time the next grant proposal needs filing.)

A slightly – very slightly – more subtle point: we all know there’s such a thing as ‘natural variations’ in all sorts of areas. In practice, especially when building models, natural variations are nothing more than a collective name for causes we don’t understand well enough to build into the model. Even admitting the existence of natural variations that affect the thing being modeled that are not included in the model is to admit the model is at best incomplete.

One might leave out potential causes on the assumption that, while they might theoretically affect predictions, in practice they are not material. When we say acceleration under gravity at the earth’s surface is 32 ft/sec^2, we leave out air resistance (and air pressure variations, and humidity, and no doubt a bunch of other things) because that formula has proven useful quite a bit of the time. Only in very fussy situations do we need something else, as long as we’re testing near the earth’s surface.
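
To put rough numbers on that, here’s a back-of-the-envelope sketch (my own illustration; the terminal velocities are made up): for a dense object falling a short distance, the vacuum formula is off by around a percent; for a feather, it’s off by an order of magnitude.

```python
import math

# Fall time from 100 ft, with and without air resistance. Drag is modeled
# crudely as a = g * (1 - (v/v_t)^2) with a made-up terminal velocity v_t,
# integrated in small time steps.
G = 32.0        # ft/sec^2, the textbook figure from the post
HEIGHT = 100.0  # ft

def fall_time_with_drag(v_terminal, dt=1e-4):
    y = v = t = 0.0
    while y < HEIGHT:
        v += G * (1 - (v / v_terminal) ** 2) * dt
        y += v * dt
        t += dt
    return t

print(f"vacuum: {math.sqrt(2 * HEIGHT / G):.2f} sec")
print(f"rock    (v_t ~ 300 ft/sec): {fall_time_with_drag(300.0):.2f} sec")
print(f"feather (v_t ~ 5 ft/sec):   {fall_time_with_drag(5.0):.2f} sec")
```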

We know we can ignore some complexities only because we used our model to predict outcomes, measured those outcomes and found them good enough. To admit there are natural variations that render a model’s predictions useless is to admit that the phenomenon being modeled is beyond our skill as modelers. No amount of statistical sleight of hand can make this go away.

Another issue is the baseline question: this study considers 35 years of data. With few exceptions, the mountains of the Western U.S. have been there, experiencing snowpack and natural variations, for at least several hundred thousand times that long. This data covers well under 0.001% of the potential dataset.
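
For scale (round numbers of my own choosing, not the study’s):

```python
# 35 years of data against, say, 20 million years of mountain weather;
# the 20 million is a loose stand-in for "several hundred thousand times
# that long". The exact figure hardly matters at this scale.
print(f"{35 / 20_000_000:.5%} of the potential record")  # about 0.0002%
```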

Well? Is that enough? Can we justify any conclusions drawn from such a tiny sample? Can we say with any confidence what ‘average’ or ‘normal’ conditions are for snowpack in these mountains? The natural variations we know about include ice ages, glaciers and glacial lakes. Precipitation levels almost certainly vary wildly over thousands, let alone millions, of years. On what basis should we conclude that the snowpack should stay the same, grow, shrink or do anything in particular over a given 35 year period?

Enough. The monotonous similarity of these sorts of “studies” in their steadfast resistance to apply even a little basic science or common sense to their analyses tires me.

Life, the Universe, and Everything: A User’s Guide

That title is a wee bit over the top. A bit. Here’s the real deal: I am the RCIA sponsor this year to a very bright young man, 16, who asks a lot of good questions and really seems to want to understand things. But he, alas, is a product of the schools, and therefore has systematically been denied any whiff of real education.

So, I thought to myself, I did, that maybe I could hook him on some basic logic and philosophy and steal him from the clutches of those who would dumb him down and control him. I could feed him just a bit of real, honest thought. Seemed like a plan. But given the realities of modern ‘education’, I should keep it real short.

A seriously furrowed brow. There just must be some serious thinking going on in there, right?

Here it is: a one page outline of Truth. What do you, my esteemed readers, think?

An Introduction to Truth, Facts, and Reasoning

Truth: A man is said to have the truth when his understanding corresponds to reality.

Necessary Truths: Those things which must be true if anything is true. Or, put another way, those things that must be true if you know anything at all about reality. Necessary truths do not depend on anything in particular you see, hear, feel, smell, etc., but rather must be true IF you see, hear, feel, smell or touch ANYTHING AT ALL.

The study of Necessary Truths is called metaphysics. (Today, the term metaphysics is applied to all sorts of stupid ideas, but this is what it means when used correctly.)

Necessary truths include:

  • An objective world exists. We call this world ‘reality’.
  • Truth exists. We can understand reality, at least some parts of it, at least a little.
  • The law of noncontradiction: A thing cannot both be and not be in the same way at the same time.
  • All the other rules of logic. We use those rules to understand the rest of reality, but the rest of reality doesn’t help us in any way to understand those rules.
  • The rules of math. Same as the rules of logic.

Conditional or Contingent Truth: Truth that depends on conditions or assumptions. Conditional truths all take for granted the necessary truths. You can’t have any conditional truths without the necessary truths.

Conditional truths are very important. Almost everything we know is a conditional truth.

Facts: Units of conditional truth created when the necessarily true rules of reasoning are validly applied to observations.

Conditional truths include:

  • All science. All science begins with observations and measurements, which are conditional because we can get them wrong. Science applies the rules of logic and math, which are necessarily true, to those observations and measurements to create scientific facts.
  • All theology. Because it includes revelation and observation!
  • All philosophy besides metaphysics.

Informed Opinion: A kind of conditional knowledge that has not been thought through completely, such as what a good craftsman knows about his craft. He hasn’t worked through all the logic or examined all the assumptions, but he ‘knows’ what works.