I only understand maybe 50% (and that may be optimistic) of the esteemed William Briggs’ latest post, but must share: Quantum Potency & Probability.
Here’s my take on the issue: I’ve heard most of my life about how, at a quantum level, reality is probabilistic. What this seems to mean to people propounding it is that reality, viewed on a fine enough level, is not governed by the laws of cause and effect, nor even by the law of noncontradiction. Things can come into being and pass out of being for no reason; and some things can truly be said to both be and not be at the same time in the same way.
To be fair, it’s not often put exactly like that, but it sometimes seems to be. (As is almost always the case, the better the scientist, the more careful they are about how they express themselves. Heisenberg was a great scientist, and so he was generally careful. His acolytes, and especially those who use him as a club with which to beat their enemies, not so much.) And to be honest, as mentioned above, it’s not like I understand the math or even the finer points of the experimentation that is claimed to lead to these assertions. What I do understand is that math is not reality, however useful and even indispensable math may be to our understanding and use of the world.
In his book on the philosophy of statistical analysis, Uncertainty: The Soul of Modeling, Probability and Statistics (which I still need to reread and review here! Time eats life, as some French dude once said), Dr. Briggs takes great care to distinguish between epistemology – how we understand things – and ontology – how things are. Applied mathematics belongs to the world of epistemology. I am reminded of a section of the Feynman lectures where he pauses, after having filled a couple of large blackboards with equations, to note that it sure took a lot of math to describe what was, essentially, a simple motion, and that nature, in doing what it does, certainly isn’t doing all that math.
And, for me, that is the point. Just because quanta are nigh impossible to see and measure and appear to behave in incomprehensible ways doesn’t mean that their states are not caused, nor that they are anything other than what they are regardless of what we are able to deduce about what they are. It is a radical and unnecessary step, and contradicts the minimalist approach embodied in Occam’s Razor, to assume a new principle: that there are classes of uncaused phenomena, not just phenomena the causes of which we don’t yet understand.
The discussion on Dr. Briggs’ blog is far more nuanced and deep than my feeble understanding. One part I do understand, and which is commonly discussed on this blog: Insofar as science actually advances, it follows Aristotle and not any of the post-1630 philosophers. (1) Hylomorphism – the understanding that any object in the real world that we can consider is made up of form and matter – is, of course, how science routinely understands the world, even if the terminology has been beaten out of it. Modern science desperately wants there to be material and efficient causes only, and so does its best to pretend that there are no formal or final causes. This results in the absurdity of saying, for example, that a bird’s wings are not *for* flying, that it is not possible to describe them in terms of how they are to be used.
Of course, nobody talks this way except when pushed to the wall. But our analytic philosopher comrades, living on the cutting edge of the early Enlightenment, must insist that we don’t know and can’t meaningfully talk about formal and final causes lest we fall into the trap of *gasp* metaphysics. Can’t have that. Can’t live without it, either, but that just makes them mad.
Anyway, the most fascinating idea:
Additionally, hylomorphism entails a gradual spectrum of material beings with greater degrees of potentiality to greater degrees of actuality. Something has greater actuality if it has more determinate form (or qualities) and something has higher potency if it is more indeterminate with respect to being more receptive to various forms. For example, a piece of clay has higher potency insofar as it is more malleable than a rock and thus more receptive to various forms. A rock can likewise be modified to receive various forms, but it requires a physical entity with greater actuality or power to do so because it has more determinate form as a solid object… [H]ylomorphism predicts that you will find higher levels of potency because you are getting closer to prime matter. This is precisely what we find in QM. The macroscopic world has more actuality, which is why we experience it as more definite or determinate, whereas the microscopic world has far less actuality, thereby creating far less determinate behavioral patterns.
Briggs is quoting Gil Sanders, “An Aristotelian Approach to Quantum Mechanics” (which I haven’t read yet, but will). My paraphrase: the higher up a thing is on the scale of being – the more ensouled, the more natural in the sense of having a fuller nature – the more primary is form. The lower one goes, the less primary is form. Thus I am a human animal, among the most natural objects in the universe, one in which, over my 60 years, pretty much all the matter in my body has been swapped out one or more times. Yet no one sane doubts that my form – human animal – has persisted through all those changes. Once we get down to barely perceptible objects, we are barely able to perceive their form at all – all we can see are the mysterious undulations of prime matter as various forms subsume it. And this is what an Aristotelian would expect: less or lower forms, less nature, less definition.
Mind blown. I’m going to need to think this over a lot.
(1) 1630, more or less, is the year Descartes retreated to his room, drew the curtains, contemplated his navel, and started producing the anti-Thomist philosophy that spawned all the crap since. I wouldn’t object to using 1517 as the real start date, but it’s Easter Week! We’re playing nice!