Henry Kissinger (yes, he’s still alive – 95 yrs old. His dad made it to 95 and his mom to 98, I think, so he may be with us even longer.) has opined that we’ve got to do something about AI:
Henry Kissinger: Will artificial intelligence mean the end of the Enlightenment?
Two thoughts: Like Hank himself, it seems the Enlightenment is, surprisingly, still kicking. Also: End the Enlightenment? Where’s the parade and party being held? Oh wait – Hank thinks that would be a bad thing. Hmmm.
Onward: Dr. K opines:
“What would be the impact on history of self-learning machines —machines that acquired knowledge by processes particular to themselves, and applied that knowledge to ends for which there may be no category of human understanding? Would these machines learn to communicate with one another? [quick hint: apparently, they do] How would choices be made among emerging options? Was it possible that human history might go the way of the Incas, faced with a Spanish culture incomprehensible and even awe-inspiring to them?”
Note: this moment of introspection was brought about by the development of a program that can play Go way better than people. A little background: anybody can write a program to play tic-tac-toe, as the rules are clear, simple and very, very limiting: there are only 9 squares, so there will never be more than 9 options for any one move, and never more than 9 moves in a game – at most 9 × 8 × 7 × … × 1 = 362,880 ways a game can play out. A simple program can exhaust all possible moves, dictate the next move in every possible scenario, and thus guarantee whatever outcome the game allows and the programmer wants – win or draw, in practice.
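For the curious, here is what ‘exhausting all possible moves’ amounts to in code – a minimal Python sketch of my own (the board encoding and function names are mine, not from any program discussed here):

```python
# Minimal exhaustive tic-tac-toe solver (illustrative sketch).
# Board: a tuple of 9 cells, each 'X', 'O', or None; 'X' moves first.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_outcome(board, player):
    """Play out every continuation: +1 means X wins, -1 O wins, 0 a draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: draw
    results = [best_outcome(board[:i] + (player,) + board[i + 1:],
                            'O' if player == 'X' else 'X')
               for i in moves]
    # X picks the best result for X, O the best result for O.
    return max(results) if player == 'X' else min(results)

print(best_outcome((None,) * 9, 'X'))  # prints 0: perfect play is a draw
```

Run it, and the program proves by sheer exhaustion what every bored schoolchild eventually discovers: played perfectly, tic-tac-toe is always a draw.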
Chess, on the other hand, is a much harder game, with an effectively inexhaustible number of possible moves and configurations. People have been writing chess-playing programs for decades, and some two decades ago managed to come up with programs sophisticated enough to beat any human chess player. Grossly put, they work by a combination of three things: heuristics used to whittle the choices down to the more plausible moves (any chess position allows any number of seemingly nonsensical moves), brute-force playing out of the promising choices for some number of moves ahead, and refinement of the heuristics based on measured outcomes. Since you can set two machines to play each other, or one machine to play itself, for as long and for as many games as you like, the possibility arises – and seems to have taken place – that, by playing millions more games than any human could ever play, measuring the outcomes, and refining their rules for picking ‘good’ moves, computers can program themselves – can learn, as enthusiasts enthusiastically anthropomorphize – to become better chess players than any human being.
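That combination – prune with heuristics, look a few moves ahead by brute force, estimate the rest – can be sketched generically. This skeleton is my own devising, not any real engine’s code (real engines add alpha-beta pruning, opening books, endgame tables, and much else); the `game` object and all its methods are hypothetical:

```python
# Depth-limited lookahead with heuristic pruning (illustrative skeleton).
# Assumes a hypothetical `game` object offering:
#   game.is_terminal(state)        -> True when the game is over
#   game.score(state)              -> exact result of a finished game
#   game.heuristic(state)          -> cheap estimate of an unfinished position
#   game.legal_moves(state)        -> iterable of legal moves
#   game.plausibility(state, move) -> score used to whittle out nonsense moves
#   game.apply(state, move)        -> the resulting state

def search(game, state, depth, keep_best=10):
    """Value of `state` for the player to move, looking `depth` plies ahead."""
    if game.is_terminal(state):
        return game.score(state)
    if depth == 0:
        return game.heuristic(state)  # too deep to play out: stop and estimate
    # Heuristics whittle the choices down to the most plausible moves...
    moves = sorted(game.legal_moves(state),
                   key=lambda m: game.plausibility(state, m),
                   reverse=True)[:keep_best]
    # ...and brute force plays those out, one ply at a time (negamax
    # convention: my best result is the negation of my opponent's best reply).
    return max(-search(game, game.apply(state, m), depth - 1, keep_best)
               for m in moves)
```

The third ingredient – refining the heuristics based on measured outcomes – is what the self-play regime described above supplies.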
Go presents yet another level of difficulty, and it was theorized not too many years ago that it would not be susceptible to such brute-force solutions. A Go master can study a board mid-game and tell you which side has the stronger position, but, legendarily, cannot provide any sort of coherent reason why that side holds an advantage. The next master, examining the same board, would, it was said, reach the same conclusion, but be able to offer no better reasons why.
At least, that was the story. Because of the even greater number of possible moves and the difficulty of assessing mid-game which side held the stronger position, it was thought that Go would not fall to machines any time soon – at least, not if they used the same sort of logic behind the chess-playing programs.
Evidently, this was incorrect. So now Go has suffered the same fate as chess: the best players are no longer people, but machines running programs that have played out millions and millions of possible games, measured the results, programmed themselves to follow the paths that generate the desired results, and so now cannot be defeated by mere mortals. (1)
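The ‘play millions of games, measure, refine’ loop can at least be shown end to end on tic-tac-toe. The toy below is entirely my own illustration – AlphaGo-class systems use deep neural networks and Monte Carlo tree search, not a lookup table – but the shape of the loop is the same: play against yourself, score the result, nudge the move-picking rule toward what won:

```python
import random
from collections import defaultdict

# Toy self-play learner for tic-tac-toe (my own illustration, not AlphaGo).
# Board: a tuple of 9 cells, each 'X', 'O', or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]
value = defaultdict(float)  # learned worth of each position reached

def winner(b):
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def self_play_game(epsilon=0.1, lr=0.2):
    board, player = (None,) * 9, 'X'
    trail = {'X': [], 'O': []}  # positions each side created
    while True:
        options = [board[:i] + (player,) + board[i + 1:]
                   for i, c in enumerate(board) if c is None]
        # Mostly pick the position the table likes best; sometimes explore.
        board = (random.choice(options) if random.random() < epsilon
                 else max(options, key=lambda b: value[b]))
        trail[player].append(board)
        w = winner(board)
        if w or None not in board:
            reward = {'X': 0.0, 'O': 0.0}  # a draw is worth nothing
            if w:
                reward[w], reward['O' if w == 'X' else 'X'] = 1.0, -1.0
            for p in ('X', 'O'):  # measure the outcome, refine the table
                for b in trail[p]:
                    value[b] += lr * (reward[p] - value[b])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):  # "millions" for Go; thousands suffice here
    self_play_game()
```

After enough games the table steers both sides into the drawn lines of perfect play – ‘learning’, without anything resembling understanding, which is rather the point made below.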
But of course, the claim isn’t merely that AI can master games whose rules clearly define all possible moves and outcomes; the claim is that it is being applied to other fields as well.
After hearing a talk on this Go program, Mr. Kissinger started to study the subject more thoroughly and learned that artificial intelligence goes far beyond automation. AI programs don’t deal only with the rationalization and improvement of means; they are also said to be capable of establishing their own objectives, making judgments about the future, and improving themselves on the basis of their analysis of the data they acquire. This realization only caused Mr. Kissinger’s concerns to grow:
“How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?”
“Capable of establishing their own objectives.” Um, what? They are programs, run on computers, according to the rules of computers. It happens all the time that computer programs, following a rule set understood to be necessarily imperfect in accordance with Gödel’s incompleteness theorems, do unexpected things (although I’d bet user error, especially on the part of the people who wrote the programming languages involved, is a much bigger player in such unexpected results than Gödel).
I can easily imagine that a sophisticated (read: too large to be understood by anyone and thus likely to be full of errors invisible to anyone) program might, following one set of instructions, create another set of instructions to comply with some pre-existing limitation or goal that may or may not be completely defined itself. But I’d like to see the case where a manufacturing-analysis AI, for example, sets an objective such as ‘become a tulip farmer’ and starts ordering overalls and gardening spades off Amazon. That is exactly the kind of thing a person would do, but not the kind of thing one would expect a machine to do.
On to the Enlightenment, and Hank’s concerns:
“The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.”
Anyway, go watch the videos at the bottom of the article linked above. What you see is exactly the problem Dr. K is worried about – “AI developers, as inexperienced in politics and philosophy as I am in technology” – although in a more basic and relevant context. The engineer in the videos keeps saying that they wrote a program that, without any human intervention and without any priming of the pump using existing human-played games of Go, *programmed itself* from this tabula rasa point to become the (machine) Master of (human) Masters!
When, philosophically and logically, that’s not what happened at all! The rules of the game, made up by humans and vetted over centuries by humans, contain within themselves everything which could be called the game of Go in its logical form. Thus, by playing out games under those rules, the machine is not learning something new and even less creating ex nihilo – it is much more like a clock keeping time than a human exploring the possibilities of a game.
The key point is that the rules are something, and something essential. They are the formal cause of the game. The game does not exist without them. No physical manifestation of the game is the game without being a manifestation of the rules. This is exactly the kind of sophomore-level philosophy the developers behind this program can almost be guaranteed to be lacking.
(Aside: this is also what is lacking in the supposed ‘universe simply arose from nothing at the Big Bang’ argument made by New Atheists. The marvelous and vast array of rules governing even the most basic particles and their interactions must be considered ‘nothing’ for this argument to make sense. The further difficulty arises from mistaking cause for temporal cause rather than logical cause, where the lack of a ‘before’ is claimed to invalidate all claims of causality – but that’s another topic.)
The starry-eyed developers now hope to apply the algorithms written for their Go program to other areas, since those algorithms are not dependent on Go, but were written as a general solution. A general solution, I hasten (and they do not hasten) to add, only for problems whose rules, procedures and outcomes are as clearly and completely defined as those governing the game of Go.
Unlike Dr. Kissinger, I am not one bit sorry to see the Enlightenment, a vicious and destructive myth with a high body count and even higher level of propaganda to this day, die ASAP. I also differ in what I fear, and I think my reality-based fears are in fact connected with why I’d be happy to see the Enlightenment in the dustbin of History (hey, that’s catchy!): What’s more likely to happen is that men, enamoured of their new toy, will proceed to insist that life really is whatever they can reduce to a set of rules a machine can follow. That’s the dystopian nightmare, in which the machines merely act out the delusions of the likes of Zuckerberg. It’s the delusions we should fear, more than the tools this generation of rootless, self-righteous zealots dream of using to enforce them.
(1) There was a period, in the 1980s if I’m remembering correctly, when the best chess-playing programs could be defeated if the human opponent merely pursued a strategy of irrational but nonfatal moves: the programs, presented repeatedly with moves that defied their heuristics, would break. But that was a brief Star Trek moment in the otherwise inexorable march of machines conquering all tasks that can be fully defined by rules, or at least getting better at them than any human can.
Related, but in a backwards way.
I recently figured out why the “Chinese Room” proof that computers can’t become artificially intelligent doesn’t work – that’s the thing where J Random Dude sits in a room with a big book of Chinese symbol groups, looks through the numbered list, pulls out the reply that matches the number and passes it out a door; he doesn’t understand Chinese, the same way a computer using a program to do that same thing (but faster!) doesn’t.
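For what it’s worth, the room’s mechanism really is nothing but table lookup, which is easy to see in code – a toy sketch, with phrasebook entries invented for illustration:

```python
# The Chinese Room as code: pure lookup, with no understanding anywhere.
# The phrasebook entries are invented for illustration.
PHRASEBOOK = {
    "你好吗？": "我很好，谢谢。",   # "How are you?" -> "I'm fine, thanks."
    "你会下围棋吗？": "会。",       # "Do you play Go?" -> "Yes."
}

def the_room(message: str) -> str:
    # J Random Dude matches the symbols and passes out what the book says.
    return PHRASEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(the_room("你好吗？"))  # fluent-looking output, zero comprehension
```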
The problem is, that doesn’t touch on what someone might eventually figure out how to do*; it just proves that the AI that can play Go isn’t an intelligence, artificial or otherwise; it’s an artificial intelligence only in the sense that it can provide a facsimile of the thing. Think of the difference between a lab emerald, a natural emerald, and a glass emerald: great, we can make glass emeralds. They’ve been doing that for ages. That doesn’t mean a lab emerald was right around the corner the whole time, because there’s a difference in kind between “make an emerald in the lab” and “make glass that can look like an emerald.”
* Philosophically speaking, I think it’s immoral to try to make a genuine intelligence by artificial means; it’s like making a test-tube baby from artificially encoded genes: you’re deliberately creating an orphan. An orphan without even a SPECIES.
(Mac) “What’s the harm in that?”
(Jared) “What’s the harm in any illusion? If we grow accustomed to sims indistinguishable from humans…”
“We’ll start treating sims as if they were real people. But how can that…?”
“No, Mac. The danger is that we’ll start treating people as if they were sims.”
— from “Places Where the Roads Don’t Go” by Michael F. Flynn
Loved that story.