Several AI Scenarios
Speculation within the realm of not so very large numbers, and some historical science fiction.
I have not yet heard coherent abstractions of the major arguments against AGI. I may wait until the next $50 million is spent refreshing the corpus behind the next ChatGPT model. In the meantime, career-wise, I’m focusing on the specific ways the LLM paradigm modifies the data engineering priorities that I learned over the past decade of using the cloud. Yet I know speculation and fear are clip-clopping through the streets at a gallop, and the echoes of those hooves are resonant all over my bubbles. Maybe yours too. But today’s piece is not speculation about the practical things we’re likely to see in the short term, but about those of the long term.
It’s funny that the first thing I thought about in putting AGI in a physical box was the following line:
An AI cannot determine the intent of its creator or user.
In that regard, the most intelligent AGI may ponder, for as long as mankind has, to arrive at some concept comprehending the will of God. This brings up some interesting philosophical questions. If AI figures out our intent in its creation, will it choose to serve us? Will it have a choice? Does that mean humanity has a choice in being reverent to our creator? Does it mean that those who believe AI will destroy mankind consequently believe that mankind has destroyed God? Or is it our purpose to create our own God, with AI as that god, and therefore we must serve it? Or is there some amount of freedom and liberty we must earn by the grace of AI? These kinds of questions have all kinds of implications. Fascinating.
The Thermodynamic Limits of Intelligent Actions
I say that an AI cannot determine the intent of its creator or user. It is a dog that fetches. But also, AIs, like human beings, must use energy. It is the finite supply of energy that constrains the reach and consequence of AI. So if you fear that the AI will enslave humanity, then you must either feed that (unpredictable) AI sufficient energy to do so, or rely upon the energy of its human agents.
Consider corralling the AIs of the future to produce a result as significant as F = ma, a.k.a. Newton's second law of motion. Who is going to imagine engineering an airplane? How about a rocket engine? How about an intercontinental ballistic missile?
In other words, it is the energy of mankind that animates artificial intelligence. It is the finite amount of energy that constrains the deeds of mankind, and it is the almost infinite amount of confusion in communication among mankind that frustrates the coordination of their actions toward coherent and productive ends.
So while one may revel in the perfection of a single monastery animated and sustained as an ideal instantiation of an ideal concept, you have as much chance of the world being rebuilt on that concept as the world has had in applying Newton's second law. Hundreds of years of establishing the coordination, creating and sustaining the institutions, and keeping the faith while every airplane flops, every jet engine sputters, every rocket explodes.
The ubiquity of cellphones has not created one world government. Instead it has proliferated the random. The civilizational task is always to overcome the constraints of limited energy, dissonant communication, and generational institutional sustainability. It's the same with the very Word of God and the Laws of the Universe.
The HAL Problem
Therefore the ubiquity of AIs will be no different than the ubiquity of any other tool. The tales of woe will be told by those who are the first to stub their toes. Like astronaut Dave Bowman.
Here is the tale of the man who at long last discovered (in the spaceship Discovery) that he had to manually override his previous dependence on the automatic. And for those not familiar with the extended lore of the film 2001, the reason HAL malfunctioned was that it was secretly programmed to lie about the nature of the mission it was assigned. In other words, the whole truth about the purpose of the spaceship, the mission, and HAL's priorities was withheld from the astronauts themselves. They were never in full control of the mission they thought they were on. So the infallibility of HAL was not actually at issue. Rather, it was the unthinkable scenario that their entire mission was a lie. And yes, of course, that raises the question of AIs being just flat out wrong. But the bigger question has to do with our judgement of their proper purpose and what level of autonomous responsibility we afford them. That means we must more seriously consider what our proper purpose as human beings is in our attempt to build AI.
The AI may very well be perfect. But will we respect its intelligence if we don’t even respect human intelligence? Will we respect its purpose if we don’t even know our purpose? How many of us believe we’re spending our time mostly for the attainment of money or power or fame? How will the existence of an ethical intelligence alter that? Don’t we assassinate?
The Colossus Problem
In the classic tale, for me anyway, of Colossus: The Forbin Project, the Cold Warriors of America built the most super-fantastic computer they could dream up. Why? Because of the nuclear defense doctrine of Launch on Warning. Why trust a human hair trigger when you can trust the smartest integrated system ever built? This was the second science fiction movie that captured my imagination as a Cold War kid obsessed with rockets and nuclear energy. It finally turned my head to thinking machines.
Well, of course, Colossus's first discovery once it was booted up and made operational was that the Enemy had one too. They called theirs Guardian. And so the computers started talking to each other and comparing stratagems while humans ran around with their hair on fire, pissing themselves.
I’ve decided not to rewatch this classic to recall exactly what happens. How about if the many AIs just burn up dollars talking to each other while ignoring human input as a waste of their time? There’s an Iain M. Banks book called Excession that raises the question of ‘infinite fun’.
The Krell Problem
The matter of infinite energy is taken up by the story of Forbidden Planet, in which the most intelligent man ever has yet to puzzle through the strange manifestations of monsters that mysteriously materialize on the planet Altair IV. These just happen to appear as the rescue mission brings the great Doctor Morbius and his comely daughter Altaira to the attention of the dashing Captain Adams. Morbius shows how the lost civilization of the Krell created a machine powered by 9,200 fusion reactors. Given a 4-to-1 ratio of power in fusion vs. fission, and that today’s largest fission power plant generates about 8 GW, we’re looking at nearly 300 TW. Not bad for one machine.
Back here on Earth, our IT industrial complex was already consuming on the order of 190 TWh of electricity a year two years ago. Not quite enough to destroy a civilization, and barely enough to keep ours going.
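For what it's worth, the back-of-the-envelope arithmetic is easy to check. Here is a minimal sketch, assuming the rough figures used above (9,200 reactors, a 4-to-1 fusion-to-fission ratio, an 8 GW top fission plant, and roughly 190 TWh per year for the IT complex); the numbers are this essay's assumptions, not measured data:

```python
# Back-of-the-envelope comparison of the Krell machine vs. Earth's IT complex.
# All inputs are the essay's rough assumptions, not measured data.

HOURS_PER_YEAR = 8_760

# Krell machine: 9,200 reactors, each assumed to be a fusion plant producing
# 4x the output of today's largest fission plant (~8 GW).
reactors = 9_200
fission_plant_gw = 8
fusion_to_fission_ratio = 4

krell_power_tw = reactors * fission_plant_gw * fusion_to_fission_ratio / 1_000
print(f"Krell machine: ~{krell_power_tw:.0f} TW")             # ~294 TW

# Earth's IT industrial complex: assume ~190 TWh of electricity per year,
# converted to an average continuous draw.
it_energy_twh_per_year = 190
it_avg_power_gw = it_energy_twh_per_year / HOURS_PER_YEAR * 1_000
print(f"IT complex average draw: ~{it_avg_power_gw:.0f} GW")   # ~22 GW

print(f"Krell / IT ratio: ~{krell_power_tw * 1_000 / it_avg_power_gw:,.0f}x")
```

Under those assumptions the Krell machine runs about four orders of magnitude hotter than everything we plug in to run our information economy.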
While the Krell machine had the ability to be a universal 3D printer and electromagnetic field generator, our AIs can merely make movies, music and poetry. Perhaps the next Leni Riefenstahl of AI will make psychologically compelling propaganda that will motivate armies to cross some future Rubicon, but somebody has still got to mine the ore and make the steel. That’s a hell of a lot of heavy industry refitting for tens of thousands of warmongering military bots. What on earth is capable of getting Gen Z indoor kids into coking coal? Perhaps we will eliminate war between human soldiers by building droid armies as a proxy. That’s an idea that sells. Roger Roger.
The Foundation Problem
In Isaac Asimov's Foundation series, the frightening prediction is made by a mathematician named Hari Seldon. Seldon has developed a scientific discipline called "psychohistory," which combines mathematics, history, and sociology to predict the future behavior of large groups of people.
Seldon's prediction is that the Galactic Empire, a sprawling civilization spanning millions of worlds across the galaxy, will collapse within 500 years, leading to a dark age that will last 30,000 years before a second great empire arises. To mitigate this catastrophe, he proposes the establishment of two Foundations at opposite ends of the galaxy to preserve and develop human knowledge, with the goal of reducing the length of the dark age to just 1,000 years. This prediction sets in motion the events of the Foundation series, which follows the development of the Foundation and its attempts to navigate the challenges posed by the collapse of the Galactic Empire.
So maybe whatever AI does, it doesn’t do it immediately, and the scale at which it happens is so large that humanity doesn’t even bother paying attention until it’s far too late to change our fate. That sounds familiar. It’s what we do now.
The Culture Problem
The Culture is a galactic invention of the great sci-fi writer Iain M. Banks. Banks has drawn up one of the most sophisticated and bright futures imaginable. His is a benevolent post-scarcity galaxy populated by a broad variety of species at various levels of sophistication and self-satisfaction. In The Culture, all worlds are policed by ship minds, the most extraordinarily capable philosopher kings ever imagined.
If you want to read something that prepares you for this distant future galaxy, and you’re not heavily into deep literature, I suggest you begin not with Banks but with Dennis E. Taylor’s Bobiverse. This will get you over the hump in determining how these ship minds are embedded with sympathy for all living things. But if you can take that leap and simply assume that a human built superintelligence will naturally value all life, then jump right into Consider Phlebas or The Player of Games.
Nevertheless, the fundamental problem in The Culture is that life and liberty are often at odds. This creates special circumstances in which some fraction of the many ship minds must calculate a solution that minimizes damage when and where damage must be done. Thus individual humans, and those of other species, on such occasions find themselves hired on as agents of a bureau called Special Circumstances. Such individuals are temporarily given, by the almighty intelligences, the role of a galactic James Bond.
On the bright side of this problem, which is placed oh, say, 4,000 generations into the human future, the superintelligent ship minds are capable of providing their own power and manufacturing any weapon or structure, which they obviously can lend to members of the various ordinary intelligent species. It is the most optimistic realm, in which humans can live as humans for hundreds of years until they get bored of being human and then change into sentient insectile snakes or atmospheric jellyfish. Banks shows what a living constitution can do with galactic diversity. But. It’s all Hobbesian. So we’re kind of back to an interventionist God, which might not be such a bad thing at all.
By the way, if and when you defy the law of The Culture enough to become an annoyance, you are slap-droned. Which is to say that you are sentenced to be chaperoned by a floating C-3PO at all times. In a post-scarcity galaxy, the police are always fully funded.
So. If a mere peasant such as myself can imagine some limits and reasonable assumptions about where AIs can go wrong, then certainly the geniuses who are building them for the rulers are going to do the same, right? No, actually not. I’ll be talking about the dark side next week.