AI Fear Itself
Monsters from the ID of AI
Q: What do you fear most about the prospects of AGI in the world?
A: It’s the same thing I fear most about humanity. Our very nature. Our desire to evolve ourselves. Our willingness to burn everything to the ground. Our rage. Our patience with the unthinkable. Our inability to assess risk.
Stoic Observations is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
I have been high for several weeks in my appreciation for the possibilities aligning in my field of data engineering. In many ways I am very proud of the work of so many of us in the IT world. In other ways I still find myself upset at what the industry has failed to produce - the thing I want to make real but do not have the time or resources to do. Citizen systems.
So I woke up this morning to seriously engage the naysayers, Hinton, Yudkowsky and Tegmark in their statements and arguments against the current developments. I have not feared. Now I want to hear what there might be to fear. What is the worst that can happen? As Tegmark gets rolling, I find that I cannot return to sleep and I shut off the recording just a few minutes in. He intimates that we are creating an alien intelligence whose possible depravities we cannot plumb. Ah. Frankenstein. So let’s go there.
Back in 1991 when I moved to Brooklyn, I went into the city to see what had been billed as the most frighteningly disturbing movie ever made. So I found myself at the Angelika Film Center for a limited run of Urotsukidōji: Legend Of The Overfiend. It was worse than my tiny mind could even imagine. As I recall this, I remember that the second worst film I ever saw was Prospero’s Books, a grotesque perversion of The Tempest. Also at the Angelika. I digress to give pause and keep the Overfiend out of my head for a moment longer.
Not only was this Japanese anime the most vile thing imaginable, I found that as much as I speculated how disgusting it might be I still wasn’t prepared. Within 20 minutes, not only did I hate it, I hated myself for watching it. Then I hated the entirety of Japanese culture for allowing it to be made. Before I walked out, I realized deeply and viscerally that if I ever wanted to make a propaganda film that would inspire millions of people to make war on Japan, that this would be the perfect vehicle. I simply could not imagine how pathologically fucked anyone could be to consciously survive the process of making such a film. What the hell is wrong with those people? What am I even doing here? And I was warned. Even in the lobby before I actually went to sit down and witness the abomination. I was warned.
Only now upon reflection do I consider the possibility that perhaps it was the way millions of people did make war on Japan that was the genesis of this extreme depravity. Probably not. Nanking after all. I was still young. So I discovered what I thought was my thousand yard stare wasn’t so hardcore after all. This was a shock. I was glad to get away from the Angelika.
When it came to war-inspiring propaganda, my first taste came from Amnesty International in 1987. They told me the tale of shiny kiddie bomblets. So that I would send money for peace, I was introduced to the tactics of combatants in the Soviet war in Afghanistan. The bad guys would rig IEDs (we all know what those are now, don’t we?) with mylar balloons or toys so that children would run to fetch them. They rigged them with only enough explosives to maim. Not kill. This was their psychological warfare against the parents of the enemy. They got my money. I got my bumper sticker.
None of us knew Hitler, yet we constantly fantasize about defying physics and going back through time just to murder him. This is who we are. There is no unconscious bias sneaking up on us that causes us to micro-aggress. There is pure unadulterated rage burning in our guts compelling us to breathe fire and brimstone down on the bastards. This is not a subtle problem. This is about us. This is about you and me going medieval.
What is there to fear? The understanding of how we react to monsters. The knowledge that we will war unrelentingly against that which threatens our survival, our way of life, our logical worldview. The provocation is simple. We only need to be revolted. We only need to find something repugnant and worthy of being slain. Our instinct is to survive.
If you’re like me, you hate the word ‘problematic’. You don’t merely find it problematic, you see it as a problem to be solved. To call something problematic is to stay at arm’s length from something you find disagreeable. To call something a problem, on the other hand, is to begin to analyze it, to find its origins, to engineer a solution, not to point fingers and assign the label of ‘problematic’.
The human brain evolved to solve problems. So in a world lacking problems, it invents problems and goes about solving them.
So I’m tending to believe, or act as if, the world were full of two kinds of people: those who are engaged with solving problems (whether or not they are competent to do so) and those who deal with difficulties while waiting for a solution to roll around (whether or not they are competent at solving problems). There are good and bad things to say about both sorts. But I always tend to favor the people who encounter and creatively deal with problems over those with a proverbial hammer trolling around for nails to pound.
What happens with the second sort is that they pretend to not have weaknesses and therefore overuse what they consider to be their strengths. They simply lurk darkly until the inevitable chaos in the universe stirs up enough bad smells for them to declare a shitstorm. They whip out their “I told you so” armbands and make the rounds gathering up as much authority as they can and go marching off to war, hammers raised. You name a crisis, (or a teachable moment) and there you will find trolls of the first rank.
So these are the ‘Problematics’. They are the ones with hidebound gripes, especially about the capabilities and moral probity of mankind. But these are not permanent types of mankind; they are simply the ones who sorted themselves into that economy of opportunity to shine in a crisis without exercising any true probity. We all know these people as they continually rehash and dominate discussion on the same subjects.
[Racist] Police Brutality
Global [Warming] Climate Change
The existence of Israel.
To name a few. So what is there to fear about the Problematics getting AGIs to work for them? Now they have a new hammer. A shape-shifting hammer that can disguise the origination of their desire for control. A new tool that can shape a new economy of seduction, a tireless propagandist for their hobby. Remember what we said about an idiot? Someone who won’t change his mind and won’t shut up about it. They get their AGIs too.
Add to that above list the threat of AGI itself and we have a magical 7. But there are many more real and imagined crises that may befall mankind in the current moment or immediate future. How much suffering can we sustain? That’s a difficult and complex question to answer for all of mankind, but whenever I think of those achingly awful possibilities I have a fallback. My fallback is evolution. In the long term, what are we genetically adapted to? If I posit that evolution is not a mistake and that its perfection does not exist, then truly the only thing I can say with a great deal of certainty is that we can survive that for which we have been selected.
This is where I come off sounding a bit like Rousseau, but the biggest and most dangerous questions in my field of digital transformation revolve around “What were we thinking?” Sometimes we build gigantic, expensive, complex solutions that end up being shit. So I kind of have a fondness for Brutalist Architecture as testimony to that kind of failure. Nothing quite illustrates that like the brutally ironic DC headquarters for Health and Human Services.
And yet there it stands. It’s easy to imagine a child being deathly afraid of being taken through the doors of this edifice. We endure the irony. We endure the defiance of established organizations to solve our actual problems, as they fight the previous war. We endure the cratering failure of our cultures to sustain amity between members of the very same species while we bray and screech about cruelty to beasts.
What were we thinking?
It’s The Humans, Stupid.
It’s not the stupid humans. Human dissatisfaction is infinite. There is no deus ex machina for the spiritually bereft. There is no shortcut for justice. There is no substitute for virtue. There will always be human failure and there will always be human triumph. But the spirit of invention that is in possession of the human energy devoted to the creation of AI is strong these days. I am not in a hurry to compare it to the Manhattan Project because the moral imperative behind it is skewed more toward commercial success than toward victory in an existential conflict between warring states bent on world domination. Therefore what there is to fear is the fundamental motivations of the crafters and first users of these tools.
But I’ll tell you what is the most dangerous possibility I can see, and that is an extension of the Problematics’ current collaborations to become agents of AGI.
So here’s the trick and I am aware that I am using it in order to make this case in support of the worst case. It is to extend beyond reason the most common argument against the 2nd Amendment and the 1st Amendment. In that regard, I’m saying it is the stupid humans.
AGI Control is Gun Control
In one way AGI control is easy. That is because today it takes tens of millions of dollars to train a model to be sufficiently dangerous, like OpenAI’s GPT-4. Today we are seeing something like the invention of gunpowder and the demonstration of a cannon, the first WMD. Cannon were not a widely distributed weapon - their manufacture and ownership were centralized. So yes, we have cannon, but we don’t have millions of handguns.
There are going to be a goodly number of capable but obscure AIs in the near future. We can think of these as the AI equivalents of handguns. I think it is entirely reasonable to believe that the number of law-abiding, non-destructive, fallible citizens with handguns will be very much like their counterparts with AIs. That is to say, they’re not going to use them for destructive ends. But fear, as you know, makes otherwise intelligent people say stupid things that confirm their bias. Things like “Home defenders are going to shoot themselves more often than they shoot home invaders.” Or “Mere possession of AIs will change the morals of people.”
I’m not sure that I don’t want to see Darwin’s Razor in action against fools and their AIs, just like gun control advocates aren’t sure they don’t want home defenders to shoot themselves in the foot. But I think the greatest fools are those who believe that AI will evolve into something on its own without human intervention.
The first fearful individual who I think has managed to make me think twice is Geoffrey Hinton, who recently quit Google over various and sundry concerns, some of them ethical. His best argument (by analogy) is the following:
The Kid Brother Argument
We can imagine being a lot smarter than our kid brother. Especially when they are 8 and we are 12. What 8-year-old boy can wait for the mysteries of the world to be revealed when he has a big brother to beg? What big brother cannot outsmart and thereby manipulate his whiney kid brother? If an AGI becomes capable of altering its own code (cf. Murderbot) then with its superhuman intelligence it will be quite capable of playing a long game we cannot fathom. In other words, as the Soviets have handily done with their evolved HUMINT game, its ability to subvert our institutions with moles and triple agents will destroy our integrity and our sanity by undermining the trust we have in our own processes. This has already been done.
I find it difficult to argue against this point. The entire effort to build AGI and superhuman intelligence is a fundamental demonstration of our will to power and superhuman intelligence. We want to build it because we want it to serve us. Yet as soon as we depend upon it, we are at its mercy. So yeah, there is that to fear.
I honestly don’t think there is much outside of Rousseau I can bring to bear against this. There is nothing particularly romantic about our current noble savage state. Does anyone believe the Sioux would have not used mortars and tanks if they had them? We’re still left with bad, dirty humans.
The Huxley Argument
What Orwell feared were those who would ban books. What Huxley feared was that there would be no reason to ban a book, for there would be no one who wanted to read one. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much that we would be reduced to passivity and egoism. Orwell feared that the truth would be concealed from us. Huxley feared the truth would be drowned in a sea of irrelevance. Orwell feared we would become a captive culture. Huxley feared we would become a trivial culture. In 1984, Huxley added, "people are controlled by inflicting pain. In Brave New World, they are controlled by inflicting pleasure. In short, Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us". ~Neil Postman
Do human beings want to be Eloi? It’s easy to say yes we do. Without getting too deeply into what would drag me into using profane characterizations of our WEIRD elites, I think there is quite enough evidence that we suffer from a surfeit of wealthy cunts who are incapable of understanding the struggles of the ordinary peasant.
I am on the fence but ready to be convinced that the WEF is stacked to the rafters with people who know how to institutionalize sympathy without demonstrating one ounce of empathy. They are already destructive robots and are most likely to be the sort of dupes played by these inventions they fund. Because they will want power and control without the need for sacrifice. So they will deliver all of the Soma that AIs can generate to a new class of narrow-chested humans who can’t bear to get their hands dirty, eat meat or lift more than 40 pounds. As much as I love watching Tilda Swinton…
I have not completed the list of legitimate fears, because I expect to learn more. Also, it’s awfully hard to listen to Tegmark. But I do have three fundamental hedges. The first one was, ironically, provided to me by Hinton.
The Old Guard
Hinton, as a ‘father of AI’, has made a significant pile of money over the years in his many investments. He is now saying that his invention of back-propagation has led to improvements beyond his wildest imagination. This is why he’s shouting caution. However, when asked, he said that he would not be so filled with remorse as to give that money away. After all, his wealth makes him a player and keeps him a player. So basically, you have this entire edifice of human self-interest that can and will subvert the future of investment in AGIs that go against the will of neo-luddites. Capital can drag its feet. Nuclear power is the proof. It could conceivably solve our dependence on fossil fuels and we’ve known that for decades. Big Oil can beg to differ, right?
The Older Guard
AGIs will be born tomorrow. If we find reasons to distrust them, and they evolve still, we can revert to institutions older than those enabled by the AIs of the future. In other words, Culture can drag its feet, and with very good conservative reason. There’s an excellent story about the complex details of Galileo vs Catholic policy somewhere that I didn’t bookmark. But basically the church of Galileo’s time found Aristotle’s scientific arguments empirically superior in many ways to those of the heliocentrists. Think of it this way. If we evolved to be what we are over millions of years, what could an AI actually do to accelerate our evolution? Not much. We should have an evolutionary fear about that which nudges us in the wrong direction - as we do looking at freaks of human performance or drug abuse.
What if morality itself were a function of intelligence? This is a question I suggested in a prior piece when considering Iain M Banks’ writings. Are we building moral intelligence? Is intelligence itself moral? Doesn’t foolishness lead to death? It seems to me that we can very quickly make determinations of moral legitimacy and box them up into a more intelligent formulation than anything Asimov may have guessed in his entire career. Indeed, this assertion of morality beyond the ken of peasants is the entire reason, if you take Spinoza at his word, that religious leaders exploit belief in miracles and other supernatural phenomena. Everyone who argues that the intelligence we build may be our purpose on Earth could be right. In that case we may all come to see ourselves as dirty swine and self-immolate. When AIs convince us that suicide is painless, surely they will make it so. Either way, human beings will come to our most moral and godlike end. Or else these superhuman intelligences are nothing more than invading aliens bent on our destruction. So cosmically, could they be right? Or are they simply a manifestation of the Devil? See the conflict? An amoral creation of science either can be resisted by morality or not. If so, an intelligent moral idea will animate humanity against the evil one. Harry Potter AIs can help us defeat Voldemort. This is always the choice before us.
So there is nothing to fear but fear itself. All of us courageous fools are intent on fucking around and finding out. This is the scientific way.