The New AI God Botherers
Neil Gaiman, anyone?
Last week I was supposed to be thinking about AI, and instead I was thinking with AI. My focus is both near and far. Today I will write the far-sighted view - thinking about machine intelligence rather than using it to get things done. This thinking about comes from deep within and from two artistic metaphors. First, what I really believe.
The Machine Has No Soul
So we are allowed to kill it. It is a flock of sheep. It is a field of corn. It is a tool shed and a factory, but it is complicated, fast and to the overwhelming majority of us, impenetrably inscrutable. People ask if it is conscious. It is not, but it has behaviors and we will invent customs around it, we practitioners. Yesterday I learned something about the Romeo flag at half mast. It means a Navy ship is preparing to be replenished while underway. Ships are named as females and they are complex beasts, as are steam locomotives. That doesn’t make them conscious. They have their ways, and if unattended doing what they do, they will inevitably do it wrong, sometimes in ways that are deadly for innocents and practitioners alike. The seas are littered with shipwrecks. It neither denies the justification for calling them ghost ships nor confirms that they actually are. That’s just the way of ships. They have no souls. They merely affect the souls of those who know them, use them, admire them and the world of change they bring to humanity. We no longer live in the Age of Sail, but we might do well to recall when we did. The parallels are absolutely consistent with the desires and ultimately the requirements of Western Civilization.
As such it is a fundamental error to worship the tool. But that is what’s going on.
The Machine Is Intelligent
Machine intelligence is a crafted attribute of the efforts of humans to make it so. It is reasonable to accept that it reasons. That is simply because human language encodes knowledge about the world. It comes therefore as no surprise that there are ways that accurate representations of the world can be expressed by machines. For heaven’s sake, we’ve been doing it with film for a century. AI can understand text. It can understand video, speech, still images, sound and the whole electromagnetic spectrum and haptic universe. It probably will not be able to taste for quite some time, but it can sniff out chemical combinations that neither dogs nor humans can perceive.
We will use this intelligence to make sense of things we don’t understand and make finer sense of things we do understand. We will engage it interactively as students and as teachers, and sometimes as horse trainers and horses. In short, it will share some fraction of our duties of intellect. Speaking for myself, I’m about 60/40 when it comes to coding and I haven’t the budget or skills to make it do things that take more than about an hour. I’m about 98/2 when it comes to writing, which is, I hope, the way you like it.
Knowledge Is Not Power
The corollary to this is that power is power, and the power in Western Civilization goes to those we call our gods. This gets complex but it’s important to get what I’m getting at, because while the other two axioms of my thinking are relatively straightforward this one is not. There’s stuff at length to say about this but I want to keep it as simple as possible.
A. Gods are socially powerful because of faith. The more faith you give a god, the more power and attribution it gets. By definition, that which is godlike is superior to humans, transcendently so, the way rockets are to walking: beyond any capability we would possibly have in nature, what bazookas are to throwing rocks.
B. All systems, machines, thoughts and philosophies are constrained in their totality by Gödel’s Incompleteness. Any system of knowledge can approach completeness at the cost of consistency, or approach consistency at the cost of completeness. That gap is where all of the catastrophic bugs and gremlins sneak in and destroy it. You can control the world but only inconsistently. Conversely you can only perfect parts of the world with consistency. However, you can have faith in an incomplete or inconsistent system. This leap of faith is the practical definition of accepting a system in its totality. Naturally, this is the nature of totalitarianism.
Where I’m going with this is straight to the heart of superintelligence which is one of the engines pulling the hype train to our visions of utopia. You have to have faith in the machine. You have to pay tribute to the system. You have to make sacrifices and take loyalty oaths and pledge allegiance. This is the Way.
The Machine Has to Eat
And consequently we have to feed it. It therefore falls, like every other system that outproduces humans at anything, into the real world of scarcity and constraint. I’m also implying with Point A above that we not only have to feed it, we have to heed it. What good is a library in the middle of a corn field or sheep meadow if it has no books about corn or sheep? What good is the rule of law, settled for centuries, when people decide to disobey it? What good is intelligence, by the same token? We could spend years watching Hanna-Barbera cartoons in the film section of the Library of Congress and ignore all the rest of the intelligence. We could ignore intelligence altogether and simply rely on faith, rumor, innuendo, hearsay, superstition or our feelings in the moment. Don’t we?
The machine needs programmers, and it needs prompting because it is not self-motivated. It might very well randomly pick a purpose, and some LMs will be built specifically to do just that. Recall that the invention is already out there. We keep talking about the fables of the frontier models, but there are other off-brands and also-rans that eat less, are fed less, are less intelligent and have even less soul than the pretense of personality. We are in the garage band era of the new music for teeny-boppers.
Our Capacity for Error
Everybody knows that toddlers at the age of two understand the meaning of the word no, and disregard that meaning altogether. They require constant surveillance. So too will all manner of language models and their agentic enabling software & controls. The OpenClaw teeny-boppers have already volunteered their lives away and some will tell tales of incredible revenue streams and others have carpet bombed their families by spam machines of their own inadvertent creation. I can’t tell you what the equivalent of AI butt dialing will be called, but we’ll have a term in short order. There inevitably will be correct ways of organizing and harnessing artificial intelligence. There inevitably will be errors and exploits. Shit happens.
Speed Kills
At last here is the thing I believe. We in the West are particularly interested in improvement within the context of our societies and civilization. We are externally focused and it’s difficult to tell us that Citius Altius Fortius isn’t the path to excellence. Most of us accept that we are in a clash of civilizations against China to become first in the race for AI Supremacy. In that sort of contest there is something we are forgetting which is rather obvious to me.
In all of language model processing, most significantly when it comes to pricing its usage, it’s tokens in and tokens out. The more you use, the more you pay. From my way of seeing things we have hype primarily because if our best models were performing at around 20 tokens per second, nobody would be impressed. Nobody would be investing millions. But at 80-150 tokens per second, this approaches the response time of human conversation. I get a paragraph back faster than it took me to write the question. We even have a metric called TTFT, time to first token, or the equivalent of a silent “Hmm. Let me think about that.”
Grok sez:
Quick rules of thumb for 2026 conversational use
≥ 100–120 tokens/second → most users rate the experience as “fast” or “very responsive” for everyday chat.
60–100 tokens/second → still feels okay / natural to the majority of people, especially if time-to-first-token (TTFT) stays under ~0.8–1.2 seconds.
< 50–60 tokens/second → starts feeling “slow” or “laggy” to most users in back-and-forth single-sentence exchanges. This is where complaints become common (“why is it thinking so long?”).
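Those rules of thumb reduce to simple arithmetic: perceived response time is time-to-first-token plus reply length divided by generation speed. A minimal sketch, using the figures quoted above (a ~100-token paragraph, a 1-second TTFT) as illustrative assumptions rather than measurements of any particular model:

```python
# Back-of-the-envelope latency: how long until a full reply is on screen,
# given time-to-first-token (TTFT) and a steady generation speed.

def response_time(tokens: int, tokens_per_sec: float, ttft_sec: float) -> float:
    """Seconds from sending the prompt to seeing the last token."""
    return ttft_sec + tokens / tokens_per_sec

# A ~100-token paragraph at three generation speeds, assuming 1 s TTFT:
for tps in (20, 60, 120):
    t = response_time(tokens=100, tokens_per_sec=tps, ttft_sec=1.0)
    print(f"{tps:>3} tok/s -> {t:.1f} s for a 100-token paragraph")
```

At 20 tokens per second the paragraph takes six seconds, an awkward pause in conversation; at 120 it lands in under two, which is why the faster figure feels like talking to someone rather than waiting on something.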
We associate, like it or not, speed with intelligence. We can’t wait overnight for the answer to the Ukrainian question. We want to know now. We want it explained like we’re five. We don’t want wisdom, we want brain spew, factoids, quips and hot takes. We don’t want meditation, we want media.
The New AI God Botherers
These are the high priests of the new generation poised to cure cancer, halt global warming, reduce the side effects of bubble butt surgery and redistrict the state of Indiana in order to change the binary to our favorite color. Their calling card is ‘Trust Me’. Of course the most clever of them know that the most revenue comes from the largest market, and the first one to cross some imaginary finish line to incremental utopia will gain market share. So they are racing for mindshare now.
The Achilles heel in this strategy, which inevitably has to be fast and furious, is that moats have to be built around business models. One cannot simply wait to build something more perfect, because in fact nobody knows what more perfect means. We simply know what it means to be first, or to be brilliant, or to cheat. That’s the way the money goes. Lots of money. You already know the players, and you may have even placed bets on your favorites. But know it’s not only a technology bet.
Gödel’s Incompleteness is unavoidable. So you must have faith. How do you declare that AI is here? That AI is real? That AI will be everywhere? That AI will change your life for the better if you accept it as your personal savior? This road is well-traveled. Expect the same scenery. The same exaltations. The same sour disenchantments. The same shameless exploitations. That’s not the fault of the technology of artificial intelligence. That’s the fault of how we faithfully feed and heed the beast. I fear no rogue AIs as much as I fear rogues. We can fight software with software. We always have and we always will, but the rule of law and the wisdom of ages cannot keep up with the art of the deal. Just remember I told you so.
—
Stalker
Two years ago I read Roadside Picnic. I didn’t realize until yesterday that Andrei Tarkovsky had made a film about it. In my first corporate job I fell in love with the film The Sacrifice, and I don’t think I ever ventured into Tarkovsky again. I was too young and ambitious to have time for the morose sadness of such films, or for the books of Márquez popular at the time, like Love in the Time of Cholera. Even the titles turned me off. What kind of sad, worn out person would even think of reading One Hundred Years of Solitude? At that time I was a conquering soul, but I never forgot the slow burn of Tarkovsky. So two days after my recent medical procedure, with recovery time to kill, I watched Stalker.
I didn’t know what to expect, but I was enthralled by the opening scene I had seen while trolling YouTube for a good film to watch. Then the whole thing happened, slowly. Impressionistically. You absolutely must see this film, even again. It’s a meditation of the wisest, fuzziest sort necessary for a step into the unknown. I cannot think of a better metaphor for the unknown future than this film. It comes down to this scene as the three men arrive at the Room, which grants a person’s innermost desires.
Pretty much nails it. This, from 1979 when we had other existential problems in mind.
Now I want to drill down on a particular piece of the dialog that captures something that is always on my mind in the face of outrage panics in the American pantos of derangement. It is the following line from the script (downloadable here).
Oh, stop it, stop it! A single person cannot possess such hatred or, let’s say, such a love ... that could be passed over the entire mankind! One thing is money, woman, or revenge that your boss would be hit by a car. It’s a trifle. But to rule the world! The right society! Kingdom of Heaven on the Earth! It’s actually not a wish, but an ideology, measures, conceptions. Unconscious compassion is not yet able to come true.
— The Writer, Stalker
In this regard we should measure the hype and despair over AI in terms of the desires and wish-fulfillment of the people who have agendas to do anything but be practitioners. Bernie Sanders, for example, wants to halt any construction of new data centers. Various skeptics and cynics have their angles. Of course Sam Altman is the snakiest of oil salesmen in my estimation. Some cat named Diamandis is recording podcasts with little lobster plushes — he’s a maximalist and rather playfully shrill. He reminds me of nothing more than those guys who used to try to sell us the next big cryptocurrency. How’s that working out? Once more, pay attention to the hype but don’t take it seriously, good or bad. What matters is the underlying science and the direction of the engineering. Engineers are the bottleneck. It’s not going to be everything, everywhere all at once. More on that later.
The other artistic metaphor is Neil Gaiman’s American Gods.
“People believe, thought Shadow. It's what people do. They believe, and then they do not take responsibility for their beliefs; they conjure things, and do not trust the conjuration. People populate the darkness; with ghosts, with gods, with electrons, with tales. People imagine, and people believe; and it is that rock solid belief, that makes things happen.”
― Neil Gaiman, American Gods



