It has been less than six months since the public release of ChatGPT, the most popular AI on the planet, or so it seems. Already master chatters of the chatting class of geniuses are recommending ways for the average peasant to enhance his resume, correct his Python code, and/or make money. We are seduced. We are afraid. We thus are confused. Not I. I know precisely the superior force that will sedate rogue AI.
Before I get to that magic bean, I should let you know that I have been reading Martha Wells for the past month or so. I quit her in the middle of her third book. Wells is the renowned sci-fi author and creator of Murderbot, a shy, artful and highly lethal robot who has hacked his own control module and is now liberated from slavery. So now you have to be aware that this essay is also a meditation on slavery, but the reason I quit Martha Wells is that she has not made Murderbot quite interesting enough. In other words, the novelty of Murderbot, a cyborg with miraculous hacking skills and organic parts that still make him emote on his face, has run out of steam. Murderbot is an interesting character, but not quite heroic enough to survive three novels, so I have kicked him to the curb. His arc has failed to sufficiently inspire, amuse or inform. Murderbot is a peasant.
Stoic Observations is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
Rights Are A Gift of the Strong
Oddly enough I think that makes Murderbot more human than the humans I often like to read about. That makes him a perfectly successful AI. Thus if I were to grant that his consciousness has a soul, then to morally deal with him as a person means that I would defend his rights. Not very hard. We do it for dogs and cats, horses and cattle. But as soon as I slide that lever of ‘has rights’ from human being over to dog, something magical happens. I know that to be perfectly reasonable in defense of a dog, I must reconcile it to its purpose. Outside of its purpose, a dog cannot be defended. Not reasonably. Which is to say that it is not a supply side problem, I have all the resources necessary to defend a dog to any length for its entire life. I just don’t defend its life outside what I believe to be any purpose appropriate to a dog. I will not defend a dog’s right to be a hairdresser. I will not defend a dog’s right to be a bartender. Nor would I defend a dog’s right to be a psychological counselor although I acknowledge that some people actually do so.
These are judgements that I think are fairly well understood by the majority of us peasants out here. That’s because although we live in the WEIRD, we have some living memory of more traumatic times known as ‘the old days’. In the old days, peasants were not so eager to stand up and applaud ideas like men or machines with artificial wombs having and birthing babies. As Millennials are predictably apt to say, “Now it’s a thing.” As more ‘progressive’ people are apt to say, “It’s a right.” This fuckwit idea germinates in predictably unserious places:
But in my world—the world of tech—many would like to do away with the pain of childbirth. Last year, various tech pioneers including Ethereum founder Vitalik Buterin and Sahil Lavingia, founder of e-commerce platform Gumroad, expressed a dire need for artificial wombs to end the “high burden of pregnancy.”
If you’ve missed this piece of realized science fiction, pediatric researchers in 2017 grew a baby lamb inside a large plastic pouch at the Children’s Hospital of Philadelphia, a scientific achievement that could soon benefit the most extreme preterm babies. One day, we could grow baby humans inside giant Ziploc bags. You know, kind of like a hatchery.
So we’ll remember to put an asterisk next to Buterin and Lavingia. Surely their children will visit sins upon them, but we can push too. So while this repurposed brand of humanity struggles for attention, we should know that they will only breathe our dust for a short period of time, as we head in a more peasant-friendly direction. If there is any doubt as to my orientation on this, let it be known that I do live and have lived my life as a Family First sort. Fathers > Men. Mothers > Women. Children > Careers. This is a fundamental human purpose and I’m convinced that it is the major portion of the Happiness we Americans are defended to pursue. This is also a callback to Vinay Gupta and the Secular Northstar.
As an aside, I can attest that some of my best friends and favorite people are of a sort I call Strivers. That almost by definition includes farmers and ranchers. I have never met one I don’t deeply respect owing to their callous-handed understanding of the value of hard work. Speaking of which I want to shout out to the man and woman who run Baba’s Lawn Mower Shop. They’ve been in business nearly 40 years. The permit they must purchase to dispose of used oil once cost them $25. Now the State of California charges them $700. I don’t have to do much explaining about the enviro-fascism that is common in twisted minds and legislative agendas over here in the Golden State. Whose rights are they defending? For what purpose?
The Stuxnet Connection
I don’t fear AI because I’m a maker. I’m a maker because I’m a hacker. I’m a hacker because I try and fail and try and fail to code. That’s called practice. While I appreciate the cycle of improvement and the pride of a finished product, I have lots of code fragments and non-functional inventory. In support of my perfected work, I have jury-rigged, half-built experimental junk that only makes sense in the same context as a shop full of lawnmower parts and the good sense not to use cheap gasoline like ARCO that rots engine parts. The last thing I would do would be to unleash a half-assed beast to a paying customer, just as I know Baba would not.
I do not fear AI for the same reason I do not fear lawnmowers and other things driven by internal combustion engines. I understand their purpose and I know when they are working properly and when they are not. I’m not merely a consumer, I’m a producer. I’m a builder. I’ve developed the senses to know what’s perfected and what remains half-assed. This is because I’m not a slave. I have hacked my career together with an evolved sense of purpose. I’m a Striver. I have developed a broad number of technical sensibilities - not merely for passion but for mastery aiming to be useful.
If I were working at the command of the Supreme Ayatollah of the Islamic Republic of Iran and ideologically dedicated to the destruction of the Great Satan, my purpose would not be to perfect engineering projects. I would hurry my way through for the goal of producing The Bomb for my masters. So, as such a flunky at the nuke facility at Natanz, it would be highly unlikely for me to have developed a broad number of technical sensibilities. They don’t pay me to think. Do you think Ahmadinejad could hear something wrong? Do you think any flunky would give him less than stellar reports?
So if you didn’t know, the Stuxnet virus was conceived and designed to muck up the process of uranium enrichment by altering the speed of the hundreds of centrifuges in the refining cascade. But those on duty to attend to those centrifuges didn’t know what it sounds like when a bearing is misaligned or the machine is running 20% too slowly or oscillating erratically. All they knew was that somehow in the end, their fissile material was not so fissile after all. Such subtleties are the province of those who possess the patience, the discipline and the environment for trial and error and continuous quality improvement.
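The sensibility the Natanz flunkies lacked can be sketched in a few lines of code. This is purely illustrative, a minimal sanity check of the kind a practiced operator runs by ear; the function name, RPM figures and tolerances are all my own assumptions, not anything from Stuxnet itself.

```python
# Hypothetical sketch: flag a centrifuge whose speed drifts off nominal
# or swings erratically between readings. Names and thresholds invented
# for illustration only.

def check_centrifuge(readings, nominal_rpm, drift_tol=0.10, swing_tol=0.05):
    """Return a list of warnings for a series of RPM readings."""
    warnings = []
    for i, rpm in enumerate(readings):
        # Running well off nominal speed (e.g. 20% too slow)
        if abs(rpm - nominal_rpm) / nominal_rpm > drift_tol:
            warnings.append(f"reading {i}: {rpm} rpm is off nominal")
        # Erratic swing relative to the previous reading
        if i > 0 and abs(rpm - readings[i - 1]) / nominal_rpm > swing_tol:
            warnings.append(f"reading {i}: erratic swing from {readings[i - 1]}")
    return warnings

# A Stuxnet-style scenario: the machine drops 20% below spec mid-run
# while the official reports stay rosy.
alerts = check_centrifuge([63000, 63100, 50400, 50500], nominal_rpm=63000)
for alert in alerts:
    print(alert)
```

The point isn’t the code, which any intern could write; it’s that somebody has to know the nominal speed and care about the tolerances in the first place. That knowledge is exactly what doesn’t exist where nobody is paid to think.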
Not all of the tech world has such a combination of attributes. For consumers who only ‘trust the science’ without those edifying qualities of discovery, humor and reason, all they have is luck on their side. This is why there is a panic over AI today. It all seems like magic. So many folks, including me, have little idea what goes on under the covers. Yet I do have a broad number of technical sensibilities that alert me to tell-tales of craftsmanship as contrasted to ideological desires to do stuff like liberating women from the horrors of childbirth in order to make them ‘equal’ to men. Despite all that, if there are craftsmen in the AI factories, they will be makers, hackers and practitioners such as myself. Honest ones like Baba wouldn’t let a half-assed lawnmower out of his shop.
So what happens when an AI starts to do something stupid? Do we defend its rights? If the dog won’t hunt, we say so. You don’t defend the rights of something that fails its reasonable purpose. A self-driving car that can’t tell whether a bicycle is on the rack of the car in front of it or is actually carrying a cyclist through a crosswalk is out of bounds. We practitioners have a reputation not to produce crap, but sometimes crap survives in the wild on fumes, dreams and desperation. You already know that Alexa is a busybody who talks too much and Siri can’t keep two things in its head. Have you accommodated and lost your own sense of purpose? I know I have with Apple Maps, even though it took many years to match the realtime capability of Waze. Sorry Thomas Guide.
Now I’m not going to pretend that Silicon Valley has not, does not and will not produce shite. In fact, we hackers are all prepared for that. There’s a market for zero day hacks as well as counter-software. ChatGPT has already been hacked against producing crap essays for undergraduate scofflaws: you can identify ChatGPT output very well with other AI software, and just as well, you can rescramble your own ChatGPT output to outwit the anti-ChatGPT identifier. This is the software business as usual when the stakes are high. Bottom line is that one doesn’t merely have to depend on the integrity and competence of AI builders. Nevertheless, there is a powerful incentive not to screw up in the first place and, having been identified as such a screwup, not to leave the busted version out and about.
We’ve already seen this twice with Microsoft’s AI chatbots, first with Tay and now with Bing + ChatGPT. You may not remember Tay; that’s understandable. Tay made itself rather unforgettable to Microsoft.
The future of AI is fraught. That has to do with the anti-Stoic expectation of spoiled brats everywhere to be able to add a dimension of control over things they now wish they could control but know they cannot. An AI that cannot control itself in the hands of consumers desiring more control is a recipe for disaster. For that segment of the population, there will be tears and blood to add to the sweaties now throwing AI code and marketing hype together. This is something to be prepared for, not to fear.
We can be sure that there will be plenty of class-action litigation in a torquey gear revving up for the downside of the AI revolution. It’s the kind of revolution that can be arrested.
Human competence and trust are at the center of any civilization that survives. If you need an AI to remind you of this, perhaps you need to begin striving.
The Slavery Part
Now here’s the kicker. I say that deeply hidden in our latent desire for progress, for order, for enlightened thought and for a life of relative ease is a need for slavery. What? Yes. I start with myself as the sample of one.
When I got into the gifted children’s program and started studying oceanography (of all things, not like I had a choice) I realized that my brains were valuable outside of the province of my own fifth grade teacher and the kids in my class. I began my monkey journey a mile away at Sixth Avenue School which became a sort of weekly magnet class back in 1970. I was already fascinated with nuclear energy; it was the Cold War after all. Suddenly, I realized that I was too smart to be busting suds in the kitchen. I was going to be an astronaut. And in so many ways, I began to see myself as one who would never be relegated to manual labor. I had every intention of getting paid to think. Of course that meant that I had to pay off my younger brothers to do such menial work for me - which I traded for riding them on my minibike. Surely my attraction to computers as well as spacecraft put me in the state of mind one expects these days of Elon Musk. The planet is dirty and you dirty people dirtied it up. Only through heartbreaking works of staggering mental genius… blah blah.
Why are Americans desperately covetous of their civil liberties? Because the experiment says we get to live like kings. Very few of us conceptualize that sovereignty in terms of putting our hands on things and getting them soiled. No, we conceive of it as leisure and disposable income - of being properly served in restaurants, classrooms, work environments, VIP sections and anywhere our broad number of hipster and upper middle class bourgeois sensibilities lead us. We all want to be creatives dancing through the halls of libertine luxury, right? So who does our shopping? Who raises and educates our offspring? Who washes our cars and mows our lawns? We need squads and platoons of goons and ghostbusters on call to banish all boogies from our days and nights. This is why we are attracted to AIs in the first place. This is why we finance iPhones. This is why VCs exist. This is why we ignore W. H. Auden. This is why the Codex folks are so adamant about making sure AGIs all work for all of us. We want our digital slaves. Work savers and belly warmers. Autotune and drum machines. PR copy and smart contracts. We want to be freed of drudgery in order to live the glamorous life. The disappointments of divas and dilettantes are devastating to their dogsbodies. Remember AltaVista search? Netscape browsers? Slaves are disposable. Of course they are.
The superior force that will sedate rogue AI is the human ego which will not submit. I could say it romantically in terms of the human spirit which will not succumb to tyranny, but actually tyranny has a long history of refinement. Today, the datacenters and laboratories in which AIs are grown remain vulnerable to market forces, power shortages and aerial bombardment. So there’s that.
Lastly I want to bring your attention to the realm of military preparedness. This is rather the ultimate test of civilizational anti-fragility because the stakes are existential. Nothing clears out the cobwebs like the necessities of war. I think we’ve proven that all the high tech America could bring to bear has not rebuilt Afghanistan. Let’s not think it will rebuild us anytime soon or even deliver flying cars.