Note that I will be using the term ‘shit’ rather often in this essay because I want to emphasize the Joe Theismann leg-breaking nature, at the lightweight end, of the gore-filled, retch-inducing content of intel relating to the evil deeds of mankind. This as opposed to the dizzying, eye-roll-inducing, pathetic ‘crap’ generated by the TikTok Generation.
I haven’t spoken about it much here but I am really into spy shit. I’ve read le Carré, Littell, Ludlum, Powers, Macintyre, Greaney, Coes, Herron, Matthews, Forsyth, Greene, Lynds, Kolb, Schneier… Jeez, I’m sick. And none of this includes much in the direction of military and police fiction like Martin Cruz Smith and Clancy, nor formal histories and memoirs of former officials. I discovered a long time ago that my field of Business Intelligence is very much parallel to that of the actual intelligence services. The latter group, especially since COTS, has better tools, smarter people and more riding on their investments. They also have devious minds, huge budgets and they’re all in.
So OpenAI CEO Sam Altman seems like a nice guy. So is the guy who designed the iPhone, Jony Ive. But I’m not sure the appeal of his product, whatever Altman’s conscientiousness and public-spiritedness, is going to be any match for what the boffins at MI5 will be up to. They will, of course, be playing with his and all open source code from their coverts.
I mentioned not long ago that I was looking for real world oracles. I had hoped in some way that my association with the FBI could get me access to a better class of geopolitical open source information. I had long been a subscriber to Stratfor and a reader of Thomas P.M. Barnett since the beginning of that organization, although now I cannot afford the subscription. It turns out that I won’t live long enough for shit to be declassified and authors like Ben Macintyre or Jason Matthews will have time to publish it in their inimitable styles. So I am resigned to living in the darkness of shadows produced by shadow intelligence operations. Will open artificial intelligence help? Not for a long time. Not this LLM stuff.
So really what this is all about is asking whether or not it is reasonable to assume that those who are to be the beneficiaries, victims or co-dependents of the next wave of AI tools will have any collective advantage that cannot be outdone by the traditional sources of disinformation. I say we won’t, any more than the masses of smartphone holders will outdo professional photographers.
The Whitney Deception
Despite the fact that I am sanguine, we should keep in mind something that happened to the cotton industry back in the bad old days. The long and short of it was that manual slave labor was to be obviated by the invention of the cotton gin. That was the aim of its inventor, Eli Whitney. In fact, this labor-saving device was put to use not to save labor, but to make more grades and types of cotton profitable for harvesting. So think about it. With the next wave of LLMs, which have already demonstrated themselves capable of generating undergraduate term papers, undergraduates are not going to be displaced. Only a massively greater volume of essays of that quality will become available. This will crowd out some local reporters, kind of like streaming is crowding out some local radio stations. But the more difficult aspect of this will be that the glut of unmoderated, uncurated info-spew of the interwebz will be greatly increased. It will be like the 1970s, when every vacant lot at a major intersection turned into a gas station. Why? Because everybody could afford cars, and there was demand for all the support services. But are there really that many more places to go?
Remember I said that brains are a cheap commodity, and they’re getting cheaper all the time.
Is Information Truth? Is Intelligence Truth Discovery?
What has been clear to me in my fundamental understanding of humanity is that we are at this moment possessed of more literate people on the planet than ever before. That means they will be hungry for words, knowledge and life hacks. The internet revolution has proven that the traditional communications media companies were incompetent to satisfy this hunger. They couldn’t upchuck enough semi-digested worm stew for billions of baby birds peeping madly. But from that mama bird’s eye view, those gaping gullets would take anything dropped in. That is what an intelligence officer knows. What I know is that CFOs who are drinking numbers from the fountain of a $5M system are not likely to replace it on a whim. What the public knows is that Enron execs were capable of manipulations.
This all adds up to a basic understanding I had reached, even prior to Enron’s collapse: a lot of very intelligent and capable people are drinking from faulty fountains. You would be shocked to know how many F500 execs get their financial information from a warren of linked spreadsheets and crap systems like SharePoint. The public is now beginning to understand the dynamics of disinformation, censorship and counterintelligence at the global scale the intelligence services are used to playing at. It’s not quite the Great Game with Moscow Rules, but the stakes are just as significant, especially when it comes to the matter of democratic processes. I’m also saying this as an early blogger who was in the thick of things when terms like ‘fisking’ and ‘msm’ were just being coined.
How well, how far and how long can a fantasy be disseminated through a literate population of billions? Perhaps indefinitely. After all, consider the demand. If people want to believe in Iraqi WMDs, that creates its own market of seduction for content producers. We now know that Saddam himself floated the rumor, giving it its own kernel of truth. Motivated reasoning is dependent on markets of information, and vice-versa. News junkies among us know this is a proven quantity at Fox News, as recently divulged.
Speaking of WMDs, I am drawn at this moment to a long radio show I heard while driving from Fremont to Hayward back after 9/11. The subject was the difference between nuclear, chemical and biological weapons - how they operated and effective defenses against them. The Ambrose Bierce in me waxes nostalgic for the terrorist noose that once focused the American mind.
The Death of Romance
It’s too bad that most of the poetry we memorize in America has issued from the sullied gullets of gangsta rappers. Maybe one day I’ll bother to listen to Billie Eilish or that British redhead kid who wears vests and plays acoustic guitar. I hear he’s got skills. But on Easter I asked Siri to play ‘traditional Anglican hymns’ and came up with a serious Googlewhack. I hear that the GPTs are more subtle and discerning, but Apple Music knows nothing of Charles Wesley. Should I expect any better? My current thought on the matter is that we shouldn’t, but we’ll have plenty of time to think about it. For example, my experience tells me that Siri and Alexa are stupid in preposterously obvious ways. Yet when they are correct, they are correct in ways that are astonishing and reliable. I have learned exactly how they are stupid. This is the new education. I know what I want from them that I cannot get. I know what I get from them that I do not want. I also know what my habits teach them and how they anticipate what I used to be. Both are a long, long way from completing my sentences.
If AI doesn’t reflect human bias, we won’t believe that it is actually intelligent, but robotic.
So here’s where I dash into an interesting corner. What I’m saying is that right now people are basically spewing the same platitudes about AI that they were spewing about the WWW 25 years ago, but they are forgetting a couple of things.
Startups wanna startup. That means they have to prove everything at scale. That means they can’t be satisfied selling their goodies to small markets. That means it’s in their interest to get as many brain cells and eyeballs activated as possible. That means the product goes to the ugly ass public. Has that business model paradigm been transcended? It is not a necessary condition of being an AI company. Let’s try not to forget Clubhouse already.
The human bias of people trying to build ethical AGI is the bias of a particular class of people. That class is a different class from those who have already weaponized the tools of the intelligence services. In the news is ChaosGPT, whose explicit purpose is to exterminate mankind. Presumably through text, which makes it the new Loompanics. Of course AIs and AGIs will be on the Red Team.
The Americans who build and buy AGI are a different sort of people who have remarkably found various foolish things to be worthy of pursuit. This ties in deeply to Peasant Theory. America has a yuppie problem, and that’s why one class of yuppies voted for Obama (mistake) and another class of non-yuppies voted for Trump (mistake). If, like me, you see intelligence as the rate of speed and effectiveness of goal accomplishment, you understand that it has nothing to do with wisdom or its opposite, foolishness. Americans will accomplish foolish things with artificially intelligent expedience.
The Man Who Mistook His LLM for a Dog
What’s impressing us today are the LLMs that are a bit recursive, meaning you take the Ask Me Anything paradigm of Auto-GPT and ask it, like a dog, to fetch some information. It’s remarkable that these dogs are so articulate. But like a dog, it never comes back and asks me why. It just fetches. In fact it hunts and tracks down subtle things with remarkable precision. It approaches the level of a personal shopper, but not quite. It doesn’t know me. If I ask it question A, it doesn’t take my bias into consideration. It will answer the same way for anyone. It matters to me that my dog fetches for me.
I would hope, because I suspect that Altman has considered it, that some of these public AIs do resolve the question of the distribution of intelligent information. So according to that hope, I would like ChatGPT 5 or 6 to know many more hymns by Charles Wesley. Further, I would want it to know that’s just my bag for the moment. But as I think about that matter, I am made startlingly aware of one more disruptive thing we’re going to see a bit more of.
The current paradigm is for individuals to use ChatGPT to outdo Google. Also, developers are figuring out ways to embed it in their own code. But what about teams? The example I came up with is using an LLM to communicate augmented tactics to a small group of dedicated realtors whose aim is to take over commercial real estate in Los Angeles. As I imagine that to be a straightforward and cutthroat business, given my experience with one such infamous company in NYC, I can visualize the mercenary effects.
Dogs can be vicious. People will train them to be just that.
What about teams of AIs? That’s a different kind of hunting with dogs. Make sure you’re riding a horse. Still, it won’t save you from spy shit.
Conclusions
The Stoic angle of course is to concentrate on wisdom with the grounded understanding that people will abuse tools, and that this abuse will be survivable. LLMs are just one type of AI that may contribute to AGI. Whether the existence of an AGI is a long term or short term eventuality, we can understand its human tool creators and its human tool users.
VCs are disruptive because large organizations move slowly and propagate the IT tools that manage them slowly. So startups are the best answer anyone has come up with yet to break the dominion of captured corporations and their maladroit business models.
This is what we asked for, agile companies that move fast and break things.
So perhaps a generation of people who have asked for this will decide to walk away from their screens and learn to take a punch and drink from the garden hose as we legendary GenX kids did. Facebook has taken a bath on its virtual reality fantasy, the metaverse, the first simple vision of which came from Neal Stephenson. I’m a big fan of Stephenson, but I guarantee his more recent books are an order of magnitude better than Snow Crash. (Meditations on Neo-Victorians to come.)
My bias is that AIs are built by indoor kids for indoor kids. Human life needs to be more outdoors, not just in Jeep Grand Cherokees. We can all be a lot less robotic.
Toldja:
https://towardsdatascience.com/how-large-language-models-changed-my-entire-osint-workflow-35960099e258
Thank you for the thought-provoking piece. Do you think there are LLMs in use by the spy agencies? If so, which one would you speculate has the best system?