We Are Spirits in the Material World
Oh what a world! What a world!
If you watch TV news, I feel sorry for you. If you read the blogosphere or Substack (well, of course you do), you have a head start on remaining in the first world. The first world is about to get shrink-wrapped.
This is a long story, but you’re going to have to pick it up in the middle of the end. I picked it up in November of last year in a post called Embrace the Slop. This is a play on ‘Embrace the Suck’, a phrase understood by those with the warrior mindset. At the time, I was conceding to the integration of the hallucinations of AI into my specialty of data engineering. That was then. Now, it’s literally a whole new world.
I have long been a follower of Tyler Cowen, going back to the days when I read his Marginal Revolution, written with Alex Tabarrok, on a regular basis. Cowen is the most relatable extremely smart person I’ve ever heard of. He’s brilliant and quick in exactly the way I am merely smart. I especially like him because he defies all of the stereotypes about brainy people being emotionally crippled or socially awkward. Anyway, he went full steam into AI usage and discovered that it was very helpful to him. He’s an extraordinary scholar who does deep research, and when he started using AI to impersonate historical figures, it sparked my attention because that’s precisely one of the things I wanted to do (and still do) in my Lorite Interrogator and WWID projects. So Cowen knows how to use AI as a research tool, and I’ve always wanted to have a graduate assistant. So I’m ready to go with it. Plus, and importantly so, Cowen is not easily fooled or baffled by brilliance. That’s what I love about economists and historians. They recognize how everybody alive can be wrong without it being a conspiracy theory, and they understand the different ways we can revert to a mean without torches and pitchforks.
Coding Supremacy
So I’m aware of NLP; it was very expensive at the time I was called in to spec a project for a law firm five years ago. Now it’s way more capable, and now I understand how to build tools around LLMs (agentic development), having completed several classes over the past six weeks. Just prior to that I was doing some prompt engineering. I would say I just crossed over into the realm of intermediate skills. But I still have some unique visions I’m keeping alive, ones I thought I would have to code within the context of a VC-funded startup. All that changed last week with this video.
TLDR on this video: astrophysicists who write software programs to understand the universe are saying, hell, let the AI do it. Not just ordinary astrophysicists, but the ones who work where Freeman Dyson and Albert Einstein used to work.
The bottom line is this: computers are the best at programming computers. That basically means all software engineers need to learn higher-level languages rather than lower-level ones. This is tremendous news for someone like me who never bothered to learn Java (for example) because I have been building applications with proprietary systems at a higher level of abstraction. Even though I understand programming languages, I never became a software engineer per se. What I understand very well are the intricacies of corporate transformations around technologies written with building blocks built on those programming languages, down to the level of what we call ‘internals’. That means I understand databases not simply because I understand SQL (a high-level language) but because I understand how the database works, as explained by the engineers who write the database in C (a lower-level language) to serve that SQL. I speak to database engineers in English. Now there is a way for AIs to understand English and translate it into SQL and C. LLMs are better than 99% of us at language translation. The trick, of course, is to design the best internals. That is the understanding where software engineers and application guys like me meet in the middle, because they always want to know what the customers I deliver to are asking for, and I always want to know what new features they want to put into the product.
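That English-to-SQL translation is, at its core, a prompting pattern: hand the model the internals it needs (the schema) alongside the plain-English question. A minimal sketch, with a schema and question I made up for illustration, and no particular model or vendor API assumed:

```python
# Sketch: turning an English question into a text-to-SQL prompt.
# The schema and question here are invented for illustration; in practice
# you would send this prompt to whatever LLM you use and run the SQL it
# returns against your database.

SCHEMA = """
CREATE TABLE orders (
    order_id   INTEGER PRIMARY KEY,
    customer   TEXT,
    total_usd  REAL,
    ordered_at TEXT
);
"""

def build_sql_prompt(question: str) -> str:
    """Assemble a prompt that gives the model the schema plus the ask."""
    return (
        "You are a SQL assistant. Given this schema:\n"
        f"{SCHEMA}\n"
        "Write a single SQLite query answering the question below. "
        "Return only SQL.\n\n"
        f"Question: {question}"
    )

prompt = build_sql_prompt("Who were our top five customers last month?")
```

The point of the pattern is that the model only translates as well as the internals you expose to it; a vague schema gets you vague SQL.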
That’s the business, essentially, but of course there’s product marketing, support, security, pricing models and everything else it has taken to run a software company. But now all of us have to face our own shortcomings because AIs are competent and don’t need work-life balance, or sleep.
Oh, and by the way, Claude 4.6, just released this year, wrote a C compiler by itself in two weeks. That’s even lower-level than what the software engineers of databases write. That’s computer science. It’s unheard of.
There is only one caveat when it comes to coding supremacy: right now it’s too expensive for one non-rich guy. Anthropic, who make Claude, spent about $10,000 worth of tokens for 236 hours of Claude compute time. That’s way less than you’d have to pay a team of software engineers to build a C compiler from scratch. I can’t tell you whether the cost will go up or down, but the entire software industry now has to make sense of the increasing capability of AI software engineering, the coding supremacy that is coming soon, and the way pricing and business models have to change as AI adoption becomes more real than it already is. The costs I’m thinking about are token budgets.
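The back-of-the-envelope arithmetic is worth doing. The $10,000 and 236 hours are the figures from the Anthropic example; the loaded engineer cost and team size are purely my own assumptions for comparison:

```python
# Back-of-the-envelope: Claude's compiler run vs. a human team.
# The $10,000 and 236 hours come from the Anthropic example above;
# the $120/hour loaded engineer cost and 3-person team are assumptions.

claude_cost_usd = 10_000
claude_hours = 236
claude_rate = claude_cost_usd / claude_hours  # dollars per compute-hour

engineer_rate = 120   # assumed fully loaded cost per engineer-hour
team_size = 3         # assumed small compiler team
team_rate = engineer_rate * team_size

print(f"Claude: ${claude_rate:.2f}/hour vs. team: ${team_rate}/hour")
```

Roughly $42 an hour against a few hundred, before you count how many more calendar months the humans would need.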
Token Budgets & Human Salaries
Most of us are used to Google. It’s a verb. We all know that Google made zillions because we could ask it our dumb questions and it would give us answers. We know this video is funny because it showed how average people deal with multibillion-dollar technology.
Google was always free to use in the same way television has been: it sells you advertising. AIs are not going to be free, at least not if you plan on asking one to build a C compiler, or something more sophisticated than “gross fat butthole dick poop”. The basic economics of LLMs is ‘tokens in, tokens out’. The more you ask, the more you get. The more sophisticated and structured your input, the more capable and intelligent the output, all the way up to outlining the steps in the catalytic cracking that yields unleaded gasoline, or something on that level. Consequently, the more tokens it costs to process your prompt.
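‘Tokens in, tokens out’ translates directly into a bill. A minimal sketch of how that metering works; the per-million-token prices here are placeholders I made up, not any vendor’s actual rate card:

```python
# Sketch of token metering. Prices are invented placeholders, not a
# real vendor's rates; real prices differ by model and change often.

PRICE_IN_PER_MILLION = 3.00    # assumed $ per million input tokens
PRICE_OUT_PER_MILLION = 15.00  # assumed $ per million output tokens

def prompt_cost(tokens_in: int, tokens_out: int) -> float:
    """Cost of one request: input and output tokens are metered separately."""
    return (tokens_in * PRICE_IN_PER_MILLION
            + tokens_out * PRICE_OUT_PER_MILLION) / 1_000_000

# A dumb question is nearly free; a long structured spec that draws a
# long structured answer costs real money at scale.
casual = prompt_cost(50, 300)
spec = prompt_cost(20_000, 8_000)
```

Under these made-up rates, the casual question costs a fraction of a cent and the spec costs about eighteen cents, which is why the writing-in-paragraphs habit below has a price tag attached.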
I tell people these days that they have to write in paragraphs, and ultimately this will change the way a lot of us write for machines, and ultimately for each other. It will give us a million monkeys on a million typewriters, but somebody has to feed those monkeys. That business model hasn’t been settled yet; there are dislocations in the market, some of it pricing stuff out and some of it pricing stuff in. The capability of AI is being demonstrated surprisingly quickly, but what that means for white-collar work has yet to be determined.
What’s clear to me is that the revolution that made AWS the juggernaut it has become is about to be superseded by an even bigger revolution, and that is what is going to happen to the area of the market called ‘Software as a Service’ (SaaS). The bottom line is that if you use something like Workday or SAP or Salesforce, the news is that AI-based agentic systems will knock those business models out of consideration for two essential reasons:
1. Price-per-seat business models are unsustainable.
2. Customization of those applications just got doable.
Think of today’s SaaS market as you would the pre-WW2 automobile market. No factory customization, limited colors, no power windows, power brakes, automatic transmissions, or air conditioning. That was the job of custom coachbuilders; none of it came from the factory. That’s today’s enterprise software: it comes from the factory as a standard product you pay experts to customize, if you do that at all.
So who is going to build these new AI systems? Everyone. You will build your own just by talking specs to your AI, and it will build the software you want with the quality you need. But that’s just the beginning.
All of the custom software in the world just got cheaper. Everybody just got a research assistant.
So the scary part for me is whether I get to build AI applications with the agentic harnesses and orchestrations I can build, versus whether Anthropic can teach a chatbot to wrangle roughly equivalent results out of ordinary Joes’ Q&A sessions, given all of the questions Joes have asked since the last release.
What About Joe?
It’s a reasonable question that we’ll answer together this year. But I think it comes down to this. We all have choices, but all of our choices (unless we are Tyler Cowen) pretty much boil down to a narrow set of choice worlds. For example, when you go to Costco, you have a whole warehouse full of choices. Everything you need is there, but not the same as the warehouse of choices at Best Buy. Almost none of your choices at Costco overlap with the choices at Best Buy. I don’t think people even ask as much about flat screen TVs as they did 10 years ago. But nobody asks about eggs. That’s the price, you just buy them. But people still ask a lot of questions about smartphones, car insurance and healthcare when they buy those things.
Costco is sprinkled with experts, believe it or not. They are giving you free samples and you kinda want to chat with them. It’s a human thing. That’s a different kind of expert than the guy who’s telling you about the difference between an LG dryer and a Samsung dryer. Think about all the customer service you ever got. Is the quality going up or down? Would you rather talk to a human being or not? It depends. We’ll all sort that out. Used car dealership haggling? Companies like Carvana built an entire brand around eliminating that from the buying process. Politicians lying? AIs are just waiting their turn. The market will decide the way it has for everything else; just now, some particularly expensive experts are being pushed aside for the benefit of consumers. That’s where the money is, in the mass market.
So what are all the ordinary Joes going to ask their AIs, and what consequently are the AIs going to become expert at? Nobody knows.
Spirits
So what we have is an opportunity (should you decide to accept this mission) to bring more soul and spirit into the material world. I think you would be surprised to recognize how much creativity we have siphoned away from that material world with our essentialist minimalism. Why? Because we have actually not paid people to think and contribute as individuals, but in coordination with management by committee, which is generally lousy.
What AI can bring is custom software, novel consideration of accepted best practices, and rigorous auditing with a completionist attitude. AI has time to do ‘all of the things’ so you don’t have to. And now you get to ask whether you’re actually getting paid to do ‘all of the things’. Probably not. You can now do them with your AI assistant, which maybe means you know enough to figure out a market for yourself.
The specific advantage of taking the humans out of a lot of already dehumanized work is a value we already understand. Many of us are bored stiff by the drudgery of our white-collar work. There’s a very big difference between the cultures of staffing positions that are extremely narrowly defined and those that are very broad. Somebody who is the executive assistant for a Hollywood producer is very different from somebody with the exact same job duties at the DMV. So the trick is not to get shrink-wrapped into a position that you couldn’t possibly love. The reckoning takes place everywhere organizations decide to let AI in the door, and that will not happen everywhere, for everyone, all at once. You will get an opportunity, as you did with Amazon, to decide if going to the mall for the mall’s sake is worth it. Going to the movie theatre for its sake despite streaming. Eating at a restaurant for its sake despite Blue Apron. Going to the jazz club despite Spotify, or maybe because of Spotify.
There are going to be class distinctions and disruptions. There are going to be triumphalist social and system hackers who will come and go. (I’m probably already going to ditch my Open Claw post.)
Ultimately, the thing to remember is that there will always be a class of decisions you are going to be ready to abandon in order to get a short story you find credible. There are always going to be new founts of discovery, never before considered, that are now available to you. That doesn’t change; it’s accelerated. So now you have to give more thought to your focus. I know I have. Those are my two themes this year: Focus and Beauty.
It’s Still The First World
Americans are still spoiled. We still expect to be a first world society, but now we have to do so in the presence of AIs that will check us and maybe wreck us. Trust in our society is undermined by people who are disinvested in patriotism and/or isolated into marginal economic and social classes. When AI is used by clever, ethical people and life gets that much better for those on the leading edge, there is going to be an even sharper distinction between those who have useful, viable knowledge and those who do not.
As things stand today, one political party can decide to block an effective national ID system for its own arbitrary partisan reasons, even though such an identification system already exists across the credit bureaus, law enforcement agencies, and insurance companies, thereby keeping it underground and out of public oversight. The kinds of information threats that exist today can be magnified. On the other hand, so can the observability. What we particularly have to watch out for is the nature and amount of power American institutions have, which will be extended and enhanced by non-human AI agents that don’t sleep. This is a new thing to be aware of, and it should concern us as citizens.
So now, even more than a year ago, our need to be virtuous and self-realized should be at the forefront of our consciences. Brainpower is becoming an even cheaper commodity. We can no longer necessarily think our way out of new dilemmas on our own. So get your AIs thinking with you.
Or just watch TV.
