What if the entire internet is wrong? What does it mean, therefore, if all of the information AI models get from the internet is faulty except for generalized rules of natural language grammar, Markov chain generated text, and a little bit of self-correction? It means you can build very polite corpo-speak bots that more or less confirm the conventional wisdom of literate humans. In other words, something categorically better than a search engine but a couple of hugs and decades short of a trusted friend. Welcome to right now, because that’s what we have already.
Let me give you the bottom line right up top before I delve into consequences that I think will show up later. The bottom line borrows directly from a line of reasoning I just heard.
You don’t get superhuman performance by doing better imitation learning on human data.
This is another statement of the difference between selection effects and treatment effects. For those of you who forgot, selection effects are why Harvard grads get good jobs. Harvard selects the top high school students and throws professors at them. The grinders grind, and nobody blames you for hiring Harvard grads. On the other hand, treatment effects are why Army Rangers are world-class. The Army trains the hell out of good soldiers and makes them that way. Harvard frosh are found. Army Rangers are made.
Analogously, now that we are in the middle of the AI wars for monopoly power in the next generation of computing, very much like the UNIX wars of yesteryear, some AI companies are benchmarking their products the same way. Some lean on what is called Reinforcement Learning from Human Feedback, or RLHF (selection effects), and others are making their AIs talk to themselves, second-guess their own thinking, and think harder (treatment effects). In other words, some AIs will be talking-head news anchors; others will be original thinkers. Obviously Google’s recent Woke AI was the former, and perhaps this new thing called Q* is the latter. But essentially, all of the billionaire rulers and their hired geniuses know the difference. The open question is what will we peasants buy? Will we even be able to tell which is which? Will we care? After all, isn’t the purpose of AI to do the grunt work so that we can all do underwater basketweaving and interpretive dance?
No.
The danger point here is that these founders and funders are on a mad dash to grab every iota of language from every digitized source possible and use that corpus to fuel the fundamental intelligence of their AI model. I know this because a project I am helping to fund had to spend extra bucks to get a fairly well-known Russian corpus integrated into the standard Wikipedia stuff. The more your AI knows, the more you pay for corpus access rights.
But I was not in the Business Intelligence game for 30 years without understanding something fundamental about digital representation of knowledge, some of which is known as the GIGO rule. Garbage in, garbage out. If your data is pig shit, no amount of spinning brain graphics, 3D bar charts, zooming mathematical formulas or polite politically correct language is quite enough lipstick and Lysol. That’s why CIA disinformation campaigns work. They know where to inject the fear, uncertainty and doubt into the river of facts. Deception is a fundamental enabler of the intelligence business. It’s why political dirty tricks work. It’s why negative campaigns work in electoral politics. That’s why 25,000 radical students and their enablers can dominate the news cycle without three dozen of them having command of the facts or any ethical wisdom. That’s why millions of Americans are directed to pay attention to optics.
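The GIGO point can be made concrete with the Markov chain text generation mentioned at the top. Here is a toy sketch (my own illustration, not any vendor’s pipeline): train a first-order word model on a corpus, slip one poisoned sentence into that corpus, and the model will cheerfully hand the poison back to you. There is no lipstick layer that un-learns it.

```python
import random
from collections import defaultdict

def train(corpus_words):
    """Build a first-order Markov model: word -> list of observed next words."""
    model = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n, seed=0):
    """Walk the chain from `start` for up to `n` steps."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        nxt = model.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

clean = "the cat sat on the mat and the cat slept".split()
garbage = "the moon is made of green cheese and the cheese votes".split()

model = train(clean + garbage)    # one poisoned sentence in the corpus...
print(generate(model, "the", 8))  # ...and the model can repeat it verbatim
```

Scale this toy up by a few billion parameters and the principle is unchanged: the model’s “knowledge” is whatever was in the corpus, poison included.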
The postmodern dream is for optics to matter more than facts. Decide how you feel before you conclude what you think.
So I reiterate. If an AI is smarter than some of your friends, when do you start trusting it more than your friends?
The Private Corpus
When you are in a business on which lives depend and it is a mature business, like air traffic control or electrical power distribution, there are very specific kinds of intelligence that you need to absorb and understand. I’m sure that there are hundreds of specialties like this. Federal judges and pharmacists. There are some things humans do well, and there are some things computers do well. However, when there is a private corpus of critical knowledge, humans will have to spend years absorbing and understanding it. That’s why medical schools are expensive and take lots of time to develop human intellectual muscle memory. The AI guys believe they can do selection effects, assemble the right corpus, and then crank out medical advice. Data center sized supercomputers do not learn by doing, they learn by simulating doing. They spit out words, and surely will also spit out actual muscle memory commands to humanoid robots.
You have to run a 10.1 100m to get the attention of a legendary track coach like Bob Kersee. Then you get to his private corpus of knowledge. What do you do to get to the private corpus of Microsoft Bing? What do you do to get exposure to the private corpus behind the CIA World Fact Book? You don’t. You never will. You will only pay for the privilege of speaking to the AIs you trust. You won’t know what kind they are (selection or treatment) unless you build one yourself. Guess what, human? You will always be slower.
Common Sense
There will always be human beings who are dumber and slower than you. Yes, I mean you. Illiterates won’t read this, and they won’t have AIs to read it aloud to them at their Flesch-Kincaid level until the price for AI production is really cheap, like Elon’s reusable rockets: long after the triumph of monopoly. There will also always be humans who are more attractive, smarter, more popular and more corruptible than you. You will foolishly give them your attention and some measure of your trust, you silly goose. You won’t likely have AIs to track them and debunk their blather for you. Well, you might if I get influential enough to have such an AI built for a reasonable price. So you will have to make decisions with partial information of wobbly integrity, which has always been the case and will continue to be the case.
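The Flesch-Kincaid grade level mentioned above is a real, published readability formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch follows; the syllable counter is a crude vowel-group heuristic of my own (production readability tools use pronunciation dictionaries), so treat the numbers as approximate.

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, discount a typical silent final 'e'."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

print(round(fk_grade("The cat sat on the mat. The dog ran."), 1))
```

An AI reading this essay aloud to someone would first compute a score like this, then rewrite down to the listener’s level.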
Today, you can recognize an idiot driver in the lane to the right of you who is driving drunk. You will not be able to recognize a drunk AI or one trained on a corrupted corpus. Nor will you be able to audit the corpuses of those for consumer use. Is Good Housekeeping still around? Consumers Union? Underwriters Labs?
Will common sense be good enough? Yes, with people. But to what extent are people for whom common sense works going to be the people we can work with? What if they are second-guessing you presuming that you are second-guessing them? Can you have a reasonable conversation? No. We don’t have manners. Our AIs have manners. Our agents and representatives have protocols. We’re kinda unleashed and a little bit crazy, right? We don’t know how to deal with asymmetric conflict. Is that person dumber and slower than me? Is that person more attractive and smarter than me?
Nobody gets you. You will have to deal. You will ask for AI help. Consider the AI generated picture of the librarian. Is she really behind bars or is she reaching out to help? She’s in both places at once. All we know is we can’t get to the corpus. We can’t know the source of the AI’s knowledge. We’re in that strange position of having to take a machine at its word. Do you really know what is hooked up to the ‘engine light’ in your car? Will your car kill you?
The Stoic Angle
The Classics are everything that is not locked down by the industrious capitalists who charge for knowledge. The rest may or may not be poisoned with ideology and idolatry. Perhaps the last real scholarship was done in the days of Einstein. That’s what Eric Weinstein says. I keep buying old books for cheap, and I stay away from new music. My bets are hedged with HxA and today’s unconventional thinkers as well as the scholars of the Greatest Generation.
Focus on development of your abilities without lying to yourself.
Recognize character & skill development in other individuals.
Make alliances with them and build trust.
Use the Lindy Effect to your advantage.
Good times and bad times will come and go.
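The Lindy Effect item above has a precise statistical reading: for non-perishable things like books and ideas, the longer something has already survived, the longer you should expect it to keep surviving. A minimal simulation, assuming heavy-tailed (Pareto) lifetimes — my choice of toy distribution, not a claim about any real catalog of books:

```python
import random

def expected_remaining(age, lifetimes):
    """Mean additional lifespan among items that survived past `age`."""
    survivors = [t - age for t in lifetimes if t > age]
    return sum(survivors) / len(survivors)

rng = random.Random(42)
# Heavy-tailed lifetimes: Pareto with shape alpha=3, minimum lifespan 1.
lifetimes = [rng.paretovariate(3) for _ in range(200_000)]

for age in (1, 2, 4, 8):
    print(age, round(expected_remaining(age, lifetimes), 2))
# The older the survivor, the longer its expected remaining life:
# for this distribution, roughly half its current age again.
```

This is why the old cheap books are the better bet: the 100-year-old classic has a longer expected future than this year’s bestseller.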
We will have different kinds of trust issues with humans and with machines. It’s important that we sort these issues out in a dispassionate and ethical manner. Consider the source.