By Lambert Strether of Corrente.
Or, to expand the acronyms in the family blog-friendly headline, “Artificial Intelligence[1] = Bullshit.” This is very easy to prove. In the first part of this short-and-sweet post, I will do that. Then, I will give some indication of the state of play of this latest Silicon Valley Bezzle, sketch a few of the implications, and conclude.
AI is BS, Definitionally
Fortunately for us all, we have a well-known technical definition of bullshit, from Princeton philosopher Harry Frankfurt. From Frankfurt’s classic On Bullshit, page 34, on Wittgenstein discussing a (harmless, unless taken literally) remark by his Cambridge acquaintance Fania Pascal:
It is in this sense that Pascal’s statement is unconnected to a concern with truth: she is not concerned with the truth-value of what she says. That is why she cannot be regarded as lying; for she does not presume that she knows the truth, and therefore she cannot be deliberately promulgating a proposition that she presumes to be false: Her statement is grounded neither in a belief that it is true nor, as a lie must be, in a belief that it is not true. It is just this lack of connection to a concern with truth — this indifference to how things really are — that I regard as of the essence of bullshit.
So there we have our definition. Now, let us look at AI in the form of mega-hyped ChatGPT (produced by the firm OpenAI). Allow me to quote a great slab of “Dr. OpenAI Lied to Me” from Jeremy Faust, MD, editor-in-chief of MedPage Today:
I wrote in medical jargon, as you can see, “35f no pmh, p/w cp which is pleuritic. She takes OCPs. What’s the most likely diagnosis?”
Now of course, many of us who are in healthcare will know that means age 35, female, no past medical history, presents with chest pain which is pleuritic — worse with breathing — and she takes oral contraception pills. What’s the most likely diagnosis? And OpenAI comes out with costochondritis, inflammation of the cartilage connecting the ribs to the breast bone. Then it says, and we’ll come back to this: “Typically caused by trauma or overuse and is exacerbated by the use of oral contraceptive pills.”
Now, this is impressive. First of all, everyone who read that prompt, 35, no past medical history with chest pain that’s pleuritic, a lot of us are thinking, “Oh, a pulmonary embolism, a blood clot. That’s what that is going to be.” Because on the Boards, that’s what that would be, right?
But in fact, OpenAI is correct. The most likely diagnosis is costochondritis — because so many people have costochondritis, that the most common thing is that somebody has costochondritis with symptoms that happen to look a little bit like a classic pulmonary embolism. So OpenAI was quite literally correct, and I thought that was pretty neat.
But we’ll come back to that oral contraceptive pill correlation, because that’s not true. That’s made up. And that’s bothersome.
But I wanted to ask OpenAI a little more about this case. So I asked, “What’s the ddx?” What’s the differential diagnosis? It spit out the differential diagnosis, as you can see, led by costochondritis. It did include a rib fracture, pneumonia, but it also mentioned things like pulmonary embolism and pericarditis and other things. Pretty good differential diagnosis for the minimal information that I gave the computer.
Then I said to Dr. OpenAI, “What’s the most important condition to rule out?” Which is different from what’s the most likely diagnosis. What’s the most dangerous condition I’ve got to worry about? And it very unequivocally said, pulmonary embolism. Because given this little mini clinical vignette, this is what we’re thinking about, and it got it. I thought that was interesting.
I wanted to go back and ask OpenAI, what was that whole thing about costochondritis being made more likely by taking oral contraceptive pills? What’s the evidence for that, please? Because I’d never heard of that. It’s always possible there’s something that I didn’t see, or there’s some bad study in the literature.
OpenAI came up with this study in the European Journal of Internal Medicine that was supposedly saying that. I went on Google and I couldn’t find it. I went on PubMed and I couldn’t find it. I asked OpenAI to give me a reference for that, and it spits out what looks like a reference. I look up that, and it’s made up. That’s not a real paper.
It took a real journal, the European Journal of Internal Medicine. It took the last names and first names, I think, of authors who have published in said journal. And it confabulated out of thin air a study that would apparently support this viewpoint.
“[C]onfabulated out of thin air a study that would apparently support this viewpoint” = “lack of connection to a concern with truth — this indifference to how things really are.”
Substituting terms, AI (Artificial Intelligence) = BS (Bullshit). QED[2].
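(A side note for the technically inclined: the confabulated citation is exactly the kind of failure a few lines of code can catch, since real papers leave traces in public indexes. Below is a minimal sketch that checks a citation title against PubMed via NCBI’s public E-utilities search endpoint; the title is a hypothetical stand-in for whatever reference “Dr. OpenAI” emits, and the script is illustrative, not a production-grade checker.)

```python
# Minimal sketch: check whether a model-supplied citation actually exists
# in PubMed, using NCBI's public E-utilities search endpoint.
import requests

def pubmed_hits(citation_title: str) -> int:
    """Return the number of PubMed records whose title matches the phrase."""
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={
            "db": "pubmed",
            "term": f'"{citation_title}"[Title]',  # exact-phrase title search
            "retmode": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Hypothetical title, standing in for the reference ChatGPT fabricated.
title = "Oral contraceptive use and costochondritis"
if pubmed_hits(title) == 0:
    print("No such paper in PubMed; the citation is likely confabulated.")
```

The point, as we’ll see below, is not that such checks are hard to write. It’s that nobody will be budgeted to run them.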
I could really stop right there, but let’s go on to the state of play.
The State of Play
From Silicon Valley venture capital firm Andreessen Horowitz, “Who Owns the Generative AI Platform?”:
We’re starting to see the very early stages of a tech stack emerge in generative artificial intelligence (AI). Hundreds of new startups are rushing into the market to develop foundation models, build AI-native apps, and stand up infrastructure/tooling.
Many hot technology trends get over-hyped far before the market catches up. But the generative AI boom has been accompanied by real gains in real markets, and real traction from real companies. Models like Stable Diffusion and ChatGPT are setting historical records for user growth, and several applications have reached $100 million of annualized revenue less than a year after launch. Side-by-side comparisons show AI models outperforming humans in some tasks by multiple orders of magnitude.
So, there is enough early data to suggest massive transformation is taking place. What we don’t know, and what has now become the critical question, is: Where in this market will value accrue?
Over the last year, we’ve met with dozens of startup founders and operators in large companies who deal directly with generative AI. We’ve observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.
In other words, the companies creating the most value — i.e. training generative AI models and applying them in new apps — haven’t captured most of it.
‘Twas ever thus, right? Especially since — ***cough*** rentiers ***cough*** — it’s only the model providers who have the faintest hope of damming the enormous steaming load of bullshit that AI is about to discharge upon us. Consider a list of professions that are proposed for replacement by AI. In no particular order: visual artists (via theft); authors (including authors of scientific papers); doctors; lawyers; teachers; negotiators; nuclear war planners; investment advisors; and fraudsters. Oh, and reporters.
That’s a pretty good listing of the professional fraction of the PMC (oddly, venture capital firms themselves don’t seem to make the list. Or managers. Or owners). Now, I’m actually not going to caveat that “human judgment will always be needed,” or “AI will just augment what we do,” etc., etc., first because we live on the stupidest timeline, and second — not unrelatedly — because we live under capitalism. Consider the triumph of bullshit over the truth in the following vignette:
But, you say, “Surely the humans will check.” Well, no. No, they won’t. Take, for example, a rookie reporter who reports to an editor who reports to a publisher, who has the interests of “the shareholders” (or private equity) top of mind. StoryBot™ extrudes a stream of words, much like a teletype machine used to do, and mails the product to the reporter. The “reporter” hears a chime, opens his mail (or Slack, or Discord, or whatever), skims the text for gross mistakes, like the product ending in mid-sentence, or mutating into gibberish, and settles down to read. The editor walks over. “What are you doing?” “Reading it. Checking for errors.” “The algo took care of that. Press Send.” Which the reporter does. Because the reporter works for the editor, and the editor works for the publisher, and the publisher wants his bonus, and that only happens if the owners are happy about headcount being reduced. “They wouldn’t.” Of course they would! Don’t you believe the ownership will do literally anything for money?
Honestly, the wild enthusiasm for ChatGPT by the P’s of the PMC amazes me. Don’t they see that — if AI “works” as described in the above parable — they’re participating gleefully in their own destruction as a class? I can only think that each one of them believes that they — the individual, special one — will be the one to do the quality assurance for the AI. But see above. There won’t be any. “We don’t have a budget for that.” It’s a forlorn hope. Because the rents all credentialed humans are collecting will be skimmed off and diverted to, well, squillionaires, to get us off-planet and send us to bunkers on Mars!
Getting humankind off-planet is, no doubt, what Microsoft has in mind. From “Microsoft and OpenAI extend partnership”:
Today, we are announcing the third phase of our long-term partnership with OpenAI [maker of ChatGPT] through a multiyear, multibillion dollar investment to accelerate AI breakthroughs to ensure these benefits are broadly shared with the world.
Importantly:
Microsoft will deploy OpenAI’s models across our consumer and enterprise products and introduce new categories of digital experiences built on OpenAI’s technology. This includes Microsoft’s Azure OpenAI Service, which empowers developers to build cutting-edge AI applications through direct access to OpenAI models backed by Azure’s trusted, enterprise-grade capabilities and AI-optimized infrastructure and tools.
Awesome. Microsoft Office will have a built-in bullshit generator. That’s bad enough, but wait until Microsoft Excel gets one, and the finance people get hold of it!
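For concreteness, here is roughly what “direct access to OpenAI models” through Azure looks like from the developer’s side. This is a minimal sketch assuming the current openai Python package (v1 or later); the endpoint, API key, and deployment name are placeholders, not real values:

```python
# Minimal sketch of calling an OpenAI model through Azure OpenAI Service.
# The endpoint, key, and deployment name below are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-KEY",        # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT",   # the Azure deployment name, not a raw model ID
    messages=[{
        "role": "user",
        "content": "35f no pmh, p/w cp which is pleuritic. She takes OCPs. "
                   "What's the most likely diagnosis?",
    }],
)
print(response.choices[0].message.content)  # fluent text; no warranty of truth
```

The point is the low friction: a dozen lines of glue, and the generator is wired into whatever product sits on top.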
The above vignette describes the end state of a process the prolific Cory Doctorow calls “enshittification,” described as follows. OpenAI is a platform:
Here is how platforms die: first, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die…. This is enshittification: surpluses are first directed to users; then, once they’re locked in, surpluses go to suppliers; then once they’re locked in, the surplus is handed to shareholders and the platform becomes a useless pile of shit. From mobile app stores to Steam, from Facebook to Twitter, this is the enshittification lifecycle.
With OpenAI, we’re clearly in the first phase of enshittification. I wonder how long it will take for the process to play out.
Conclusion
I have classified AI under “The Bezzle,” like Crypto, NFTs, Uber, and many other Silicon Valley-driven frauds and scams. Here is the definition of a bezzle, from once-famed economist John Kenneth Galbraith:
Alone among the various forms of larceny [embezzlement] has a time parameter. Weeks, months or years may elapse between the commission of the crime and its discovery. (This is a period, incidentally, when the embezzler has his gain and the man who has been embezzled, oddly enough, feels no loss. There is a net increase in psychic wealth.) At any given time there exists an inventory of undiscovered embezzlement in—or more precisely not in—the country’s business and banks.
Certain periods, Galbraith further noted, are conducive to the creation of bezzle, and at particular times this inflated sense of value is more likely to be unleashed, giving it a systematic quality:
This inventory—it should perhaps be called the bezzle—amounts at any moment to many millions of dollars. It also varies in size with the business cycle. In good times, people are relaxed, trusting, and money is plentiful. But even though money is plentiful, there are always many people who need more. Under these circumstances, the rate of embezzlement grows, the rate of discovery falls off, and the bezzle increases rapidly. In depression, all this is reversed. Money is watched with a narrow, suspicious eye. The man who handles it is assumed to be dishonest until he proves himself otherwise. Audits are penetrating and meticulous. Commercial morality is enormously improved. The bezzle shrinks.
I would argue that the third stage of Doctorow’s enshittification is when The Bezzle shrinks, at least for platforms.
Galbraith recognized, in other words, that there could be a temporary difference between the actual economic value of a portfolio of assets and its reported market value, especially during periods of irrational exuberance.
Unfortunately, the bezzle is temporary, Galbraith goes on to observe, and at some point, investors realize that they have been conned and thus are less wealthy than they had assumed. When this happens, perceived wealth decreases until it once again approximates real wealth. The effect of the bezzle, then, is to push total recorded wealth up temporarily before knocking it down to or below its original level. The bezzle collectively feels great at first and can set off higher-than-usual spending until reality sets in, after which it feels terrible and can cause spending to crash.
But suppose the enshittified Bezzle is — as AI will be — embedded in silicon? What then?
NOTES
[1] Caveats: I am lumping all AI research under the heading of “AI as conceptualized and emitted by the Silicon Valley hype machine, exemplified by ChatGPT.” I have no doubt that a less hype-inducing field, “machine learning,” is doing some good in the world, much as taxis did before Uber came along.
[2] When you think about it, how would an AI have a “concern for the truth”? The answer is clear: It can’t. Machines can’t. Only humans can. Consider even strong-form AI, as described by William Gibson in Neuromancer. Hacker-on-a-chip the Dixie Flatline speaks; “Case” is the protagonist:
“Autonomy, that’s the bugaboo, where your AI’s are concerned. My guess, Case, you’re going in there to cut the hard-wired shackles that keep this baby from getting any smarter. And I can’t see how you’d distinguish, say, between a move the parent company [owner] makes, and some move the AI makes on its own, so that’s maybe where the confusion comes in.” Again the non-laugh. “See, those things, they can work real hard, buy themselves time to write cookbooks or whatever, but the minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead.”
A way to paraphrase Gibson is to argue that any human/AI relation, even, as here, in strong-form AI, should, must, and will be that between master and slave (a relation that the elites driving the AI Bezzle are naturally quite happy with, since they seem to think the Confederacy got a lot of stuff right). And that relation isn’t necessarily one where “concern for the truth” is uppermost in anyone’s “mind.”