Predictions are a mug’s game (why are mugs assumed to be bad at games? I don’t know, but it feels vaguely discriminatory. The same goes for shot glasses and backgammon; I’m sure a shot glass would play just as well as your average coffee mug). Still, I feel the headline is pretty accurate. Too much of what imitative AI is sold on is untested, unproven, and unrealized. So the question, for me, is why so many companies fall for the hype.
Boosters, and even people worried about the dangers of imitative AI, like to say things like, “This is the worst imitative AI will ever be.” The implication, of course, is that imitative AI will get significantly better in the future. The so-called scaling law of AI holds that the more data you feed a model, the better it performs. But it isn’t a law in any meaningful sense. Gravity is a law; try running off the edge of a cliff and see what happens. The scaling law is an assumption, and it may not hold.
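To make that concrete, here is a minimal sketch (in Python, with made-up constants, since the real fitted values vary by paper and model family) of what the scaling “law” actually is: an empirical power-law fit to past training runs, extrapolated forward.

```python
# A sketch of what the "scaling law" amounts to: an empirical power-law
# fit, not a physical law. Loss is observed to fall roughly as
#   L(N) ~ (N_c / N) ** alpha
# where N is dataset size and N_c, alpha are fitted constants.
# The values below are illustrative assumptions, not measurements.

def fitted_loss(n_tokens: float, n_c: float = 1e12, alpha: float = 0.08) -> float:
    """Loss predicted by an assumed power-law fit to past training runs."""
    return (n_c / n_tokens) ** alpha

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} tokens -> predicted loss {fitted_loss(n):.3f}")

# Each tenfold increase in data buys a smaller improvement than the last,
# and nothing in the fit guarantees the curve keeps bending the same way
# outside the range that was actually measured.
```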
The gap between ChatGPT 3 and ChatGPT 4 was shorter than the time that has now passed without a ChatGPT 5. Sora, despite a year of work, still can’t maintain consistency from scene to scene. This is not surprising given how these systems work.
Imitative AI has no model of the world. These systems therefore have no way of knowing what is reasonable and what is not, what is realistic and what is not. All they can do is calculate what should come next, based on what came next in their training data. That is no small technical achievement, but it is not intelligence in any meaningful sense. It also means these systems have a ceiling, a point at which the accuracy of those calculations levels off. It stands to reason that the returns from data will diminish: there are only so many ways to construct valid sentences in a given language, and at some point a system will be able to compute them successfully. Past that point, no additional data helps. Worse, the amount of data and computing power needed for each improvement keeps growing.
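If that description sounds abstract, a toy example may help. This is a deliberately crude illustration, not how any production model is built (real systems use transformers with billions of parameters, not a lookup table), but the underlying move is the same: emit whatever most often followed in training, with no check against reality.

```python
# A toy illustration of "predict what comes next based on what came next
# in the training data": a bigram frequency table. The text is invented
# for illustration, not drawn from any real model or corpus.
from collections import Counter, defaultdict

training_text = "the court cited smith v jones . the court cited doe v roe ."
tokens = training_text.split()

follows = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follows[current][nxt] += 1  # tally what followed each token in training

def predict_next(token: str) -> str:
    """Return the continuation seen most often in training."""
    return follows[token].most_common(1)[0][0]

print(predict_next("court"))  # -> "cited": plausible-sounding, verified against nothing
print(predict_next("cited"))  # -> "smith": a citation picked by frequency, not by whether it exists
```

The table can only ever reproduce patterns it has seen; ask it about something it never saw and it will still hand back the statistically likeliest continuation, which is exactly how confident fabrications happen.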
And of course, hallucination and bullshit are inherent to these systems. A system that cannot tell truth from fiction does not know it has made up a legal citation, because it has no grasp of the actual state of the law. Hallucinations can be reduced, but not eliminated. This is why so many chatbots, from tax agencies to government legal aid, lie to their users. Even in programming, a field with enormous amounts of freely available knowledge and plenty of repetitive tasks, these limitations constrain how useful imitative AI systems can be. Studies of their use have found reduced code quality, new security bugs, and little to none of the expected productivity gains. All of which highlights why this is such a bad business.
OpenAI isn’t making money; it’s losing billions of dollars. Sam Altman recently admitted that even at $200 per seat per month, OpenAI is still not profitable. And because there are no efficiencies to be found in training — improving a model means feeding it ever more data and computing power — there is no reason to believe that will change anytime soon. That’s before you get to the enormous externalities these systems create, like their environmental costs. Companies are already starting to back away from imitative AI because the productivity gains aren’t big enough to justify the cost. And to be clear, those prices are already too low for the providers to turn a profit.
So why are so many companies pushing this so hard? Because capitalism runs on vibes.
The companies building these systems need something to convince Wall Street that they can keep up the extraordinary growth they’ve maintained since the last dot-com bust. That probably isn’t possible; there are few, if any, markets left for technology to tap. This partly explains why we’ve been getting so much shit out of Silicon Valley lately. Cryptocurrency, the Metaverse, NFTs — all of Silicon Valley’s recent Next Big Things have been, to put it charitably, not that big. These companies are desperate for a major source of growth, a market they have not yet saturated. Imitative AI promises to replace human labor with computation, but so far it cannot do the work well enough to justify the expense.
So why would a non-tech company buy into this fantasy? Vibes again, I think. This is the flip side of the old “Nobody got fired for buying IBM” adage. Wall Street badly wants companies to grow and pressures them to do everything they can to cut labor costs. The promise of replacing humans with machines is every investor’s dream, and the pressure to chase it is immense. Managers are human too, subject to the same ambition and the same fear of failure. If every other CEO is crowing about this, maybe I should be too. After all, what if the people pushing imitative AI are right? What happens then? Could I lose my job?
Businesses are not inherently rational. They are driven by human emotion, not logic or evidence. The proponents of imitative AI keep doubling down because they need their promises to come true. The buyers of imitative AI act out of fear, responding to hype they have no way to evaluate. That is how we end up propping up businesses that will likely never turn a profit: not out of a rational belief in the benefits, but out of fear of missing out. Vibes over logic, and good money after bad.
Want to read more weird stuff like this? You can subscribe to my free newsletter.