THE DREAM OF AGI AND THE FULLY AUTOMATED ORGANIZATION
In sum, OpenAI was likely conceived as a strategic hedge by Elon Musk and Sam Altman against Google’s mounting claim to commercial AI dominance. It was positioned from the start to communicate its altruistic mission contra the for-profit objectives of the tech giants of the day, despite being backed and staffed by figures from the largest Silicon Valley firms and venture capital outfits. By invoking “AGI” to elevate its mission beyond the stale concern of generations-old “AI,” it cultivated a cultural mystique that doubled as a marketing device, one that could be wielded to reliably generate press, bolster the recruitment of top researchers, and attract investment at crucial corporate junctures.
Thanks to the investment climate and the zero-percent-interest-rate era, there was little concern over a business model being in place as OpenAI’s leadership transitioned the organization into a for-profit—the importance of a strong narrative and the involvement of central Valley figures eclipsed the need for a plan to generate revenue. A precedent had been set by Uber and other “unicorns”: investors and consumers would tolerate long-term efforts to scale and to capture markets. OpenAI was in command of easily the most compelling story about AI, and thus when its product ChatGPT exploded in popularity, it was well poised to capitalize on its recent legacy of presenting itself as a steward of AGI, which the company was itself allegedly developing.
Early competitors like Anthropic would have to compete on grounds of “safety,” but this mattered less as the products reached cultural saturation and market interest became obvious in the generative AI product category, especially from enterprise clients. Now, with widespread industry dedication and investment, and OpenAI successfully at the center of the generative AI economy, a rush to discern what generative AI is actually capable of in a business context unfolded—and continues to unfold even now.
It appears certain that ChatGPT’s rapid, whiplash success was something of a surprise to both the company and its partners. It was not part of a meticulous product roadmap—in fact, OpenAI board member Helen Toner has revealed that the board was not even informed in advance about ChatGPT’s release; they found out, to their chagrin, on Twitter along with everyone else.77 The app quickly notched 100 million users, causing industry analysts at banks and investment firms to swoon. “In 20 years following the internet space, we cannot recall a faster ramp in a consumer internet app,” UBS analysts wrote in a February 2023 report.78
To keep up with demand—especially in server space, given the taxing compute requirements of running AI systems—OpenAI and Microsoft expanded their partnership, with the software giant pledging an additional $10 billion in exchange for the right to use OpenAI technology in its platforms and software offerings, and a cut of OpenAI’s sales.79 (To those keeping track at home, that brought the total to around $13 billion, given past investments and pledges.)
The details of this partnership have not been made public, but much of its value is known to take the form of compute credits for Azure, Microsoft’s cloud compute service.80 It also requires OpenAI to use Azure as its compute provider. OpenAI’s embrace of Microsoft—and vice versa—as it moved away from being a research organization aimed at “protecting the world from AI,” per the early Altman-Musk iteration, signals the startup’s ambition to take a more capital-intensive approach than Musk was able to offer. There are many reasons why Microsoft might find the deal attractive, and why it is ideally structured to tolerate a startup with no immediate or obvious road to profitability. First, since Microsoft’s earliest investments, OpenAI has served as a hedge against Google and Facebook, each of which was outspending Microsoft on in-house AI development. Second, the structure of the deal (or what we know of it) ensures that Microsoft will profit from nearly any outcome. Microsoft benefits from using OpenAI’s technology in its products, and receives a cut of revenue from the products OpenAI sells as well—buffering any financial risk, which is already curbed because, as noted above, much of the deal is understood to take the form of compute credits. As such, as recently as the summer of 2024, Microsoft was able to publicly state that it was comfortable with Sam Altman’s vision of an incipient AI, and was not afraid to spend billions with no promise of profits in the near term—the deal’s structure and the dream of AGI are in ideal symbiosis.
In 2023, after the OpenAI-Microsoft deal was inked, interest in OpenAI, generative AI, and large language models truly exploded. Microsoft added GPT technology to Bing and Copilot; Google announced Bard, which later became Gemini; and Anthropic debuted Claude, its ChatGPT competitor, in March 2023.81 (In Q3 2024, half of all venture capital investment in the US went to AI, up from 15 percent in 2017.82)
OpenAI’s first major move to shape a business model was the obvious one: It opened a premium paid tier for ChatGPT called Plus, offering higher performance for fans and superusers of the app. OpenAI also started granting access to its API, allowing developers and companies to purchase API keys. Within months, the company began leaning into enterprise sales, recognizing the potential in touting an all-powerful technology that could also automate jobs. That spring, OpenAI employees coauthored a report with Cornell researchers about job impacts;83 while the paper was covered in the press as a warning, it had the effect, as many such impact papers do, of bolstering the automation technology’s job-replacing bona fides.
Throughout the rest of 2023, OpenAI floated other ideas for generating revenue, some with more conviction than others. It teased and eventually released an app store for GPT apps developed by independent programmers, as well as the AI video generator Sora. Altman nodded to the idea of selling ads against GPT output online. But the chief potential for revenue generation, and thus the core plank of its business model, remained the same: selling enterprise-tier GPTs and selling API access. Now we might see why the AGI mythology is so important to the formulation here: Managers can be assured that they are purchasing both the safest and the most powerful technology available as they seek to cut costs in a tight labor market.
This is similar to the way in which the previous era of tech unicorns—Uber and the gig economy startups—ultimately tried to attain profit by promising to reduce labor costs (in ridehail’s case, by disrupting the taxi and black cab economy) despite the buzzy rhetoric and lofty corporate mythologies. In this case, the talk of AGI supersedes discussions of OpenAI’s enterprise business in the press, making it a more palatable automation company. Reduce tasks, use as leverage, replace jobs, increase attrition.
Nearly two years into the generative AI boom, this still holds true: Enterprise clients, by the company’s own estimation, are the most valuable line of its business by a considerable margin. (OpenAI’s sales figures aren’t public, but many of the major contracts inked so far are.) The largest client of OpenAI is the consulting and financial services firm PwC, which has purchased 101,000 seats.84 The American biotech firm Moderna,85 and Klarna, a Swedish fintech startup, are among other leading users; OpenAI estimated 600,000 total enterprise clients as of the spring of 2024.86 By September, it reported one million.87 In June 2024, the company self-reported its annualized revenue as $3.4 billion.88
Yet OpenAI’s overhead is staggering, making it notably different from the “zero price” products that have otherwise dominated the modern tech industry. Last summer, the company’s compute costs were reported to be $700,000 a day;89 they are certainly much larger now. Sam Altman publicly said he needs $7 trillion in investment for chips to run what he had planned for his AI programs.90 Meanwhile, OpenAI’s workforce is large, expensive, and expanding. Research and development for LLMs requires vast investment, compute, and energy—especially as OpenAI pushes into video production. Licensing deals with media companies for training data have cost the company hundreds of millions of dollars.91 Multiple legal battles, against journalists and creative workers who allege OpenAI has used their work without consent or compensation, are ongoing. And competitors are eating into the company’s market share.
Taken together, we see a portrait of a company that wrapped itself in an altruistic narrative mythology to attract researchers, investment, and press. It stumbled into a hit app that opened a pathway to a new product category in commercial generative AI (something Silicon Valley had been pursuing unsuccessfully for years), ignited a gold rush, drew competitors, and wielded its unique legacy and relationship to AGI to differentiate itself. However, given that generative AI technology is so expensive to develop and run, a unique imperative to generate revenue—lots of revenue—in order to capitalize on its popularity, cultural cachet, and market opportunity has become the company’s dominant concern. (In the past, as mentioned earlier, OpenAI has stated that its move away from nonprofit status was necessitated by the need for more compute power if it were to make satisfactory progress in creating AGI. This can be seen instead as a move toward preparing for an era of commercial product releases, even if the company remained unprepared for the success of ChatGPT when it arrived.) This is how a company transforms from a nonprofit whose aim is to be “owned by all of humanity” and “free of the profit motive” to one whose board is purged of safety experts in favor of Larry Summers.

This frenzy to locate and craft a viable business model has had other consequences. Ongoing and largely unresolved issues regarding copyright threaten the foundation of the industry: If content currently used to train AI models is found to be subject to copyright claims, top VCs investing in AI like Marc Andreessen say it could destroy the nascent industry. Governments, citizens, and civil society advocates have had little time to prepare adequate policies for mitigating misinformation, AI biases, and economic disruptions caused by AI. Furthermore, the haphazard nature of the AI industry’s rise means that by all appearances, another tech bubble is being rapidly inflated.92 So much investment has flowed into the space so fast, based on the bona fides of just a handful of companies—especially OpenAI, Microsoft, Google, and Nvidia—that a crash could prove highly disruptive, and have a ripple effect far beyond Silicon Valley.
This is especially concerning because so much of the business, as we have seen, relies on large enterprise contracts and on using generative AI as automation software for creative labor. Given the unreliability, hallucinatory output, and security concerns still posed by even enterprise editions of the software, the long-term gambit of generative AI as efficiency-generating, cost-cutting, productivity-improving software is a deeply dubious one. Goldman Sachs and Sequoia Capital have both authored reports suggesting that generative AI may not be worth the current investment. Sequoia Capital partner David Cahn wrote that the generative AI industry would have to generate $600 billion in revenue annually to sustain the current rate of investment—a long way to go when the biggest company in the space is making $3.4 billion a year.93 Goldman Sachs was even more blunt, finding that there was simply too much spend and too little benefit to justify the technology in most corporate environments.94 The Wall Street Journal reported in July 2024 that firms had switched from discussing generative AI in terms of productivity gains—perhaps in part because those gains were dubious or hard to quantify—to analyzing its capacity for revenue generation.95 If that doesn’t pan out—if companies find generative AI is coming up short on revenue—it’s easy to see a mass dismissal of the technology among corporate firms after the trial periods expire and the hype wears off.
Given that generative AI has entered a new and crucial stage—with large client acquisitions and investor concerns colliding, critics’ voices growing, no new major technology advancements released for months, and a howling imperative to start making money—perhaps it’s no surprise that, once again, OpenAI turned to centering its AGI narrative. In July 2024, OpenAI announced a five-level system for evaluating its technologies on the road to human-level intelligence.96 According to Bloomberg, “the tiers, which OpenAI plans to share with investors and others outside the company, range from the kind of AI available today that can interact in conversational language with people (Level 1) to AI that can do the work of an organization (Level 5).”97 Notably, OpenAI is still at Level 2—its AI is a “reasoner” that can do “human-level problem solving.”
This positioning can be viewed as a resetting of expectations for corporate clients who might be getting itchy after a few months of watching enterprise GPT fail to deliver impressive results; a reminder that AGI is still coming and that the systems are improving all the time—so don’t cancel your contract with us just yet—and a framework through which the company can whet the appetite of future clients. (Note that Altman has had to recalibrate his AGI expectations before; in January 2024, he insisted that AGI is coming, but “will change the world much less than we think,”98 perhaps another effort to tamp down expectations set by, well, himself, just a few months earlier, as corporate clients began to express frustration with the slow gains made by enterprise AI.)
When AGI arrives, OpenAI promises, its AI won’t just be able to do a single worker’s job—it will be able to do all of their jobs. The generative AI on offer at OpenAI’s Level 5 is an “AI that can do the work of an organization.”99 And that, ultimately, is what executives and management are pursuing, and have been pursuing, since the Industrial Revolution, when early entrepreneurs dreamt up the first fully mechanized factories—completely automated operations producing output for the profit of the one at the helm, without the protests and inefficiencies of a human workforce. In fact, what AGI—artificial general intelligence that can carry out human-level tasks free of human inefficiencies—is to AI researchers, the fully automated factory is to C-suite executives and management consultancies: an ultimate ideal that may or may not be achievable, and yet serves as a powerful driver of interest and investment.
This, perhaps, is also why the apocalyptic nature of the hype around generative AI or a prospective AGI does not seem to have bothered many corporate clients, and why it has instead proven to be such an effective marketing tool. It is in many ways the same dream: a “highly autonomous system that outperforms humans at most economically valuable work.”100 As it turns out, Sam Altman didn’t even have to build his AGI to ask it how it might generate revenue—it was clear from the start.