
(Image courtesy of Shutterstock AI Generator)
CAMBRIDGE, England — Imagine scrolling through your social media feed when an AI assistant asks you: “Ready to book that beach vacation you’ve been thinking about?” The creepy part is not that it knows you have been feeling down, but that it predicted your desire for a beach vacation before you had consciously formed the thought. Some experts believe that in the not-too-distant future, this kind of consumer experience will come to be known as the “intention economy.”
A new paper by researchers at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge warns that large language models (LLMs) like ChatGPT will not only change the way we interact with technology, but also lay the foundations for a new marketplace: one in which intent becomes a commodity to be bought and sold.
“With enormous resources being spent to position AI assistants in every area of life, the question should arise: whose interests and objectives are these so-called assistants designed to serve?” said co-author Dr. Yaqub Chaudhary in a statement from the centre.
For decades, technology companies have profited from the so-called attention economy, where our glances and clicks are the currency. Social media platforms and websites compete for our limited attention, feeding us an endless stream of content and advertisements. But according to Chaudhary and co-author Dr. Jonnie Penn, we are witnessing the early signs of something potentially more invasive: an economic system that captures and trades our motives and plans as valuable data.
What is particularly concerning about this potential new economy is its intimate nature. “What people say during conversations, how they say it, and what inferences are made in real time as a result are far more intimate than mere records of online interactions,” explains Chaudhary.

We are already seeing early signs of this emerging market. Apple’s new Siri developer framework, App Intents, includes protocols for “predicting actions someone might take in the future” and for suggesting apps based on those predictions. OpenAI openly seeks “data that expresses human intent across any language, topic, or format.” Meanwhile, Meta has researched “Intentonomy,” developing datasets for understanding human intent.
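To make the idea of “protocols for predicting actions” more concrete, here is a minimal, hypothetical sketch of an App Intents declaration in Swift. The BookBeachTripIntent type, its title, and its destination parameter are invented for illustration; the real framework is far broader, and nothing in this sketch by itself implies the kind of intent trading the researchers warn about. Declaring an intent like this simply lets the operating system index an in-app action so Siri and Spotlight can surface it as a suggestion.

```swift
import AppIntents

// Hypothetical example: a travel app exposing a booking action to the system.
// Once declared, the system can index the action and suggest ("predict") it
// based on how and when the user tends to use the app.
struct BookBeachTripIntent: AppIntent {
    static var title: LocalizedStringResource = "Book a Beach Trip"
    static var description = IntentDescription("Starts a beach vacation booking.")

    // A parameter the user (or Siri) fills in when running the intent.
    @Parameter(title: "Destination")
    var destination: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // The app's own booking flow would run here.
        return .result(dialog: "Starting a booking for \(destination).")
    }
}
```

The point of the sketch is only to show the shape of the interface: the app names an action and its parameters, and the operating system decides when to propose it to the user.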
Consider Meta’s AI system CICERO, which achieved human-level performance in the strategy game Diplomacy by predicting players’ intentions and engaging in persuasive dialogue. Although currently limited to a game, the work shows the potential for AI systems to understand and influence human intent through natural conversation.
Leading technology companies are positioning themselves for this potential future. Microsoft is partnering with OpenAI on what researchers describe as “the largest infrastructure build in human history,” investing more than $50 billion annually starting in 2024. Researchers suggest that future AI assistants could provide unprecedented access to psychological and behavioral data, often collected through casual conversations.
Unless regulated, the researchers say, this evolving intention economy “will treat human motivation as a new currency,” creating a “gold rush for those who target, guide, and sell human intentions.” This is not just about selling products; it could affect democracy itself, influencing everything from consumer choices to voting behavior.
The reach of the intention economy could extend far beyond vacation planning and shopping habits. The researchers argue that before we fall victim to unintended consequences, we need to consider the possible impact on human aspirations such as free and fair elections, a free press, and fair market competition.
Perhaps the most unsettling aspect of the intention economy is not its ability to predict our choices, but its potential to subtly guide them. As AI assistants become more sophisticated at anticipating our needs, we must ask ourselves: in a world where our intentions have become commodities, how many of our choices are truly our own?
Paper summary
Methodology
The researchers conducted a comprehensive analysis of company announcements, technical literature, and recent research on large language models to identify patterns that suggest the development of an intention economy. They examined statements from key figures in the technology industry, analyzed research papers (including preprints on arXiv), and studied the technical capabilities of systems such as Meta’s CICERO and various LLM applications.
Results
The study found clear evidence that major technology companies are positioning themselves to capture and monetize user intent through LLMs. It identified specific technological developments that could enable this shift, including improvements in natural language processing, psychological profiling capabilities, and infrastructure investments. The study also found that companies are already developing tools to circumvent traditional privacy protections.
Limitations
The researchers acknowledge that many of their observations are based on emerging trends and company statements rather than long-term empirical data. Additionally, some of the research they cite has not yet completed peer review. The full impact of these technologies remains a matter of speculation.
Discussion and key points
This paper argues that the intention economy represents an important evolution beyond the attention economy and could have far-reaching implications for privacy, autonomy, and democracy. The researchers emphasize the need for continued academic, civic, and regulatory oversight of these developments. They particularly highlight the risks of large-scale personalized persuasion and the potential for manipulation of democratic processes.
Funding and disclosure
The research was carried out at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge. The authors declared no conflicts of interest, financial or non-financial.