Back in 1971, Herbert Simon (Nobel ’78) published an essay on the “attention economy.” It famously noted that “a wealth of information creates a poverty of attention.” He offered insights about how economic organizations (and people) needed mechanisms to receive and process large amounts of information, and then to pass along only the relevant portion of that information. (Simon won the Nobel prize “for his pioneering research into the decision-making process within economic organizations.”)

Yaqub Chaudhary and Jonnie Penn suggest that artificial intelligence may shift the parameters of the tradeoff between information and attention, and might instead lead to what they call an “intention economy.” They describe this prospect in “Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models,” published December 30, 2024, in a Special Issue of the Harvard Data Science Review with papers on the theme “Grappling With the Generative AI Revolution.”

From the abstract:
The rapid proliferation of large language models (LLMs) invites the possibility of a new marketplace for behavioral and psychological data that signals intent. This brief article introduces some initial features of that emerging marketplace. We survey recent efforts by tech executives to position the capture, manipulation, and commodification of human intentionality as a lucrative parallel to—and viable extension of—the now-dominant attention economy, which has bent consumer, civic, and media norms around users’ finite attention spans since the 1990s. We call this follow-on the intention economy. We characterize it in two ways. First, as competition, initially, between established tech players armed with the infrastructural and data capacities needed to vie for first-mover advantage on a new frontier of persuasive technologies. Second, as a commodification of hitherto unreachable levels of explicit and implicit data that signal intent, namely those signals borne of combining (a) hyper-personalized manipulation via LLM-based sycophancy, ingratiation, and emotional infiltration and (b) increasingly detailed categorization of online activity elicited through natural language.
This new dimension of automated persuasion draws on the unique capabilities of LLMs and generative AI more broadly, which intervene not only on what users want, but also, to cite Williams, “what they want to want” (Williams, 2018, p. 122). We demonstrate through a close reading of recent technical and critical literature (including unpublished papers from ArXiv) that such tools are already being explored to elicit, infer, collect, record, understand, forecast, and ultimately manipulate, modulate, and commodify human plans and purposes, both mundane (e.g., selecting a hotel) and profound (e.g., selecting a political candidate).
I confess that I am only partially persuaded that the “intention economy” is fundamentally new and different from the “attention economy.” The classic book by Vance Packard, The Hidden Persuaders, about how our wants and desires can be and are manipulated by business, media, and politicians, was written back in 1957. As Chaudhary and Penn write: “At time of print, the intention economy is more aspiration than reality.” But here’s an example of what they have in mind:
[A] concrete example helps to illustrate how the intention economy, as a digital marketplace for commodified signals of ‘intent,’ would differ from our present-day attention economy. Today, advertisers can purchase access to users’ attention in the present (e.g., via real-time-bidding [RTB] networks like Google AdSense) or in the future (e.g., buying next month’s ad space on, say, a billboard or subway line). LLMs diversify these market forms by allowing advertisers to bid for access both in real time (e.g., ‘Have you thought about seeing Spiderman tonight?’) and against possible futures (e.g., ‘You mentioned feeling overworked, shall I book you that movie ticket we’d talked about?’). If you are reading these examples online, imagine that each was dynamically generated to match your personal behavioral traces, psychological profile, and contextual indicators. In an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, preferences for sycophancy, and so on, in concert with brokered bids, to maximize the likelihood of achieving a given aim (e.g., to sell a film ticket). Zuboff (2019) identifies this type of personal AI ‘assistant’ as the equivalent of a “market avatar” that steers conversation in the service of platforms, advertisers, businesses, and other third parties.
In short, imagine persuasive messages that are far more individualized, in several senses. These messages could be based on a considerably wider range of data about you: where you live and work, travel patterns, family status, past purchases, past internet searches, and the like. Your personal data could then be compared with the personal data of others to find statistically similar people. The messages you receive, based on how your personal data is categorized, would also be phrased in the language most likely to appeal to you, based both on how you have personally responded in the past and on how others who are statistically similar to you have responded. These messages could also be “dynamically adjusted,” meaning that instead of getting the same message over and over, you would receive an ever-changing series of messages.

Chaudhary and Penn recognize that some of this just sounds like better-targeted advertising, but they argue that there are “possibilities of intervening on—and commodifying—a higher order of user intentionality than that seen in the attention economy.” Perhaps the bottom line is that AI tools are already starting to provide back-and-forth interactions: sometimes in the series of advertisements you see, sometimes in the form of chatbots, and sometimes even in forms like providing medical advice or therapy. As these interactions multiply, it’s important to remember that AI is both a tool for you to use and also a tool for others to use in communicating with you. In neither case is the AI your friend, with nothing but your best interests at heart.