It is increasingly common to find articles about the potential of Artificial Intelligence to predict our intentions, even before we are aware of them. This concept, the "intention economy", suggests a future where our desires are a valuable product, ready to be anticipated and monetized. The idea, fascinating from a technological standpoint, leaves me with a deep feeling of unease.
The premise is simple: AI agents, fed by the vast amounts of data we generate in our digital interactions, learn to recognize patterns in our behavior. They analyze our search histories, our online purchases, our interactions on social networks, even our location and the time we spend on each activity. Over time, this massive analysis allows AI to anticipate our next moves and our incipient desires, even before they consciously crystallize in our minds.
Imagine the scenario: you're browsing the web, reading an article about gardening. You haven't thought about buying anything yet, but the AI, based on your history of plant-related searches, the time you've spent reading the article, and the time of year, deduces that you're about to want seeds or gardening tools. Suddenly, personalized ads start appearing, offering exactly what you were subconsciously starting to want. That is the intention economy in action.
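To make the scenario above concrete, here is a deliberately simplified sketch of how a handful of browsing signals could be combined into a "purchase intent" score. Everything here is hypothetical: the feature names, the hand-picked weights, and the logistic formula are illustrative choices, not a description of any real ad platform, which would use far richer data and learned models.

```python
import math

def intent_score(plant_searches: int, seconds_on_article: float,
                 is_planting_season: bool) -> float:
    """Combine a few hypothetical browsing signals into a 0-1 intent score.

    The weights below are invented purely for illustration; a real system
    would learn them from large-scale behavioral data.
    """
    z = (0.3 * plant_searches            # how often the user searched for plants
         + 0.01 * seconds_on_article     # dwell time on the gardening article
         + (1.5 if is_planting_season else 0.0)  # seasonal boost
         - 3.0)                          # bias term: default is "no intent"
    return 1.0 / (1.0 + math.exp(-z))    # logistic squashing into (0, 1)

# Five plant searches, four minutes on the article, in spring:
# the score crosses a plausible ad-trigger threshold.
score = intent_score(5, 240, True)
```

The point of the toy model is not its accuracy but its cheapness: even a crude weighted sum of passively collected signals yields a monetizable guess about a desire the user has not yet articulated.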
The economic value of this predictive ability is undeniable. Any company would be willing to pay to know the desires of its potential customers before the competition. Anticipating demand, offering the right product at the right time, is the holy grail of marketing. And AI, with its massive analysis capacity, seems to be able to make this dream a reality.

The term "intentonomy," coined some time ago by a senior executive at Meta, perfectly illustrates this trend. Intent, our not-yet-formulated desire, becomes the new currency of a digital marketplace in which we, the users, are both the product and the consumer.
However, this vision of the future makes me deeply uncomfortable. The idea that an external entity, however sophisticated, can predict and, to some extent, manipulate my desires is disturbing to me. Where is free will in a world where our intentions are anticipated and commercially exploited?
Beyond the commercial implications, the intention economy raises serious ethical and privacy questions. Who controls this data? How is it used? What guarantees exist to avoid manipulation and algorithmic bias? The possibility that this information falls into the wrong hands, or is used for unintended purposes, is a scenario that we cannot ignore.
The concentration of power in the hands of big tech companies, which control the data and the algorithms, is another worrying aspect. These companies, with their ability to predict and model human behavior, acquire disproportionate power over our decisions. A power that, if not properly regulated, can have unforeseeable consequences for society.
The intention economy is still an emerging reality, but its transformative potential is undeniable. It is essential that, as a society, we reflect on its implications and set the necessary limits to ensure that AI is used ethically and responsibly, protecting our privacy and our right to decide freely.
Otherwise, we risk becoming mere passive consumers, our intentions mapped and monetized by opaque algorithms that escape our control. The future of our autonomy and our individual freedom is at stake.
Will anyone set limits on this? I have my doubts.