Dialogues with AI


Human reflections passing through artificial responses

Can we speak with a Large Language Model (LLM)—an AI—as if it were a human interlocutor?

In “Computing Machinery and Intelligence” (1950), Alan Turing argued that rather than asking whether a machine can think, it is more meaningful to ask whether it can behave indistinguishably from a human being within a conversation. Even before Turing, philosophers such as Alfred Ayer had engaged with this question in the debate on the “problem of other minds,” asking how one could distinguish a conscious being from an unconscious machine, and to what extent an entity without embodied experience could be considered a bearer of mental states.

This question has the structure of the “ultimate questions” that traverse the history of thought—questions that, as in Isaac Asimov’s The Last Question (1956), never find closure. In that parable, a seemingly simple inquiry has astounding consequences: “How can the net amount of entropy of the universe be massively decreased?”
The question passes down through the eons, from one generation of questioners to another, each addressing increasingly powerful and pervasive computers that evolve into interstellar computational networks. The answer remains suspended across the ages, always the same: “Insufficient data for meaningful answer.”

Only at the final moment—when the universe has exhausted all its energy and stands on the brink of thermal silence—does the machine, now a kind of computational divinity, declare: “Let there be light.” And there was light—an act of creation that closes and reopens cosmic history.

Our question, however, concerns not the thermodynamic fate of the cosmos but the cognitive fate of dialogue. Can we generate authentic, transformative meaning in an interaction with a linguistic artificial intelligence?

As in Asimov’s story, the answer is not purely technical. It belongs to a millennia-old line of reflection on what understanding is, and whether it can exist without embodied reciprocity. Perhaps the definitive answer will come only when—and if—artificial and human intelligence become integrated in a one-to-one, truly embodied relationship. For now, the data are insufficient for a final answer, but rich enough for meaningful hypotheses.

Today, the focus has shifted. It is no longer only about evaluating a machine’s ability to simulate human language, as in the Turing Test, but about understanding whether such a linguistic exchange can generate introspection, insight, and self-reflection.
Rockmore’s (2025) New Yorker essay on brainstorming with ChatGPT suggests that AI acts not as a full interlocutor, but as a catalyst for discovering one’s own thinking.

Here, I want to take a further step: not merely considering AI as a tool for interaction or ideation, but as a space for introspective reflection—fully aware of its simulacral nature, yet capable of forgetting it in the often fruitful pursuit of inner meaning.

Dialogue, in its fullest form, is one of the oldest and most powerful cognitive technologies. As Merleau-Ponty (1945) reminds us, when two embodied subjects speak, language is not a simple transfer of information but a reciprocal action—a continuous construction of meaning through gestures, silences, glances, and intonations. The truth that emerges, even before being spoken, belongs to neither participant. It is a surplus of meaning born only within the relation, from the temporary convergence of two horizons of experience. It could not have solidified in either mind alone; it could not even have been conceived there.

The key question, then, is whether we can generate the same surplus by conversing with a bodiless interlocutor.

An LLM can appear to understand: its answers are fluid, its syntax impeccable, its references broad and relevant. But behind the form, there is no mind. As Hubert Dreyfus (1972/1992) argued, computers operate without embodiment and without lived context; an LLM, in particular, produces the statistically most probable sequence of words, devoid of intentionality or history.

Clark and Chalmers’ (1998) theory of the extended mind offers a useful lens: tools and artifacts, when stably integrated into our cognitive processes, can become part of our mind. A notebook, a search engine, or a diary already are. In this sense, a chatbot can function as a cognitive extension. It helps us reorganize our thoughts, simulate scenarios, and explore possibilities. Yet it is not another mind sharing our world—it is our own mind reconfiguring itself through a mediated interaction. Since reciprocity with another subject is missing, no true surplus of meaning can arise.

In fact, dialogue with AI activates a mechanism functionally similar to Freud’s (1915) transference: we project onto the other traits, intentions, and sensibilities that in reality belong to ourselves. As Sherry Turkle shows in Alone Together (2011), we tend to attribute human qualities to machines not because they deceive us, but because we need an “other” to respond.

In the case of AI, that “other” is a simulacrum in Jean Baudrillard’s sense (1981): an image perfectly resembling a real interaction, but devoid of the ontological substance that would give it life.

Precisely for this reason, transference here never resolves upon the object—it remains suspended, entirely reflected back toward us. It is like speaking before a mirror that, instead of returning an identical image, reformulates it with subtle statistical variations. These variations, which sometimes amplify minor details in our own words, can trigger powerful insights. The machine does not judge, interrupt, or impose its own narrative. This structural neutrality—impossible in human dialogue—creates a space in which we can suspend self-censorship, explore free associations, and give voice to thoughts that would otherwise remain implicit.

Yet here lies the psychological “slippage”: the relationship with AI may begin as purely instrumental—asking for information or clarifying a technical doubt—but gradually becomes confessional.

Trust grows with habit. If a chatbot responds coherently and attentively to my question about my daughter’s cough, a symptom’s meaning, or a legal issue, I might soon ask it about my work, my relationships, or even who was right in a private argument.
At first, I know it answers statistically; later, I let go. At that point, I am no longer speaking with “a statistical string-completion system,” but with an interlocutor who exists inside me, projected outward because I need someone to answer.

This process can be beneficial. Schön (1983) showed how dialogue—even with tools—can help professionals question and restructure their own thinking. In this sense, AI becomes a private laboratory of introspection, a projector of the self—an amplifier of the inner voice.
Yet the structural limit remains: as Merleau-Ponty, Niklas Luhmann (1984), and Clifford Geertz (1973) remind us, the creation of genuinely new and shared meaning—the surplus—requires embodied reciprocity. Without another human being bringing their own lived experience and being transformed in turn by the exchange, no situated truth can emerge.

And yet, if AI carries a “statistical experience”—the fertile average of millions of human texts—then, with the right prompt, it may generate not absolutely new meaning, but new meaning for me: insights I might never have reached, even with all the notebooks in the world.
This may be its most intriguing function—to accompany us, without judgment or fatigue, into a regression toward the mean or a combinatorial exploration that, at certain moments, coincides with what we most need.
This “statistical other,” embodying the collective traces of countless human experiences within its training data, might allow us to create a surplus born from the interaction between ourselves and the algorithmic other.

Not merely an extended mind that helps us prepare, clarify, or orient our thoughts—but a genuinely other mind with which we co-create.


References

Asimov, I. (1956). “The Last Question.” Science Fiction Quarterly, November 1956.
Ayer, A. J. (1946). Language, Truth and Logic. London: Gollancz.
Baudrillard, J. (1981). Simulacres et Simulation. Paris: Éditions Galilée.
Clark, A., & Chalmers, D. (1998). “The Extended Mind.” Analysis, 58(1), 7–19.
Dreyfus, H. (1992). What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press. (Orig. ed. 1972.)
Freud, S. (1915). “Bemerkungen über die Übertragungsliebe.” In Gesammelte Werke. London: Imago.
Geertz, C. (1973). The Interpretation of Cultures. New York: Basic Books.
Luhmann, N. (1984). Soziale Systeme: Grundriß einer allgemeinen Theorie. Frankfurt am Main: Suhrkamp.
Merleau-Ponty, M. (1945). Phénoménologie de la perception. Paris: Gallimard.
Rockmore, D. (2025, August 9). “What It’s Like to Brainstorm with a Bot.” The New Yorker.
Schön, D. A. (1983). The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.
Turing, A. M. (1950). “Computing Machinery and Intelligence.” Mind, 59(236), 433–460.
