One Friday in March, as usual in the mornings, I was drinking my coffee in bed and reading my morning paper. My morning coffee is a double espresso, a 40-gram extraction out of 20 grams of light-roasted beans. Exceptionally, that morning, it was 47 grams. By "morning paper," I mean Substack. It feels like paper in more ways than one, since I'm reading it on an e-ink device.
Browsing through the notes, I stumbled upon a note from Melanie Mitchell, an AI researcher and professor at the Santa Fe Institute, the home of complexity science. She writes an excellent publication here on Substack called AI: A Guide for Thinking Humans. That note was not referring to her Substack posts but to an article she had written for the journal Science. It was about the reasoning claims of the new LLMs, now grown into LRMs (large reasoning models), and the risks of the false sense of trust they create.
I grabbed my phone, opened the article there, and copied the prompt she started the article with. Indeed, ChatGPT gave a wrong answer. Then I tried with "Reason" on, but there was no change. I made a screenshot and posted it on LinkedIn.
I thought this one wouldn't stand out in a torrent of similar posts. It would barely be noticed, and the feeds would flush it away.
I was wrong.
I started getting notifications on my phone. All from LinkedIn. While I have notifications from most apps disabled, for some reason, I kept those from LinkedIn enabled. It seemed that my casual post was getting an unusually high number of reactions, so I responded to a few comments and then turned the notifications off.
In a week, this offhand post got over 33 thousand impressions (38 thousand by the time of writing) and many reactions and comments.
I'm not popular on LinkedIn. I'm neither looking for nor offering employment. I used to enjoy it as a social medium, but it has seriously deteriorated in the last few years. It is now cringe-worthy. So, I don't post there as much as on other social media. At any rate, the majority of my posts never reach a thousand impressions. There was only one "viral" post lucky enough to get ten thousand impressions, by far more than any post before it. Yet that post, "viral" by my standards, took nine months to gather a third of the impressions the ChatGPT phone screenshot collected in nine days.
Here, I'm not interested in what makes a post viral. When it comes to reinforcing spread, network effects are well studied. And it's clear that in the attention economy, the valuable and the engaging rarely overlap.
What the LinkedIn response made me realize is that ChatGPT's response to my prompt was, in many ways, similar to LinkedIn's response to my post.
Both systems reacted to my prompt (post) as two non-human agents.
You might say that's incorrect. And incorrect for either of them. Indeed, there are now initial claims of Agentic AI, but that definitely doesn't apply to the current o1, o3, o4, or 4.5 models. So, for ChatGPT: non-human, yes; agentic, no. For LinkedIn, even if some comments are from bots (unlikely), most are from humans. Agentic, yes; non-human, no.
Indeed. And yet...
Bear with me.
Artificial, Intelligence, Agent
Now, quickly, let's get something out of the way: "Artificial Intelligence" is a misnomer. And so is "Agentic AI."
First, human intelligence is always already artificial. Becoming bipedal was beneficial for humans: enhanced food-gathering capabilities, energy efficiency, improved distance vision over grasslands, and better heat regulation. But above all, it freed the hands to make and use tools. Through interaction with tools, humans became intelligent.
Second, intelligence is a systemic property. It emerges through interactions. It cannot be associated with any individual entity, natural or artificial.
This view is unpopular and will remain so. The contemporary notion of human/artificial intelligence can be traced back to the origin and stabilization of the nature/culture and nature/nurture divides on the one hand, and to the more recent swapping of metaphors between computer science and cognitive science on the other.1
Now, there is a lot of talk about Agentic AI as the next step in the evolution of LLMs and LRMs. That's another impossibility, since agency requires autonomy, which needs operational closure.
I can expand on how "Artificial Intelligence" or "Agentic AI" are misnomers, but that would only be worth it if I thought it was a problem. It's not. Misuse of language is a feature, not a bug. We use, misuse, and abuse language and use it to argue about all four. So, "AI" should stay, since often, when we say "AI," we know what we are talking about, and AI, like every acronym, takes on a life of its own, unchained from its expansion.
But here's what I find interesting: while the current AI has no autonomy or agency and is not likely to get them (as long as it stays on the path of LLMs, RAG, LRMs, and suchlike), interacting with it creates a system which, if it doesn't have agency, has at least some autonomy. It behaves as if it has a mind of its own.
A Mind of Its Own
When we interact, an independent, self-sustained system emerges from our interactions, which appears to have intentions that may differ from ours.
We don't even need to speak to give rise to that.
A common situation is when you want to pass another person coming towards you in a narrow corridor. You take a step to the right, the other steps to the left, and you block each other. To fix that, you now go to the left, and almost at the same time, the other person moves to their right. That lasts until the synchronicity of this oscillation is broken. That coordination, which we culturally perceive as miscoordination, influences and is influenced by what you and the other person do. As long as it lasts, it has intentions that are different from yours or those of the other person.
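If it helps to see how such a deadlock sustains and dissolves itself, here is a toy sketch in Python. The switching probability is my assumption, purely illustrative:

```python
import random

# Toy model of the corridor dance. Each pedestrian picks a side in
# their own frame ("L" or "R"). Walking toward each other, they block
# each other exactly when their egocentric choices differ (your right
# is my left). A blocked pedestrian steps aside with high probability,
# so most rounds both switch together and the deadlock persists; it
# ends only when chance breaks the symmetry.

random.seed(7)
SWITCH_PROB = 0.9  # assumed propensity to step aside when blocked

a, b = "R", "L"  # the opening move from the story: right meets left
rounds = 0
while a != b:  # differing choices -> they block each other
    rounds += 1
    if random.random() < SWITCH_PROB:
        a = "L" if a == "R" else "R"
    if random.random() < SWITCH_PROB:
        b = "L" if b == "R" else "R"

print(f"deadlock broken after {rounds} round(s)")
```

The point of the sketch: the more eagerly both parties try to fix the blocking, the more reliably they reproduce it. The pattern feeds on the very corrections meant to end it.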
That system is transient. But there are self-sustained systems that last longer. And for some, we don't even need to interact with other people. One such thing is habits. Another is emotions. Yes, habits and emotions have a mind of their own.
Habits maintain the conditions of their own persistence. They "care" for their viability, which can even threaten the viability of their host: habits exist through the host's actions and influence those actions so as to maintain their own existence.2
Emotions and moods have similar self-sustaining dynamics through body and environment interactions.3 Anger leads to behaviors that reinforce it. A depressive mood (which, according to Steven Wright, is merely anger without enthusiasm) can do the same.
Another example of systems that exhibit autonomy is pathological states such as anxiety. Avoiding social situations out of anxiety can reinforce the fear, creating a self-perpetuating cycle.
Autonomous systems emerge out of interactions with other people, from our own actions and thoughts and when we use tools. But which tools?
Almost any, but to a different extent. With some, like when playing video games,4 it's more prominent. It can be observed when using tools for Personal Knowledge Management, especially those based on Personal Knowledge Graphs.5 And, of course, when interacting with AI chatbots.
All social systems are also such non-human systems: families, clubs, communities, companies, social networks, and societies.
But aren't social systems made up of humans and their interactions? Would they even exist without humans? Niklas Luhmann gave a negative answer to both questions.6 Social systems are non-human self-referential systems structurally coupled with humans, fully dependent on them for their existence but not for their behavior.
So we have all these human-enabled systems that are self-sustained and that influence the behavior of the humans structurally coupled with them, as if caring for their own viability.
Like the Lamp of Aladdin
These last two examples, AIs and social systems, somehow got linked in that episode with the "viral" post on LinkedIn. These two systems are, in a way, similar. The prompt irritated7 ChatGPT to generate a response. And the post, made from the screenshot of that response, irritated LinkedIn to generate a response.
Yes, the concrete comments were entered by humans, but they happened as a result of the engagement with that particular medium, its affordances, its algorithms, the way it produces its network effects, and the way comments referred to each other and to other communications. It was the response of LinkedIn. The same "prompt" generated a completely different response from X-Twitter and from Bluesky. That would be the case even if all my followers were the same in all three media.
That's not specific to the post itself, but the response was quick. Just as during the pandemic it became easier to see how interdependent things are (as they had been before, too) because effects spread fast, so with this post it became easier to see who the agent in the interaction was. Not LinkedIn as such, but the agency that emerges when people engage with the medium.
Social media and LLMs are like the lamp of Aladdin. When you rub the lamp of social media, you communicate with the genie that emerges from this interaction. When you rub a chatbot, the genAI produces a genie.
McLuhan famously said that the medium is the message. He was right. But something similar happens with this emergent agency. The medium is the agent. Not really, but it appears that way, since you never interact with the medium itself — only with what emerges when you use it. And because that is ephemeral, it gets reified as the medium.
McLuhan also wrote that the content of any medium is always another medium. Does it apply, by analogy, to the emergent agent we interact with? If we rub a medium and, like the lamp of Aladdin, a genie appears, will that genie turn into another medium which, when rubbed, makes yet another genie appear?
We don't know that, but we do know that media are capable of higher-order effects. We also know that the genie that comes out of the lamp can be a monster.
Coda: How do we learn today?
All learning is a social act.
Book
Yes, reading a book is a social act, too.8 And again, something emergent arises during that interaction, something that has some autonomy and that the reader, irritated by the text, co-creates. The co-creation diminishes when watching a movie, since the setting and the actors are already chosen and a big part of the interpretation is already done.
So, interacting with a book is still a practiced form of learning, with some claiming that it's in decline.
Writing
When we write, we enter into a loop. To paraphrase Karl Weick, how can we know what we think before seeing what we wrote? We start interacting with the text, and what comes next is influenced by our thoughts and by reading what we have already written. When we look at what we wrote (or drew), it is, as Ranulph Glanville put it, like listening in verbal communication. There is an ongoing interaction and a process of active learning. Writing is disciplined thinking, Luhmann wrote.
PKM
Another form is interacting with a personal knowledge management system. It was again Luhmann who, after some decades of building and using a lifelong one, observed that it gets a life of its own.9 Such tools can become reliable serendipity generators.
Combining PKM and writing in public gave birth to the genre of digital gardens. Currently, few people practice that form of learning, but it seems to be quite effective, judging not only by self-reports but by what some of these practitioners create.
AI
A new mode of learning, growing in popularity, is interacting with an LLM through a chatbot interface or an API. Recently equipped with some capabilities to search and to imitate reasoning, these tools attract more and more learners. The effectiveness of this mode heavily depends on the user's intelligence and/or affinity. Interacting with these tools can accelerate creative processes, but it can also reduce critical thinking, create addiction, and so on. And speaking of risks: LLMs, like social media, are easily infected by misinformation, whether unintentionally or via targeted propaganda.
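For the API variant, here is a minimal sketch of such an interaction, using the OpenAI Python SDK; the model name and the prompt are merely examples:

```python
from openai import OpenAI

# Minimal sketch of learning-by-prompting through an API.
# Assumes the OPENAI_API_KEY environment variable is set;
# the model name is an example, not a recommendation.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": "Explain operational closure in autopoietic systems, "
                       "with one everyday example.",
        },
    ],
)

print(response.choices[0].message.content)
```

Whether this counts as learning depends, again, on what the exchange that follows does to the learner, not on the call itself.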
Now, PKM and AI are often combined.10
Human
Learning from another human used to be the prevalent practice: apprenticeship. Then, of course, came sermons and lectures, and now we have all kinds of presentations. We also have mentoring and coaching, along with various forms of one-to-one interaction ranging from casual chats to deep collaborations.
But now you know that even when physically there are two of you, systemically, you are three.
Again, modes of learning can be combined. A potentially successful combination of humans and books is book clubs.
Social media
Social media has the remarkable ability to amplify both the positive and the negative. Unfortunately, the negative side hinders some people from appreciating that engaging on social media is a legitimate and potentially very effective way of learning. What can be learned from platforms like X-Twitter and LinkedIn is very different from what can be learned from, say, YouTube and TikTok, or from cozy-web places like Discord forums. Protocol-based networks like Mastodon and Bluesky, while in some ways similar to platform-based networks, bring new forms of learning due to their ethos and reduced power asymmetries. An interesting phenomenon is Substack, which combines blogging, social media, and some features of the cozy web.11
What is common between all these modes? It seems that learning is always a matter of interacting with an emergent agency.
But when and what do we learn?
Apart from the cases (mostly hypothetical) where all we get is something we already know, we always learn something in every mode of learning. Admittedly, to a different degree, but still, always. Of course, when the learning in some mode falls below its perceived opportunity cost, we reduce that mode in favor of more rewarding ones.
What do we learn? We learn about the topic of discourse. When a learning medium works this way, it is ready-to-hand, in Heideggerian terms.
When we don't learn about the topic of discourse, we can still learn about the interlocutor. As Heinz von Foerster said, "at the end of my lecture you may not know anything about [the subject of my lecture], but you will certainly know something about me." We can learn from the other or about the other. And that other need not be an interlocutor in the conventional sense but can be the medium — or the emergent agency this essay is about. When we learn about the medium through the medium itself, the medium becomes present-at-hand, in Heideggerian terms.
That post on LinkedIn was about AI. From what LinkedIn generated, I didn't learn anything new about AI, but I learned about LinkedIn. That may sound disappointing, but it shouldn't be. If the medium is the message, then this lesson is equally important. And if the medium is also an agent, even more so.
The interaction between computer science and cognitive science began when the brain was viewed as a computer. This metaphor emerged in the mid-20th century as computers were recognized as sophisticated systems capable of processing information. Cognitive science reciprocated this view by adopting the computer as a model for the brain, particularly evident in the computational theory of mind, which posits mental states as computational states and processes as computations. This paradigm dominated cognitive science, especially during the 1960s and 1970s, influencing both computationalist (symbolic processing) and connectionist (neural network) schools. However, this metaphor has encountered significant critique from enactive schools, which emphasize cognition as embodied action within an environment, and from complexity sciences, which focus on emergent, nonlinear dynamics.
For research supporting this claim, see, for example:
Egbert, M. D., & Barandiaran, X. E. (2014). Modeling habits as self-sustaining patterns of sensorimotor behavior. Frontiers in Human Neuroscience, 8. https://doi.org/10.3389/fnhum.2014.00590
Ramírez-Vizcaya, S., & Froese, T. (2020). Agents of Habit: Refining the Artificial Life Route to Artificial Intelligence. Artificial Life Conference Proceedings, 78–86. https://doi.org/10.1162/isal_a_00298
Colombetti, G. (2017). The Feeling Body. The MIT Press. https://mitpress.mit.edu/9780262533768/the-feeling-body/
Vahlo, J. (2017). An Enactive Account of the Autonomy of Videogame Gameplay. Game Studies, 17(1). https://gamestudies.org/1701/articles/vahlo
For more details, see the first two chapters of Personal Knowledge Graphs.
Worth noting that these answers were given in thousands of pages of rigorous argumentation. Luhmann took the concepts of autopoiesis and structural coupling developed by Maturana and Varela and adapted them to his systems theory. He distinguished three kinds of autopoietic systems: living, psychic and social. Social and psychic systems are structurally coupled. Psychic systems are reproduced by a network of connected thoughts. Social systems are reproduced by a network of communications. Social systems cannot exist without psychic systems, but their behaviors are determined by their internal structure, by the self-referential network of communications, and not by the psychic systems that are structurally coupled with them.
Here, irritates is used to suggest that the reaction is triggered by the intervention but not fully controlled by it. Autopoietic systems are operationally closed. Perturbations from the environment can provoke but not dictate changes within these systems. The internal structure of autopoietic systems shapes these changes. If you throw a stone, you can determine its trajectory, but not if you throw a living bird. If you kick a ball, its movement is determined by your kick, but if you kick a dog, the response is up to the dog.
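If a code analogy helps, here is a toy sketch of the two cases; the classes and the dog's moods are invented for illustration:

```python
import random

class Stone:
    # An open system: the response is fully determined by the input.
    def response_to(self, kick_strength: float) -> str:
        return f"flies {kick_strength * 2.0:.1f} meters"  # proportional to the kick

class Dog:
    # An operationally closed system: the same kick is only a
    # perturbation; the response is shaped by internal state.
    def __init__(self) -> None:
        self.mood = random.choice(["playful", "scared", "grumpy"])

    def response_to(self, kick_strength: float) -> str:
        if self.mood == "playful":
            return "comes back wagging its tail"
        if self.mood == "scared":
            return "runs away"
        return "bites"

print(Stone().response_to(3.0))  # same input, same output, every time
print(Dog().response_to(3.0))    # same input, the output is up to the dog
```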
Social domains are defined by a set of structural norms.
A social domain is thus a domain defined by the production of normatively constrained acts—which in many cases can perfectly well be performed by a single individual; although more often there will indeed be interactions between two or more individuals. Examples of these domains include reading a book, using a tool, walking in the street, reasoning with concepts and logical norms on a desert island, having a job interview, exercising illegal activity such as selling drug, riding one’s car, signing a contract, applying concepts, expressing an emotion, dancing a waltz, going to the doctor.
Steiner, P., & Stewart, J. (2009). From autonomy to heteronomy (and back): The enaction of social life. Phenomenology and the Cognitive Sciences, 8(4), 527–550. https://doi.org/10.1007/s11097-009-9139-1
As a result of extensive work with this technique a kind of secondary memory will arise, an alter ego with whom we can constantly communicate. It proves to be similar to our own memory in that it does not have a thoroughly constructed order of its entirety, not hierarchy, and most certainly no linear structure like a book. Just because of this, it gets its own life, independent of its author.
Luhmann, N. (1981). Kommunikation mit Zettelkästen. In H. Baier, H. M. Kepplinger, & K. Reumann (Eds.), Öffentliche Meinung und sozialer Wandel / Public Opinion and Social Change (pp. 222–228). VS Verlag für Sozialwissenschaften. https://doi.org/10.1007/978-3-322-87749-9_19
Translation by Manfred Kuehn.
For some examples, see how AI is used in Tana or the Live AI Assistant extension in Roam Research. I hesitate to include NotebookLM as an example here; while useful in itself, it is just an LLM with selected input.
Yet, being a platform with certain kinds of financial interests behind it, it is unlikely to remain for long what current creators like it for. It already shows the first signs of becoming something completely different. But for now, it's still a great place to learn, so let's enjoy it while it lasts.