The Creole Generation
When children invent a language their teachers can't speak, it rewrites the rules for everyone who follows.
Managua, 1980
In 1977, in the San Judas neighbourhood of Managua, Nicaragua opened its first public school for deaf children. There were about fifty students. By 1980, after the Sandinista revolution launched a national literacy crusade, a second vocational school had opened and enrolment had swelled to over four hundred.
The teachers tried to teach lip-reading and finger-spelling in Spanish, with little success. Most of the children had never been exposed to any formal language at all — they arrived with only the improvised home signs they’d developed with their families, each child’s system different from every other’s. The classrooms were a well-intentioned failure.
But in the playgrounds, a miracle was happening.
Thrown together on buses and in schoolyards, the children did what no curriculum had managed: they started communicating. They pooled their home signs, converged on shared conventions, and assembled a rough-and-ready system for getting meaning across. Linguists would later call this first stage a pidgin — Lenguaje de Signos Nicaragüense, or LSN. It was crude, but it worked.
Then something stranger happened. The younger children who arrived a few years later didn’t just learn the pidgin. They transformed it. They added verb agreement, spatial grammar, morphological inflection — the deep structural machinery that separates a true language from a set of useful gestures. By the mid-1980s, when the MIT-trained linguist Judy Kegl arrived to study what was happening, the younger cohort was signing something categorically different from what the older students had invented. The linguists called this new form Idioma de Señas de Nicaragua — ISN — and recognised it as a genuine creole: a fully formed language, spontaneously generated by children from the raw material of a pidgin.
The older students — the ones who had created the pidgin — could never fully acquire the creole. They continued to sign in LSN, functional but structurally simpler, while the younger children moved in a linguistic world their elders could participate in but never quite inhabit natively. Steven Pinker called it the only time in recorded history that scientists had watched a language being created from scratch. The critical finding wasn’t that children learn faster. It was that the children didn’t learn the pidgin — they changed it as they learned it, adding structure the original creators never conceived.
The adults weren’t stupid. They weren’t resistant. They simply arrived too late.
This pattern — pidgin to creole, each generation not just learning but transforming — isn’t unique to language. It shows up every time a genuinely new medium arrives. And the clearest case study, the one with the best-documented generational arc, is television.
The first television broadcasts were radio with a camera pointed at them. This isn’t a metaphor. The earliest TV shows were literally radio programmes performed in front of a lens. Texaco Star Theater, the show that drove more Americans to buy television sets than any other in the late 1940s, was a vaudeville variety act — a format perfected for live theatre, adapted for radio, and then transplanted wholesale onto a screen. The camera sat where the audience would have sat. It barely moved. The performers faced forward and projected to the back of the room, because that’s what performers did.
The producers, the network executives, the sponsors — they all came from radio. They thought in radio grammar. They understood audiences as listeners, scheduling as time-slots between news bulletins, and entertainment as something you consumed while doing something else. Television, to them, was radio that you could also see — an enhancement. A visual supplement to the real product, which was audio.
They were pidgin speakers who had taken the conventions of the old medium and mapped them onto the new one, because that is what humans do when confronted with something genuinely novel: they reach for the nearest familiar structure. It worked well enough; millions tuned in. But nobody yet understood that the medium had its own logic — its own grammar waiting to emerge.
The first crack appeared on September 26, 1960, when seventy million Americans watched John F. Kennedy and Richard Nixon debate. The audience split along the medium itself: those who listened on radio thought Nixon had won — he was more substantive, more detailed, more commanding in argument. Those who watched on television saw something entirely different: a tanned, composed young man next to a pale, sweating older one who kept glancing off-camera. The medium had asserted its own rules, and those rules had nothing to do with the quality of the argument. They had to do with the quality of the image.
This was the moment the pidgin began to creolise. Not because anyone decided to change the rules, but because a generation was growing up for whom television wasn’t a novelty but an environment — the water they swam in. And that generation, without being taught, began to intuit what the medium actually rewarded: presence over substance, narrative over argument, the emotional register over the factual one.
By 1980, the creolisation had found its master speaker. Ronald Reagan didn’t use television to communicate his ideas: he thought in television. His training wasn’t in policy or law but in Hollywood — he understood staging, timing, the way a camera reads warmth. When he told Gorbachev to “tear down this wall,” the words mattered less than the image: an old cowboy at a podium, squinting into the Berlin sun, speaking with the simple conviction of a film protagonist. His opponents — politicians who had come up through print and radio, who believed that policy papers and logical argument were the currency of politics — kept losing to him and couldn’t understand why. They were pidgin speakers debating a creole speaker, and the audience had already switched languages.
Then came MTV.
On August 1, 1981, at 12:01 a.m., a new cable channel launched with a single video clip. The song was “Video Killed the Radio Star” by the Buggles — a choice so on-the-nose it almost undermines itself. But the format that followed was something genuinely new. Music videos weren’t songs with pictures attached. They were a native art form of the medium — visual narrative fused with rhythm, editing driven by beat rather than plot, imagery that made no attempt at literal representation. You couldn’t listen to a music video on the radio. It didn’t work as radio. It was the first mass-market cultural form that was born televisual, that had no prior life in any other medium.
MTV rewired a generation. Not just how they consumed music, but how they processed information — in quick cuts, in montage, in the constant interleaving of image and sound and text. The world adapted: political campaigns, advertising, news. The CNN effect — 24-hour rolling news, which became the dominant information format during the Gulf War in 1991 — owed as much to MTV’s visual grammar as it did to satellite technology. By the time Bill Clinton played saxophone on Arsenio Hall’s show in 1992, the creolisation of television was complete. A presidential candidate understood that appearing on a late-night entertainment show was a political act, that the boundaries between news and entertainment and politics had dissolved — not because anyone had decided to dissolve them, but because the generation raised on television simply didn’t see them.
And then television ate itself. Reality TV — The Real World, Survivor, Big Brother, Gogglebox — was the medium becoming its own content, the camera turning inward, the audience watching people watch themselves being watched. It was the final meta-creole form: a genre that couldn’t be explained to someone from the radio age, that only made sense if you had spent a lifetime absorbing the grammar of television so deeply that you no longer noticed it as a grammar at all.
On New Year’s Eve 2025, MTV’s remaining music channels went dark. The last clip they played was “Video Killed the Radio Star.” Forty-four years, and the medium consumed its own origin story, then switched off. The generation that built it was already gone. The generation that grew up inside it had long since moved on — to YouTube, to TikTok, to formats as native to the internet as MTV was native to television, formats that make as little sense on a TV screen as a music video makes on a radio.
Marshall McLuhan saw all of this coming. “The medium is the message,” he wrote in 1964. He meant, I think, not that content doesn’t matter, but that the real effect of a new medium is not what it carries but what it does to the people who grow up inside it. The content of television — the sitcoms, the news, the advertisements — was less important than what television did to the nervous system, the attention span, the political instincts of the generation that absorbed it from birth.
The message of television — the deep thing it did to its creole speakers — was something like this: appearance is substance; narrative is argument; the emotional register is the real content. The pidgin generation, the adults of the 1950s, never fully absorbed this. They kept insisting that what mattered was what was said. The creole generation internalised it so completely that they couldn’t see it as a choice — it was simply how reality arrived.
So here we are.
If you’re reading this, you are almost certainly a pidgin speaker like me. We discovered AI as adults — some of us recently, some of us years ago, but all of us after our cognitive habits were already formed. We arrived at the new medium with a lifetime’s worth of conventions from the old ones, and we did exactly what the 1950s television producers did: we mapped our existing grammar onto the new substrate.
We use AI to write emails faster. To summarise documents. To debug code. To generate first drafts we then edit by hand. We treat it as a productivity tool — a very good one — in the same way that early television producers treated TV as a very good radio. The camera sits where the audience would have sat. It barely moves.
This is not a criticism; pidgin is useful. The older students in Managua communicated effectively in LSN. The producers of Texaco Star Theater entertained millions. Pidgin speakers are not failed creole speakers — they are the necessary first generation, the ones who build the raw material from which the creole will emerge. But they are not the ones who will build the creole. That part belongs to someone else.
The signs are already visible, if you know where to look. Watch a teenager interact with an AI. They don’t treat it as a search engine with better manners, which is what most adults do. They don’t carefully compose a prompt and wait for a result, the way we were trained to compose a query and wait for a web page. They talk to it. They argue with it. They loop back, contradict, redirect, play. Their interaction is more like a conversation than a command — and crucially, they don’t experience a sharp boundary between “thinking about what to ask” and “receiving an answer.” The thinking and the asking are fused. The dialogue is the cognitive process.
They are, in other words, beginning to creolise. And three dimensions of that creole are starting to come into focus — three ways in which AI-native cognition may differ from ours not in degree but in kind.
The first, and the one I want to explore most deeply, is that cognition becomes collaborative by default. The AI-native generation will not experience thinking as a solitary activity. The boundary between “my idea” and “the idea I developed in dialogue” will be as meaningless to them as the boundary between “news I read” and “news I watched on TV” is to someone born in 1965. They won’t agonise over it. They won’t even see it.
The second is that competence becomes access, not accumulation. The television generation learned that charisma could substitute for expertise — that a Reagan could outperform a policy wonk by understanding the medium. The AI-native generation will learn something more radical: that the ability to direct cognition matters more than the ability to store it. Memorisation will seem as quaint to them as handwriting a formal letter seems to us. Not because they are lazy, but because the skill has genuinely shifted — from knowing things to orchestrating the retrieval and synthesis of things. The librarian becomes the conductor.
The third is that the draft becomes the thought. Right now, we use AI to produce — to generate an output after we have already done the thinking. Write this email. Code this function. Summarise this paper. But AI-native thinkers will use it to think — the conversation with the model will be the cognitive act, not a tool applied after the cognitive act is complete. Just as television-native politicians didn’t “use TV to communicate their ideas” but rather thought in television, AI-native thinkers will think in dialogue.
Each of these is a creolisation — a transformation of the medium’s grammar by the generation that absorbs it from birth. And each will produce native forms we can barely anticipate.
Which raises the question that has been hovering over this entire piece: What is the MTV of AI?
What is the native form — the thing that makes no sense without the medium, that no pidgin speaker would have conceived, that will seem as natural to the creole generation as a music video seems to someone born in 1975? Maybe it already exists and we don’t recognise it, the way Ed Sullivan didn’t recognise that the Beatles’ appearance on his show was the beginning of the end of his format. Maybe it’s emerging right now in some teenager’s bedroom, in a conversation with a model that looks, to adult eyes, like nothing more than messing around.
The older students in Managua sometimes described the younger children’s signing as sloppy. Critics in the 1980s called MTV “the death of music.”
The pidgin speakers always mistake the creole for a degradation of what they built, but it never is.
Let me take the first of those three dimensions — cognition as collaboration — and push it further, because I think it contains something genuinely new. Not a prediction about productivity or labour markets, but a claim about the shape of thought itself.
Start with music: before musical notation existed, there was music — obviously. People sang, drummed, improvised, passed melodies from voice to voice across generations. Music was woven into every human culture we know of. But it was ephemeral. It was bounded by what a performer or a small group could hold in living memory. A song could be complex, could be beautiful, could move people to tears — but it could not be a symphony. Not because nobody was talented enough to improvise one, but because the cognitive load of coordinating a hundred musicians across forty minutes of structured composition exceeds what any oral tradition can sustain. The symphony is not a very long song. It is a form that requires notation to exist — that is native to the medium of written music the way a music video is native to television.
And notation didn't stop at enabling the symphony. It made possible counterpoint, the fugue, the sonata — entire architectures of musical thought that couldn't be conceived, let alone performed, without a way to hold the structure outside any single mind. It gave musicians the ability to share compositions, riff on them, cover them, argue with them across centuries. Composers could respond to each other across generations. Jazz musicians could take a standard and rebuild it in real time because the standard existed as a shared written reference. Notation created a collaborative substrate for musical thought that transcended any individual performer or generation.
And each subsequent medium for music did the same thing. Recording gave us the pop single — a form shaped by the constraint of the three-minute 78rpm disc, which literally couldn’t hold more than that. Radio gave us the format, the playlist, the DJ as curator. MTV gave us the music video as art form — visual narrative fused with sound in ways that made no sense as radio. Each new medium didn’t distribute the same music more efficiently. It changed what music was. It called into existence forms that couldn’t have existed without it.
“Video Killed the Radio Star” isn’t just a clever title. It’s a precise description of creolisation.
Can we apply this to thought?
AI may do to cognition what notation did to music. Not just record or accelerate thinking, but enable structural forms of thought that couldn’t exist without the medium. We can’t yet name those forms, for the same reason a medieval troubadour couldn’t have described a symphony — the concept requires the medium that enables it. But the generation that grows up thinking-in-dialogue will develop them, just as the generation that grew up with notation developed counterpoint.
The current generation — my generation — agonises over attribution. “Did I write this, or did the AI?” This is a pidgin question. It maps the conventions of the old medium (solitary authorship, individual credit, the romantic myth of the writer alone at a desk) onto the new one, and then panics when the map doesn’t fit. The creole speakers won’t ask this question. Not because they’re careless about authorship, but because the category boundary won’t make sense to them. Thinking-with-a-model will be as unremarkable as thinking-with-notation. We don’t ask whether Bach “really” composed the fugues or whether the technology of notation composed them for him. The question is ill-formed. The fugue is a form of thought-through-notation. It has no existence outside the medium.
I should be honest here, because the argument demands it.
This post was written in conversation with an AI. Not generated by one — the ideas, the structure, the editorial judgment about what to cut and what to keep, the voice, the choice to open with Managua rather than McLuhan — those are mine. But the drafting happened in dialogue. I talked through the argument; the model offered framings; I pushed back, redirected, took what worked and rewrote what didn’t. Someone who read an earlier draft told me I “write beautifully, better than the AI-generated slop so many people put out.” I told them it’s not not written with AI.
I think this is fine. More than fine — I think this is the point.
Nobody cares anymore if you google something. Nobody says “yes, but you didn’t find that in a library.” We use the tools available to us. Someone can take a chisel and make pebbles out of stone, or Michelangelo’s David. The chisel didn’t have a vision of David. Michelangelo didn’t have hands hard enough to shape marble. The work exists in the collaboration between intent and instrument, and the only meaningful question is whether the result is any good.
But notice my instinct to explain this. Notice the felt need to disclose, to justify, to preemptively defend the process. That instinct is the pidgin. A native speaker — a true creole speaker of AI — would no more think to disclose that they “used AI” in their thinking than you would disclose that you “used Google” in your research or “used notation” in your composition. The disclosure itself marks me as someone who remembers the old rules, who still feels their gravitational pull even as I move in the new medium. I am fluent. I am not native.
This distinction — between fluency and nativity — is not about skill. It’s about the topology of thought. Solo cognition has a particular shape: it’s sequential, it gets stuck in loops, it has blind spots that persist because there is no external surface to reflect off of. When you think alone, you tend to converge too quickly on your first good idea, because there is no counterpressure. Your assumptions remain invisible to you precisely because they are yours — they are the water you swim in.
Collaborative cognition with an AI has a different shape. It’s more exploratory, more branching, more willing to hold contradictions in parallel before resolving them. It generates more options and discards them faster. It catches assumptions earlier — not because the model is smarter, but because it is other, because it constitutes an external surface off which your thinking can reflect. The cognitive process has a different geometry.
If you want a physics metaphor — and at Soliton, we always strive for a physics metaphor — think of coupled oscillators. Two pendulums hanging from a shared beam don’t just swing faster than one pendulum. They exchange energy in ways that neither would exhibit alone. The system develops normal modes — patterns of oscillation that are properties of the coupled system and have no analogue in the isolated case. One normal mode has both pendulums swinging together; another has them swinging in opposition. These modes are not improvements on solo swinging. They are structurally different behaviours that only exist when the coupling is present.
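For readers who want the metaphor made precise, here is a minimal sketch: the textbook linearised model of two identical pendulums with natural frequency \(\omega_0\), joined by a coupling of strength \(\kappa\). (The symbols are conventional notation introduced for illustration, not anything specific to this essay.)

\[
\ddot{x}_1 = -\omega_0^2\, x_1 + \kappa\,(x_2 - x_1),
\qquad
\ddot{x}_2 = -\omega_0^2\, x_2 + \kappa\,(x_1 - x_2)
\]

Adding the two equations cancels the coupling; subtracting them doubles it. In the normal coordinates \(s = x_1 + x_2\) and \(a = x_1 - x_2\), the system decouples:

\[
\ddot{s} = -\omega_0^2\, s,
\qquad
\ddot{a} = -\left(\omega_0^2 + 2\kappa\right) a
\]

The symmetric mode, both pendulums swinging together, keeps the original frequency \(\omega_0\); the antisymmetric mode, swinging in opposition, oscillates at \(\sqrt{\omega_0^2 + 2\kappa}\), a frequency that exists nowhere in either pendulum alone. The new behaviour is not a faster version of the old one; it is a property of the coupling itself.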
AI-native cognition may exhibit normal modes that solo cognition simply doesn’t have. Not faster thinking. Not better thinking. Differently-shaped thinking — patterns of exploration, synthesis, and revision that only emerge when a mind is coupled to a model, just as counterpoint only emerges when a composer is coupled to notation.
We can approximate these modes. We can learn to use them, as I am doing in this very paragraph. But a generation that grows up never knowing uncoupled cognition will inhabit them the way we inhabit language — without effort, without self-consciousness, without the faint sensation of translating from one grammar into another that marks everything a pidgin speaker does.
What the Teachers Couldn’t Learn
There is a particular kind of loss in the Managua story that doesn’t get enough attention.
The older deaf students — the ones who invented LSN, who bootstrapped communication from nothing in the schoolyards of San Judas — were not left behind because they lacked intelligence or effort. They were the pioneers. They built the pidgin. Without them, there would have been no raw material for the younger children to transform. And yet the creole that emerged — the one with verb agreement and spatial grammar and recursive structure — remained just out of their reach. They could participate. They could communicate. But the deepest structures of the new language would always feel like a second skin rather than a first.
This is not a story about obsolescence. It is a story about the particular cruelty of generational thresholds — about the fact that the people who build the bridge are not always the ones who can cross it.
We are, right now, the bridge-builders. Those of us who discovered AI as adults, who marvelled at it, who taught ourselves to prompt and to build and to integrate it into our working lives — we are the older students in the schoolyard. We are the 1950s television producers who recognised that something extraordinary was happening and did our best to adapt. We are fluent enough. We will never be native.
The children who are growing up now — who will learn to think in dialogue with AI the way we learned to think in dialogue with text — will not simply use these tools more skilfully than we do. They will develop cognitive habits we cannot fully anticipate and may not fully understand. Their relationship to knowledge, to authorship, to the very boundary between “my thought” and “the thought I developed in conversation” will be as foreign to us as spatial verb agreement was to the first generation of Managua signers.
And like those first signers, we may find the new forms slightly messy. The older LSN speakers sometimes described the younger children’s signing as sloppy or chaotic — not recognising that what looked like mess was actually the emergence of grammatical structure more sophisticated than anything they had built. We will likely have the same reaction to AI-native cognition. When a twenty-year-old produces something brilliant in ways that seem to dissolve the boundary between human and machine thinking, our instinct will be to ask whether it’s “really theirs.” That question will mark us as pidgin speakers more clearly than anything else.
McLuhan told us the medium is the message. The message of television was that appearance is substance, that narrative is argument, that the emotional register is the real content. It took a generation to absorb that message so deeply it became invisible.
The message of AI is still crystallising. But I think it is something like this: cognition was never meant to be solitary. The model of the lone thinker — the genius in the garret, the scholar in the library, the mind working in magnificent isolation — was never a fundamental truth about human thought. It was an artefact of the available media. Before notation, music was bounded by what a single memory could hold; after it, we got the symphony. Before writing, argument was bounded by what a single voice could sustain; after it, we got the treatise. Each medium didn’t just record what was already there — it called into existence forms that couldn’t have existed without it. AI is the next such medium. And the generation that grows up with collaborative cognition as the default will develop forms of thought as far beyond our solitary reasoning as the symphony is beyond the campfire song — not better, not worse, but structurally unreachable from where we stand.
The children in Managua didn’t invent a better version of what their teachers had. They invented something structurally different — a language with properties that emerged from the specific constraints and affordances of their situation. The AI-native generation will do the same. Not better thinking. Different thinking.
We can watch it happen. We can even participate. But we should be honest about what we are: the last generation of pidgin speakers, marvelling at a creole we helped make possible and can never quite speak.
Soliton explores the collisions between mathematics, AI, and finance. If this landed, there's more where it came from.

