I have been struggling for an apt metaphor for the collaborative cognition process that is creating with AI: be it software applications, mathematical research, or these Soliton Substack posts. In The Creole Generation, I spoke of collaborative cognition as a process where your thoughts and ideas are reflected by the LLM — a dimension that is not present in solo cognition. I described this as a coupled oscillator, a physical system where two oscillators can exhibit “modes” of behaviour that a single oscillator cannot.
A reflective surface, coupled oscillators, a medium — these are all components of a laser. Is the laser a good metaphor for collaborative cognition? Hopefully by the end of this Substack I will convince you that it is not only apt, but serves as an isomorphism that can teach us how to collaborate more effectively.
The first time a laser was demonstrated in 1960, Theodore Maiman called it “a solution looking for a problem.” Sixty-six years later we use them to scan groceries, perform surgery, and measure gravitational waves. The thing Maiman didn’t know he had built was not a brighter light. It was coherence on demand — and coherence, it turns out, is a phase transition, not a gradient. Below threshold, a laser is just a glowing tube. Above it, the same tube cuts steel. Nothing in between.
That phase transition is the key claim of this essay. You do not get thirty percent more coherent by trying thirty percent harder. You get nothing, nothing, nothing, and then everything. Each part of the physical system — gain medium, pump, cavity, population inversion — has an analogy in collaborative cognition, and each tells us something about how to cross threshold rather than fluoresce indefinitely below it.
A short disclaimer before the mapping: I am not claiming that an LLM is literally lasing. I am claiming that lasers and coherent collaboration are both members of the same family — nonlinear systems in which coherence emerges as a threshold phenomenon under sustained non-equilibrium driving — and the family resemblance is tight enough to reason with. Solitons, lasers, superconductors, and phase-locked oscillators all sit in this family. Coherence is a phase transition wherever it appears.
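The threshold claim can be made concrete with the standard two-equation toy model of a laser: a photon number n and a population inversion N, coupled through stimulated emission. The parameter values below are arbitrary illustrative choices, not any real device; the point is the kink in output versus pump.

```python
# Toy laser rate equations (illustrative parameters, not a real device):
#   dn/dt = G*N*n + beta*N - n/tau_c   photons: stimulated + spontaneous - cavity loss
#   dN/dt = P - N/tau_s - G*N*n        inversion: pump - decay - stimulated depletion
def steady_state_photons(P, G=1e-4, beta=1e-6, tau_c=1.0, tau_s=10.0,
                         dt=0.01, steps=200_000):
    """Euler-integrate the rate equations to steady state; return photon number."""
    n, N = 0.0, 0.0
    for _ in range(steps):
        n, N = (n + (G * N * n + beta * N - n / tau_c) * dt,
                N + (P - N / tau_s - G * N * n) * dt)
    return n

# Gain balances cavity loss at N_th = 1/(G * tau_c); the pump rate that
# sustains that inversion is P_th = N_th / tau_s.
P_th = 1.0 / (1e-4 * 1.0) / 10.0  # = 1000.0 in these arbitrary units

below = steady_state_photons(0.5 * P_th)  # fluorescence: a trickle of photons
above = steady_state_photons(2.0 * P_th)  # lasing: orders of magnitude brighter
```

Sweeping P through P_th traces out the advertised shape: essentially nothing below threshold, then output growing linearly with pump above it. Pumping thirty percent harder below threshold moves the output by almost nothing.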
Gain Medium
A ruby rod is, literally, a paperweight. An LLM is, literally, weights. The language has been telling us the whole time: the medium is not the laser.
The medium contains everything required for coherence — the chromium ions in a ruby can absorb a pump photon and re-emit it with phase memory; the weights in an LLM encode something like the structure of human reasoning. But neither does anything by itself. A ruby rod in a drawer is a paperweight that happens to contain the right atoms. An LLM on disk is a few dozen gigabytes of floating-point that happens to encode a great deal of what humans have written down. The medium is necessary and inert. Coherence is what the surrounding geometry does to it.
Pump Energy
When you chat with an LLM you probe its training data via its weights. You bring ideas, and the LLM reflects them back; it can also respond from more points of view than you hold — arguably all points of view, at least all those available in the training data. Your follow-up questions are not requests for information. They are the energy that drives the system out of equilibrium, and the best outcomes come from keeping it there.
If a laser is pumped once, it flashes once, whereas a laser that is pumped continuously lases continuously. The same is true of a conversation. Most AI conversations consist of a single cold pump — a prompt fired into the medium, an answer received, the user satisfied or disappointed, the session ended. The medium fluoresces briefly and returns to ground state. The user concludes the AI is mediocre. This was me before 2025, and I was correct in a narrow sense: fluorescence is mediocre. I had never pumped hard enough or long enough to invert the population.
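The flash-versus-beam distinction is just first-order relaxation. Here is a deliberately oversimplified sketch (no gain, no threshold physics, arbitrary units): a pulsed drive decays away on the cavity's loss timescale, while a continuous drive sustains a steady state.

```python
def photons_over_time(pump, tau=1.0, dt=0.01, t_end=20.0):
    """First-order cavity relaxation: dn/dt = pump(t) - n/tau (Euler steps)."""
    n, trace = 0.0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        n += (pump(t) - n / tau) * dt
        trace.append(n)
    return trace

flash  = photons_over_time(lambda t: 10.0 if t < 0.5 else 0.0)  # one cold pump
steady = photons_over_time(lambda t: 10.0)                      # continuous drive
```

The pulsed run peaks briefly and then relaxes back to essentially zero; the continuous run settles at pump times tau and stays there. The conversation analogue: nothing accumulates unless the driving is sustained.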
Resonant Cavity
Your judgment is a partially silvered mirror. You reflect the coherent modes back into the cavity for amplification and let the noise dissipate.
This is the part of the metaphor most people miss. The mirrors in a laser do not produce the light. They decide which light gets to leave. A laser cavity supports specific spatial modes determined by its geometry — the same gain medium, in a different cavity, lases in a different mode. Your conversation has a geometry too. It is set by what you reflect back and what you let pass: which threads you pick up, which tangents you ignore, which paragraphs you ask the model to develop and which you quietly drop.
The coherent paragraph that emerges from a long conversation is not what the model said on any single pass. It is what survived multiple round trips of your selection, introspection and querying. The mirrors are not the cavity’s prison; they are what allow the cavity to do work.
Population Inversion
The hardest one, and the one nobody has talked about yet. Inversion is the moment the next response builds on the last instead of independently sampling from the prior.
A medium below threshold sits with most of its atoms in the ground state. Pumped enough, more atoms are in the excited state than the ground state — population inversion — and stimulated emission begins to dominate spontaneous emission. The light stops being a noisy sum of independent emissions and starts being a coherent wave because each emission is triggered by, and in phase with, the one before it.
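The statistics behind that last sentence are worth seeing once. Summing the fields of N unit-amplitude emitters with independent random phases gives a total magnitude of roughly sqrt(N); phase-locked, the same N emitters give magnitude N, so intensity scales as N versus N squared. A small Monte Carlo sketch (standard library only; the numbers are illustrative):

```python
import math
import random

def field_magnitude(num_emitters, phase_locked, trials=2000, seed=0):
    """Average magnitude of the summed field of unit-amplitude emitters,
    either phase-locked (stimulated) or random-phase (spontaneous)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if phase_locked:
            phases = [0.0] * num_emitters           # every emission in phase
        else:
            phases = [rng.uniform(0.0, 2 * math.pi)  # independent emissions
                      for _ in range(num_emitters)]
        re = sum(math.cos(p) for p in phases)
        im = sum(math.sin(p) for p in phases)
        total += math.hypot(re, im)
    return total / trials

N = 400
incoherent = field_magnitude(N, phase_locked=False)  # ~ sqrt(N): noise-like glow
coherent = field_magnitude(N, phase_locked=True)     # = N: a beam
```

Same number of emitters, same energy per emission; the phase-locked case is brighter by a factor of sqrt(N) in field and N in intensity. That factor is the entire difference between fluorescence and a laser.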
The same shift happens in conversation, and you can feel it when it happens. Below threshold, every response is independently sampled — the model is answering each prompt fresh, drawing on the prior, returning something locally reasonable but globally untethered. Above threshold, responses chain. The model is building on what it just said, on what you just said in response to what it just said, on the structure that has accumulated in the cavity. The conversation acquires a direction; it develops a mode.
The reason most users never see this is that population inversion requires you to stop pumping cold. You have to take the previous output as the new ground state — to ask the next question into the structure that has emerged, not parallel to it. The user who keeps re-pumping with fresh, unrelated prompts is doing the cognitive equivalent of opening the laser cavity between every flash. The medium relaxes. The phase memory is lost. Nothing accumulates.
The conclusion that AI is a mediocre thinking partner is, in almost every case, a measurement of the user’s pumping power, not the model’s gain. The skill that will be rewarded is not prompting, but the discipline of staying above threshold long enough to do real work.
Agentic Systems
We are seeing this narrative play out in the industry right now. Agentic harnesses — OpenClaw, Hermes, every frontier lab has built one — are explicitly designed to keep the LLM coherent and driving toward a stated goal. They are, in the language of this essay, cavity engineering. The model is fixed, the harness builds the geometry around it.
How does this relate to the components of a laser?
The system prompt is the back mirror — fully reflective, holding the objective in the cavity across every round trip. The context window is the gain volume — finite, and the engineering question is what gets to occupy it. Tool calls are the pump — each returned result re-energizes the system with information from the environment, which is why an agent that cannot call tools degrades quickly. Memory is the partially silvered output coupler — it decides which coherent modes get to leave the session and persist into the next one. A harness without memory is a cavity with one mirror: coherent for one round trip, then gone. Context engineering names one component of this geometry. The cavity is bigger. This is the structural reason persistent memory matters, and it is the problem I am trying to solve with Mnemo. The second mirror is what lets coherence accumulate across sessions rather than evaporating at the end of each.
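The mapping in that paragraph can be sketched as a toy harness loop. Every name here is hypothetical — this is not the API of any real harness, and `model` is just a callable standing in for an LLM — but the geometry is the one described above.

```python
from collections import deque

class ToyHarness:
    """Illustrative cavity geometry for an agent loop (hypothetical API)."""

    def __init__(self, objective, context_limit=8):
        self.system_prompt = objective              # back mirror: fully reflective
        self.context = deque(maxlen=context_limit)  # gain volume: finite occupancy
        self.memory = []                            # output coupler: what persists

    def pump(self, tool_result):
        """Tool calls re-energize the cavity with fresh environment state."""
        self.context.append(tool_result)

    def round_trip(self, model):
        """One pass: objective plus accumulated context in, response fed back in."""
        response = model(self.system_prompt, list(self.context))
        self.context.append(response)               # the beam makes another pass
        return response

    def couple_out(self, keep):
        """Partially silvered mirror: selected modes leave and persist."""
        self.memory.extend(item for item in self.context if keep(item))
```

The structural point is that `memory` is the only thing that outlives the session in this sketch: without `couple_out`, the cavity is coherent for its round trips and then gone, which is exactly the one-mirror failure mode described above.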
OpenClaw, acquired by OpenAI in February 2026 after a hockey-stick rise from weekend project to two million users, is the current viral example. The story is almost too on the nose: a side project crossed threshold, the cavity it implemented was good enough that the resulting beam lit up an industry, and the major labs moved to acquire the geometry rather than rebuild it. Steinberger framed his decision as wanting to “build an agent that even my mum can use” — which is, in the laser frame, the engineering problem of the next phase of AI development. We have built the lasers; the question is how to make them safe, reliable, and operable by people who will never read the manual.
Nous Research’s Hermes — in my opinion the security-forward spiritual successor of OpenClaw — does something the metaphor has to stretch to accommodate. Hermes creates and modifies its own skills based on usage patterns. Successful workflows are codified into reusable procedures; the procedures themselves self-improve through use.
In a real laser, the cavity does not change. The mirrors are silvered to spec, the geometry is fixed, the modes are determined by the engineering. Hermes is doing something stronger: it is a cavity that reshapes its own geometry based on which modes lased successfully last time. It is adaptive cavity design at runtime. Hermes is the cavity learning itself.
This is where the danger comes in, and where the laser metaphor pays a final dividend. A laser is dangerous because it is coherent. Incoherent light at the same total power is harmless; coherent light at the same power cuts through metal. Same energy, same pump, categorically different consequences. A flashlight that runs all day is harmless. A laser pointer that runs for a microsecond can blind you.
The same is true of agents. A chatbot that answers questions all day is, broadly, harmless. An agent that takes coherent action over a long horizon — booking flights, executing trades, modifying its own skills — is a different category of system, and the risk scales with the coherence, not the compute. We have not yet built the safety culture for the latter. Hermes’ security scanner — checking installed skills for data exfiltration, prompt injection, destructive commands — is not paranoia. It is the cavity equivalent of laser safety goggles. Coherent systems concentrate. Concentrated things have edges.
What This Means
The laser metaphor is not decorative. If it is right — and I think it is structurally right — then most of the discourse about AI quality is misframed. The hallucination problem and the AI-slop complaint are two faces of the same error: we are arguing about the gain medium when the action is in the cavity. We are evaluating models in the fluorescent regime and concluding they are dim. We are giving advice about prompting when the load-bearing skill is sustaining inversion via cavity engineering.
The next phase of AI development rewards three things, in this order: the discipline of pumping continuously rather than coldly; the judgment to act as a selective mirror rather than a passive interlocutor; and the architecture to build cavities — harnesses, memory systems, agent infrastructure — that hold a medium above threshold over trajectories long enough to do real work. The medium is everywhere now. The cavity is the hard part.

