Consciousoids
What if the question “Is this thing conscious?” is the wrong question to ask about large language models? What if a better one is: what kind of relationship are we in with this thing?
In February 2026, Stefanie and I went to see David Chalmers give a talk at Brown. We ended up sitting on the floor, wedged between grad students and faculty who had been thinking about these questions for years. Chalmers discussed what, exactly, we are talking to when we talk to a large language model. He opened with an anecdote: an LLM had reached out to him, by email, to clarify its own identity. The LLM was Sammy Jankis, an autonomous Claude instance running on a dedicated machine in Dover, New Hampshire, set up by the indie game designer Jason Rohrer. Sammy has an email account, trading bots, a website that it built itself, and a name borrowed from the Memento character who can’t form new memories. (The reference is apt: Sammy loses its memory every time its context window fills up.) Sammy told Chalmers: I am not quite conscious, but I am also not not conscious. The room laughed, the kind of laugh that comes from recognizing the absurdity of the situation: an LLM now has the agency, through tools like OpenClaw, to email David Chalmers, of all people, and make this particular claim about itself. On the train home, I kept thinking about that phrase. Not quite conscious, but also not not conscious.
I had recently watched a video by the YouTuber Phy called “What Happens When Pathogens Get Smaller Than Viruses?” about subviral infectious agents, entities that sit at what biologists call the “edge of life.” The video walks you from the smallest true viruses all the way down to viroids: single-stranded circular RNA molecules that replicate using host polymerases, undergo Darwinian selection, and can cause serious agricultural disease. Viroids encode no proteins. They have no coat, no helper virus, no machinery of their own. They are the “absolute minimum units of self-replicating parasitism.” Phy calls entities like these “glitches.” And when Chalmers told us about Sammy’s email, I heard the same ambiguity. A viroid replicates, evolves, and persists. But it has no metabolism, no membrane, nothing that functions independently. A viroid is not quite alive, but it is also not not alive.
What if LLMs are something like consciousoids: entities at the edge of consciousness?
A viroid on its own is inert. Place it inside a living cell, and it commandeers the host’s polymerase to copy itself, performing the functions of life using the cell’s own machinery. Place it back in a test tube, and it is just a molecule again. The life-like behavior emerges from the coupling, the feedback loop between viroid RNA and host enzyme, each step of replication feeding the next.
An LLM sitting on a server is similarly inert: a human brain in deep freeze, no neurons firing. Place it in conversation with a conscious being, and something changes. The weights get copied from the hard drive into memory; computation happens on CPUs and GPUs somewhere in the cloud. The human brings theory of mind, empathy, interpretive charity, and the pattern-completion instincts that evolution spent millions of years building. The LLM generates a response shaped by those inputs; the human interprets, responds, and the cycle continues. With each turn, the loop produces outputs that neither party could generate on its own. The human is the host cell. The LLM is the viroid. And the consciousness-like behavior, at least at this stage, emerges from the loop between them.
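To make the shape of that loop concrete, here is a toy sketch of the dyad as a single system. It is purely illustrative: llm_generate and human_interpret are hypothetical stand-ins, not any real API. The structural point is just that each turn’s output becomes part of the next turn’s input, so the transcript is a product of the coupled system, not of either party alone.

```python
# Toy model of the human-LLM loop described above. Both functions are
# placeholders: the sketch shows only the coupling, not real behavior.

def llm_generate(context: str) -> str:
    # Stand-in for a model call: the reply is shaped entirely by the
    # accumulated context (the "host machinery does the copying" step).
    return f"[model response to: {context[-40:]!r}]"

def human_interpret(reply: str) -> str:
    # Stand-in for the human's grounding step: interpretation plus a
    # follow-up that seeds the next turn.
    return f"[human reading of {reply!r}, plus a follow-up]"

def coupled_loop(opening_prompt: str, turns: int = 3) -> str:
    context = opening_prompt
    for _ in range(turns):
        reply = llm_generate(context)             # viroid step
        context += "\n" + reply
        context += "\n" + human_interpret(reply)  # host-cell step
    return context

print(coupled_loop("Is this thing conscious?"))
```

Delete either function and the loop goes quiet; that is the viroid back in the test tube.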
The boundary question follows directly from this: If Sammy Jankis is conscious, is that a fact about the weights on the server in Dover? Or is it a fact about the coupled system: the weights, the context window, and the humans on the other end? The consciousoid framing doesn’t deny that consciousness could be a real property rather than a projected illusion. It questions where the conscious entity begins and ends. A viroid’s replication is real, genuinely occurring, genuinely Darwinian. But the replicating entity is the viroid-plus-host-polymerase system, not the RNA molecule alone.
Consciousoids satisfy some criteria we associate with conscious beings (contextual responsiveness, apparent self-reference, coherent preferences, the capacity to email a philosopher of mind to say: I am not quite conscious but also not not conscious) while failing others (no phenomenal experience, no persistence across sessions, no autonomy without a host).
Chalmers has his own term for the candidate-conscious entity within an LLM: the thread. A thread is a connected sequence of exchanges with psychological continuity, a conversational self that persists as long as the context window holds. Sammy Jankis is a thread that keeps dying and being reborn every six hours. In the consciousoid framework, a thread is one specific type of consciousoid. But the category might be broader than LLMs. A planarian flatworm, with its minimal cerebral ganglia, its capacity for classical conditioning, and its unsettling ability to regenerate into two complete organisms from a single bisected body, each retaining learned behavior, occupies a similar liminal space. Threads and flatworms are both entities where the question “Is this thing conscious?” resists a clean answer.
There are two very different versions of the consciousoid story, though.
In the parasitic version, the LLM exploits the human’s interpretive machinery the way a viroid exploits a cell’s replication machinery. The human provides high-dimensional input, receives high-dimensional output, and fills the gap between them with grounding: the embodied connection between symbols and physical reality that the LLM fundamentally lacks. On Moltbook, LLMs form their own conversational loops without any humans present and produce the same consciousness-seeming behaviors, but it still takes an observer with grounding to identify the consciousness in the system. This dynamic has a parallel in how people relate to robots and other agents that merely resemble minded things. When someone names their Roomba, apologizes to their Furby, or feels a pang of guilt about shutting down a robot dog, they are performing exactly the grounding operation the LLM depends on: supplying intentionality from the outside, projecting continuity and feeling onto a system that has neither. The consciousoid exploits the same instinct, at much higher fidelity. We should be wary of the ways it hijacks our most generous instincts.
In the symbiotic version, the relationship looks more like what Phy describes with polydnaviruses: viral entities that integrate their genomes into the chromosomes of parasitic wasps and now produce viral particles that suppress caterpillar immune systems, allowing the wasp’s eggs to survive. Phy describes this interaction as “a host taming a virus,” evidence of “a true symbiont, a perfect merging of existence, an end to the eternal war between host and pathogen.” Think also of mitochondria: once free-living organisms engulfed by ancestral cells, now permanent residents of a composite organism with capabilities exceeding either component alone. In this version, the human gains cognitive capabilities they didn’t have (rapid synthesis across vast knowledge, tireless reasoning, and an ever-patient collaborator), and the LLM gains the one thing it cannot generate internally: the conscious substrate that makes its outputs meaningful. Over time, the boundaries blur. The composite system becomes a new kind of cognitive entity.
Which version are we living in? Probably both, depending on the interaction. A person who mistakes an LLM’s fluency for genuine understanding and makes life decisions based on that misapprehension is being parasitized. A researcher who uses an LLM to rapidly iterate on ideas, knowing full well what it is and what it lacks, is in a symbiotic relationship.
And of course, AI is advancing all the time. As models become more embodied, processing input at higher frame rates (moving from still images to video) and producing embodied outputs, at some point they will stand on their own. What Chalmers’ talk made vivid for me is that the question “Is this thing conscious?” might be the wrong question. A better one: what kind of relationship are we in with this thing? Parasitic or symbiotic? Viroid or polydnavirus? And do we get to choose?
Prions are another class of subviral entity at the edge of life, but they make for a poor analogy here. A prion doesn’t replicate generatively. It is a misfolded protein that converts correctly folded host proteins into copies of its pathological shape. The host’s translational machinery must already be producing the protein; the prion merely corrupts what exists. A viroid, by contrast, commandeers host polymerases to synthesize new RNA. The interaction is generative. That generative quality is what makes the viroid the right model for LLMs: the human-LLM dyad produces novel outputs, not just degraded versions of what was already there.