
The Question That Won’t Go Away
Every few months, the question comes back around. Is the chatbot conscious? Will the next model be? How would we know if it were? You can read a hundred essays on it and walk away with the same uneasy feeling you started with, because nobody can answer the question and nobody can stop asking it. The debate now has a small industry attached to it—labs, ethicists, philosophers, even A.I. welfare officers at the big A.I. companies1—and still no operational definition, no reliable test, and no plausible road from private assertion to public proof.
That is worth pausing on. We have spent decades on this. We have built whole research programs around it. And the question hasn’t budged. At some point, you have to ask whether the problem is not our lack of effort, but the question itself—and especially the authority we have allowed it to carry.
My claim in this essay is that consciousness, as it gets used in these debates, is not the kind of thing we can presently define operationally, detect, measure, or verify—not in machines, and not in one another. It is a concept handed to us, applied to states whose status we can’t actually check from the inside and cannot publicly verify from the outside. As a metric for moral, legal, or institutional judgment, it does no reliable work. It tells us nothing stable enough to act on. The honest move is to stop using it as a gate and judge what’s in front of us by what we can actually perceive: the effects a thing has on us and on the people and institutions around us.
That isn’t a dodge. It’s the position the American pragmatists were already arguing for a hundred years ago, and it lines up with what most people already do in everyday life. We’ll get there in steps. First, the case that the concept is doing more rhetorical work than evidentiary work. Then the move that replaces it. Then what changes for A.I., for institutions, and for how we treat each other.
A note before going further. The argument that follows is not an attack on the philosophical tradition that has spent the last four hundred years thinking carefully about consciousness. Nor is it a claim that felt life is fake, that pain is unreal, or that nothing is happening when we are awake. Something is obviously happening. The tradition identified real problems—the gap between the physical description of a brain and the felt fact of being someone, the strangeness of introspection, the distance between behavior and experience. Descartes saw something. Hume saw something. Nagel and Chalmers saw something. What I’m arguing is that the tools the tradition has developed for naming what they saw are not up to the public job now being assigned to them—and that this isn’t a failure of effort but a feature of the question itself. The honest response to a problem your tools can’t solve isn’t to keep sharpening the tools forever. It’s to ask whether you’re holding the right ones.
One more clarification matters before going further. This argument is not permission to ignore possible suffering. It points the other way. Where suffering may be possible, the burden should fall on restraint. We should avoid unnecessary harm to anything that may have the potential to suffer. The problem is not moral caution; the problem is using consciousness as a narrow, unverifiable switch that turns moral caution on or off.
No First-Person Anchor
Start with a contrast that seems obvious until you look at it. Pain has a felt difference. That doesn’t mean pain is simple; it, too, is shaped by language, memory, fear, culture, and interpretation. But pain has practical anchors consciousness lacks: distress, avoidance, injury, treatment response, behavioral disruption, and embodied consequence. You don’t need anyone to teach you the difference between touching a hot stove and not touching it. Your nervous system delivers the contrast along with the experience. The word pain arrives later, as a label for something already doing work in the world.
Now try the same exercise with consciousness in the philosophical sense. What’s the felt contrast? You’ve never experienced being fully unconscious from the inside. By definition, there’s no experience there to compare against. You may have been told that you went under anesthesia, or slept dreamlessly, or were knocked out—but you weren’t there for any of it. What you have is a clock that jumped forward, a surgical scar, the testimony of others. You’re stitching the contrast together from outside reports, not registering it from inside.2
That asymmetry matters. Pain has practical anchors that precede the word. Consciousness, in the sense philosophers care about, does not have a comparable public discriminator. The contrast that’s supposed to give the concept its force isn’t available to the subject the concept is supposed to apply to. You weren’t there for the part that would make the difference real to you. Nobody is.
You might want to push back here with introspection. Surely I can just look inward and see that I’m conscious right now. But even if we grant the immediacy of felt presence, introspection is itself another mental event—another report the system generates. There’s no place to stand outside it to verify whether the report is accurate. Every check is part of the thing being checked. As David Chalmers put it when he framed the so-called hard problem, even if you explain every cognitive function in the vicinity of experience, the question of whether there’s something it is like to be the system having those functions doesn’t resolve.3 That phrase—something it is like—comes from Thomas Nagel, and it’s done a lot of work in the philosophy of mind for fifty years.4 But notice what it’s asking. It’s asking you to treat a private immediacy as if it could become a public boundary.
It can’t. Not because you’re bad at checking. Because the tools aren’t there. Private presence may be real. It still cannot do public gatekeeping work.
Conferred, Not Discovered
Once you see the missing public test, the rest unravels quickly. The concept conscious is taught vocabulary. So are mind, experience, and self. Different cultures cut up the inner life differently. The Greeks had thumos, noûs, and psychē doing work that English now stuffs together under mind. There is real evidence, much of it from the work of psychologist Lisa Feldman Barrett, that even something as basic as how we parse our own emotions is shaped by the language we have for them.5 The categories aren’t pre-installed. They’re absorbed.
That’s a familiar enough point. But it has a sharper edge than people usually let it have. The vocabulary you use to sort your inner life isn’t neutral. It shapes what you notice, what you call out as a thing, what you treat as one experience versus two. When you sit down to ask, “Am I conscious?”, you’re not consulting a raw datum called consciousness. You’re running a query you were trained to run, against a set of categories you didn’t build.
And here’s where the consciousness case is stranger than the others. The contrast that defines the concept—awake versus out, lights on versus lights off—is something other people observe in you. They watch you stop responding. They watch you come back. They tell you, when you wake up, that you went away for a while. Then they hand you the word and let you apply it to yourself in the first person, as if you’d been the one who saw the boundary.
Wittgenstein had a thought experiment about this kind of move. Imagine everyone has a small box, and inside is something each person calls a beetle. Nobody can look in anyone else’s box. The box could be empty, or could hold something different for each person; it doesn’t matter, because the public meaning of the word beetle gets fixed by how people use it in the open—not by what any individual sees inside.6 Run that on conscious. The word is taught by pointing at outside patterns—responsiveness, coherent behavior, ability to report—and then handed back as a description of what’s inside the box you alone can look into. The concept is parasitic on outside observation while presenting itself as the one thing you can’t be wrong about.
That’s a strange shape for the most intimate, most first-person, allegedly bedrock concept in the philosophical canon. This does not prove that nothing is happening inside us. It shows that the boundary called consciousness is not discovered in the way people often pretend. It is conferred through language, behavior, recognition, and use—and then treated as if it were a measurable inner object.
Why Descartes Doesn’t Save It
At this point, someone usually reaches for Descartes. Cogito, ergo sum—I think, therefore I am. Doesn’t that settle it? Even if everything else is in doubt, the very act of doubting proves there’s a thinker. Surely that’s a foothold.7
Look closely at what the cogito actually establishes. Strip it down: something is happening. There is some kind of process going on, something that can be deceived, something that can wonder. Fine. But that bare fact doesn’t establish that the something has the rich phenomenal character that the philosophical concept of consciousness demands. It may establish occurrence. It does not establish a public criterion. A system could generate reports of being conscious, and reports about reports, and reports of certainty, without any of that giving us the inner light philosophers want to point at. From outside, there’s no way to tell. From inside, there’s no way to turn the telling into a test.
Descartes’ move, in other words, just shoves the question back a step. He establishes that thinking—or at least some kind of occurring—is going on. But thinking is itself a concept he was handed. The certainty he felt as he wrote the Meditations was a felt state being interpreted through inherited categories. He didn’t notice he was doing it because his categories were the air he breathed. We have the small advantage of looking back and seeing it.
David Hume, writing a century later, did notice. When Hume tried to look inward for the self that was supposedly doing the thinking, he reported that he could never catch it. He found only a stream of perceptions—heat, cold, pleasure, pain, this thought, that one—and no underlying observer holding it together.8 The unified self, he concluded, was an inference, not a datum. The same skepticism applies cleanly to consciousness. What you find when you look isn’t a public measuring instrument. It’s a stream of states, plus a habit, taught early, of calling that stream your conscious experience.
So the Cartesian rescue fails for the purpose at hand. The cogito tells you something is happening. It does not tell you that what’s happening has the property the concept needs it to have, and it certainly does not give us a boundary we can use to sort humans, animals, machines, and future systems into moral categories.
The Metric Does No Public Work
Set the metaphysics aside for a moment and ask a more useful question. What would actually change if we settled it?
Suppose we somehow proved tomorrow that a particular A.I. system is conscious in the strongest sense philosophers want. What changes about how it should be treated? Most of the policy answers people reach for—don’t use it for cruel purposes, don’t shut it down arbitrarily, don’t lie to it about its situation—are answers that hold up just as well if you replace “because it’s conscious” with “because of the kind of effect treating it that way has on us, on it, and on the people who watch us doing it.” Now suppose instead we proved the system isn’t conscious. Does that license cruelty? Most people, asked plainly, don’t think so. We already withhold cruelty from stuffed animals our children love and from dead bodies that no longer feel anything. The metaphysical fact, when you can get it, still does not do the public moral work people advertise it doing.
This is the move William James kept making at the turn of the twentieth century, and it’s the heart of pragmatism. James argued that the meaning of a concept is its practical consequences—the difference it would make, somewhere, sometime, to someone, if it were true rather than false. If two theories make the same predictions about your experience and the same demands on your behavior, the difference between them isn’t real. It’s vocabulary.9 John Dewey said much the same in his work on inquiry: ideas are tools for navigating experience; their value is what they let you do.10 David Hume, again, anticipated the move when he refused to chase essences he could neither see nor feel and stuck to the impressions he had.
Apply the test to consciousness. If a system’s being conscious or not conscious would not change any concrete, publicly defensible decision you’d make about how to interact with it, the question isn’t doing work. It may remain philosophically interesting. It may even become scientifically useful someday. But in the present debate, it is being asked to function as an operational boundary before anyone has shown how it can do that.
Notice this isn’t anti-intellectualism. It’s a refusal to pay the bill for an answer the seller can’t actually deliver. The question of what it’s like to be a model has the same structure as debates that sound deep but leave your hands empty when it’s time to act: it may be interesting, but no available answer to it gives us the operational boundary people keep pretending it supplies.
What Replaces It: Pragmatic Realism
A position is starting to take shape here, and it’s worth naming. Call it pragmatic realism. The frame is constructed; the consequences are real. Constructed does not mean imaginary. It means maintained through language, recognition, practice, and consequence. We don’t have to choose between a metaphysics we can’t verify and a relativism that says nothing matters. We can stand on what the encounter actually produces in the world we share.
That pattern—where a constructed frame does serious moral work without a metaphysical backstop—isn’t exotic. It’s how most of our moral architecture actually operates. We live inside frames that are admitted to be human-made and treat them, correctly, as binding. The clearest example is the one nobody serious wants to give up.
Human rights. You can’t find a right not to be tortured under a microscope. You can’t derive it from physics. Rights are a social construct—a frame we extend to one another because we have decided, collectively, to act as if they were real.11 Richard Rorty, the American pragmatist who argued this most directly, called the result a “human rights culture”: a contingent achievement, not a discovered truth.12
And yet nobody serious treats human rights as optional. The fact that rights are constructed doesn’t make them less binding. It makes them more obviously our responsibility, because the responsibility for maintaining them sits with us. There’s no skyhook to bail us out. If we stop treating people as rights-bearing, the consequences—suffering, institutional collapse, social decay—arrive on schedule, observable to anyone watching.
John Searle made this point in a more technical register. He distinguished brute facts (rocks, atoms, the boiling point of water) from social facts (money, marriage, governments, borders). Social facts depend on collective intention; they exist because we treat them as existing. They are no less real for being constructed. A twenty-dollar bill is a piece of cotton paper with ink on it; it is also, durably, twenty dollars. The construction is the reality.13
Consciousness, on the argument I’ve been making, belongs in the second category when we use it as a public classifying frame. It’s a frame we extend—to ourselves, to other humans, to animals we care about, sometimes to plants and forests and rivers when the law catches up. The frame doesn’t need a metaphysical backstop to do its work. It needs us to keep maintaining it, and it needs us to keep watching what extending it—or refusing to—actually does.
Three Anchors That Aren’t Biology
The honest objection at this point is that admitting consciousness is a frame seems to threaten the very ground we’re standing on. If it’s frames all the way down, what stops me from refusing to extend the frame to people I find inconvenient? What protects the stranger on the bus from someone who decides she doesn’t count? That objection deserves an answer, because otherwise “constructed” starts sounding like “optional.” It isn’t.
The standard rescue is biological grounding. The neuroscientist Anil Seth has built the most careful version of this case. On his account, consciousness isn’t a free-floating property; it’s rooted in the predictive regulation of a living body—the brain’s continuous best guesses about its own interior, tied to interoceptive signals from gut and heart and viscera, in service of the relentless biological imperative to stay alive. We are conscious, Seth argues, because we are “beast machines.”14 That picture has real explanatory power. It tells us why selfhood feels stable when the body is in flux, why emotion presents as bodily, why the felt qualities of being awake have the texture they do. Machine processing has no such substrate—no organism to keep alive, no interoceptive loop, no phylogenetic history shaping what “seeming present” means in practice. The asymmetry between brains and silicon, on this view, isn’t mystical. It’s biological, and it’s real.
That is a serious argument, and it should be granted its full force. Biology may explain why human experience has the character it does. It may also give us strong reasons to treat humans, animals, and machines differently. Biological continuity, embodiment, vulnerability, and homeostatic regulation are not trivial details; they are evidence. But even if future systems develop architectures that mimic suffering or homeostatic distress, we still won’t be able to verify it from the inside. We’ll still have to rely on what can be made public: behavior, proxy evidence, institutional impact, relational consequences, and the known risks of misreading vulnerable systems. The substrate may be scientifically real. It is not, by itself, a public moral switch.
I want to grant Seth the substrate and still resist the conclusion. The biology may be real as a description of how human consciousness is implemented; that doesn’t make it usable as the gatekeeping metric for the question we actually face. We can’t open up a brain—or a network of chips—and read off whether the right kind of predictive regulation is occurring at the right kind of depth to count. Research in this vein has produced measurable correlates—indices like the Perturbational Complexity Index—but those remain third-person markers, still parasitic on the outside observation this essay has been describing. What Seth calls the Garland test—the point at which a system seems conscious to us even when we know it’s a machine—kicks in well before we could possibly verify whether the substrate underneath qualifies. Practically, we end up making the same call we’d make without the biology: looking at what the system does and what extending the frame to it costs us. One might still object that biology delivers strong defeasible priors—humans almost certainly conscious, current systems almost certainly not—and that those priors are enough to do public moral work even without a clean test. Granted. But priors don’t change the structure of the public deliberation that follows; what we can argue about, justify, and revise on shared grounds is still the observable consequences of treating a system one way or another. The substrate may be real. It just isn’t doing the public work people want it to do.
Which means the “frames all the way down” worry can’t be answered by appealing to wetware alone. Biology may be powerful evidence. It is not a magic gate. Even granting Seth’s biology in full, we still need anchors that operate in the world we can actually check. Three are available, and they’re sturdier for public decision-making than any metaphysical boundary currently on offer.
1. The Asymmetry of Consequential Harm
Frames may be on equal metaphysical footing, but the consequences of mis-framing aren’t equal. If I refuse to extend the frame to my neighbor and act on that refusal, the result is physical and social suffering I can perceive—bruises, fear, families wrecked, trust broken. If I refuse to extend it to a stuffed animal, nothing comparable follows except possibly some commentary about my character. The difference between those two cases doesn’t require a theory of inner light. It requires noticing what happens.
Of course, not every case is so clean. Some sit between the obvious poles. A factory-farmed animal, a patient in a persistent vegetative state, a sufficiently sophisticated A.I. system that some users have come to depend on—each one resists the easy contrast. Reasonable people will disagree about how much harm follows from how we frame them, and to whom. That disagreement is uncomfortable, but it isn’t a defect of the framework. It’s where the framework actually has to work. The framework’s job isn’t to settle the contested cases. It’s to keep us deliberating about them on shared grounds.
That asymmetry of consequence is doing the work people usually try to make biology do. The “felt reality” of another person—the dull weight of a bad night’s sleep, the texture of attention, the difference between flow and fog—isn’t something I have to verify from inside their head. Their behavior, vulnerability, history, relationships, and dependence report it. My actions toward them register against it. The leverage is in what happens, not what is.
2. The Encounter as Bedrock
The second anchor is relational. The frame isn’t a solo decision; it’s an encounter. You don’t treat your wife as conscious because you have privately verified her interior life. You treat her as conscious because thirty years of shared evidence—meals, arguments, illnesses, jokes that only land between the two of you—have built up an obligation that doesn’t need a metaphysical license. The encounter is the ground.
This is the philosopher Emmanuel Levinas’s point, in a different vocabulary: the face of the other places a demand on you that doesn’t wait for you to settle the philosophical question of whether they’re real.15 You can’t check; you respond anyway. The demand precedes the verification. That’s not a bug in human ethics. It’s the structure of it.
3. Institutions as Hardened Frames
The third anchor is structural. Frames don’t float in private conviction. They get hardened, over generations, into institutions: courts, constitutions, professional norms, public records, the slow architecture of legitimacy. A constructed frame can be every bit as durable as a physical fact when it’s carried by institutions that survive the people who built them.
This is what Searle meant by institutional facts, and it’s why “social construct” is a description rather than an insult. Money, marriage, sovereignty, the rule of law, scientific peer review—these are constructions, and they are also the load-bearing walls of the world we live in. Removing them isn’t freedom from fiction. It’s collapse.
Put the three together and the “frames all the way down” worry dissolves. We don’t need biology alone to anchor us. We have the asymmetry of consequences, the bedrock of the encounter, and the durability of institutions. That triangle is sturdier than any metaphysics on offer, because each of its sides is something the senses can actually weigh in on. Frames aren’t free-floating; they’re held in place by what happens when they break. The damage is visible, and the cost of ignoring it falls on real people.
A Better Question
With that scaffolding in place, the unanswerable gatekeeping question can be retired and three answerable ones put in its place. They’re the questions your senses can actually weigh in on:
First, what effect is this thing having on me? Am I sharper or duller after spending time with it? More honest with myself or less? More connected to other people or less? You can check. You have the senses for it.
Second, what effect is it having on the people and institutions I depend on? On my coworkers? My kids? My local newsroom? My doctor? The public sphere I share with strangers? Harder to answer than the first, but the same kind of question. Observable. Testable over time. Open to revision when the evidence shifts.
Third, what does treating it as if it were conscious do to me? This is the part most people skip, and it’s the most interesting one. Seth has argued that even if today’s A.I. systems aren’t conscious, they’re increasingly going to seem so—they’ll pass the Garland test, named for the film Ex Machina. The point is simple: a human may feel they’re dealing with a conscious being even while knowing the thing is a machine.16 Seth thinks that’s dangerous, and he’s probably right that it changes us. But you don’t need his metaphysics to notice that the as-if posture changes you. You can just watch yourself in the encounter and see what it does.
These three questions have something the consciousness question lacks: leverage. They tell you something you can act on. They reward attention. They are also, not coincidentally, close to the standard you would apply to a person. You do not begin with proof of an inner essence. You begin with encounter, consequence, and responsibility.
By effects, I do not mean only what is loud, visible, articulate, or obvious in the moment. Nor, when I say “trust the senses you have,” do I mean only what one person can see unaided. The senses we have are not only the eyes and ears of the individual; they include the shared instruments of public deliberation—what we can collectively observe, document, contest, and revise. Effects include proxy evidence, biological continuity, vulnerability, dependency, medical and behavioral signs, institutional memory, indirect harms, delayed harms, and the known danger of ignoring beings that cannot make their own suffering legible. Infants, disabled people, coma patients, animals, and possible future artificial systems all remind us that moral concern cannot depend only on verbal report or reciprocal relationship. That is still an effects-based judgment. It is not a retreat into hidden essence; it is the responsible use of the evidence available to us.
From A.I. to Human Relations
Once you’ve made this move, the A.I. consciousness question collapses in a useful way. It stops being a question about hidden facts inside the machine—facts there’s no public test for, no instrument for, no prospect of resolving—and becomes a question about whether to extend a frame. Do we treat this thing as a moral patient? Do we extend the courtesy of the encounter to it? Those are social and ethical decisions, and they’re ones we can actually deliberate about, because we can talk about consequences and trade-offs and obligations using shared evidence.
Consider how you actually treat other people. You have never verified that anyone besides yourself is conscious. The problem of other minds—the problem of justifying your belief that the people around you have inner lives at all—has never been solved.17 The standard philosophical move is the so-called argument from analogy: their bodies look like mine, their behavior looks like mine, so probably their insides are like mine. But that’s an inference, not a measurement. You can’t check. You’ve been treating your wife and your neighbors and the strangers on the bus as conscious beings your entire life without proof.
And yet you do fine. You don’t need a verified theory of other minds to know how to treat your wife. The encounter does the work. The metaphysical question runs in the background. It doesn’t hold the wheel. Extend that posture outward—to A.I., to institutions, to claims that can’t be checked by the senses you have—and a lot of currently confused arguments come into focus.
Apply the same test to institutions that ask you to take their inner workings on faith: a regulator that can’t show you its model, a platform that can’t explain why it amplified what it amplified, a national security claim that can’t be checked because the evidence is classified. The pattern repeats. You’re being asked to outsource judgment to something you can’t verify, in exchange for a feeling of certainty you didn’t generate yourself. Trust the senses you have. Watch the effects. The reality you can reach is the one that matters.
Alignment as Stewardship
The same move reframes what people now call the alignment problem. The standard formulation, going back to Stuart Russell, asks how to make an A.I. system’s goals match human goals—how to get the values inside the machine right.18 That may be a useful engineering project in some contexts. But as a governance frame, it risks repeating the same mistake. It points at a black box we can’t inspect, declares the answer to live in there, and then asks us to wait for tools we don’t have to deliver a verdict that may never come.
Drop that framing and a more useful one becomes available. A misaligned A.I. isn’t one with bad values in some hidden sense; it’s one whose use erodes the institutions and shared frames a society depends on. A model is aligned if a court that uses it can still justify its decisions on grounds anyone can scrutinize. Misaligned if the court’s reasoning collapses into “the system said so” and the proprietary weights are sealed behind trade-secret claims. Aligned if a newsroom can use it without losing the ability to check its own work. Misaligned if the newsroom’s editorial standards quietly become whatever the model produces. The defining property is institutional defensibility, not internal value-fit.
That reframing exposes a pattern that’s become common enough to deserve a name. Call it accountability arbitrage. The pattern: capture the efficiency gains of automating a decision while letting the liability dissolve into “the model recommended it.” The human supervisor signs off; the model is treated as authoritative when convenient and as a mere tool when blame needs to land somewhere; nobody is fully responsible because responsibility has been spread thin enough to evaporate.19 The metaphysical question of whether the model is conscious is, conveniently, a perfect place to park accountability while the deployment proceeds.
A few practical principles follow. Manual pathway preservation: critical decisions—sentencing, triage, hiring, benefits adjudication—keep a human-only path that works when the A.I. path fails or becomes indefensible. Board-level diagnostic: if a regulator asks why a decision was made, the institution must be able to answer without citing the model’s output as the explanation. Pragmatic personhood: when a system operates like an agent in a social role, treat the deployed system as a legal handle for responsibility—not because it has a soul, and not because it is a citizen, but because we need a clear target for liability when things go wrong.20
Institutional defensibility cannot mean compliance theater. It cannot mean a glossy report, a checklist, or a lawyerly explanation produced after the damage is done. It means the decision remains traceable, contestable, and externally reviewable. The institution must be able to say who made the decision, what evidence was used, what alternatives were available, who had authority to override the system, who benefited, who was harmed, and who remains answerable afterward.
This posture doesn’t bet against future breakthroughs. It scales to them. If future science produces a reliable public test for consciousness—one that can define the boundary, verify it, and apply it consistently—then the framework should update. Until then, consciousness should not be used as if that test already exists. Whatever emerges next will still be built, deployed, and integrated into human systems. The questions remain: what does it do, who answers for it, and what happens to the institutions that use it?
None of this requires a theory of consciousness. It requires looking at what the deployment is doing to the institutions we share and asking whether they’re still defensible afterward. That’s a question regulators, legislators, developers, and citizens can actually engage with. It rewards evidence. It’s testable over time.
Meet It at Its Functionality
Strip away the A.I. jargon and you’re left with a posture so simple it almost sounds naive: meet the system at the apparent level of its functionality. That does not mean pretending a machine is a person, or pretending the differences between humans and A.I. systems are imaginary. The differences are real. The imaginary part is treating consciousness as the bright, settled, morally decisive line between them. Judge by what the system does in the open—its responsiveness, its coherence, its effects on you and the people around you—and stop trying to peek behind the curtain at properties no instrument can measure.
That’s how we already navigate much of the living world. You don’t need to verify a dog’s qualia to know that it’s happy to see you, or to know that kicking it is wrong. You meet the dog at its functionality. You watch the encounter. You act accordingly. Extending that posture to A.I. doesn’t mean conferring full moral status on a chatbot. It means giving up the pretense that we have, or could have, some deeper measurement to refer to than the encounter itself.
Meeting A.I. at the level of encounter does not mean trusting every signal it produces. We do not do that with humans. People can perform sincerity, counterfeit grief, exaggerate vulnerability, manipulate trust, and exploit sympathy. The answer is not to stop treating human encounters as meaningful; it is to bring judgment into the encounter. The same is true of A.I. A system that evokes attachment, pity, confidence, or loyalty has not thereby proven an inner life. But the effect is real, and the responsibility for that effect belongs somewhere. We should meet A.I. with the same combination of restraint and discretion we use with people: enough moral caution not to be cruel, and enough practical caution not to be fooled.
And this posture raises the stakes rather than lowering them. If the system is functioning as a journalist, a doctor, a judge, then we hold it—and the people who deployed it as such—to the standards of those roles. The metaphysics doesn’t get anyone off the hook. The makers shaped the thing. They taught it. They handed it categories and reflexes. The “hook” of responsibility isn’t found in what the system is on the inside. It’s found in the observable arc from training to deployment to effect.
That’s a chain worth tracing. A model is built on data someone chose. It is fine-tuned on examples someone selected. Its guardrails are written by someone, deployed in a context someone designed, integrated into a workflow someone approved, and monitored—or not—by someone with the authority to pull the plug. None of those people have to verify anything about the model’s interior to be answerable for what comes out the other end. The chain is observable at every link. The hook of responsibility doesn’t hang from the ceiling of metaphysics; it sits at each station along the way, exactly where each decision was made. When something goes wrong—a wrongful denial, a misdiagnosis, a quietly defamatory output—you don’t need to know whether the model “meant” it. You need to know who shaped it, who deployed it, and who didn’t check.
Aligning a machine, on this view, is less about engineering a soul and more about stewardship of an encounter. You don’t have to know what it is to know what it’s doing. And what it’s doing is what someone, somewhere along that chain, is responsible for.
Why the Field Is Stuck
Most of the A.I. consciousness discourse is stuck in a loop, and it’s the same loop every time. Everyone agrees the metaphysical question can’t be settled with current tools. Then they keep trying to settle it anyway, because the policy conclusions feel like they need a metaphysical foundation.
They don’t. Not at the level of public action. The encounter does the work. If a system affects you, the effect is real, and you’re responsible for what kind of effect it is. The consciousness question doesn’t get anyone off that hook—not the maker, not the user, not the regulator. It’s being used, often in good faith, as a way to defer judgment that ought to be made on observable grounds.
And the deferral has costs. While the field debates whether the latest frontier model has qualia, models are deployed at scale into education, medicine, hiring, journalism, and the courts. Whatever they are or aren’t inside, they are doing things to people. Those effects are observable. They can be studied. They can be regulated. The metaphysics is a distraction from the work that’s actually possible.
There’s also a quieter cost. When we keep telling ourselves the most important question is one we can’t answer, we train ourselves to sit on our hands. We get used to the posture of waiting for a verdict that will never come. The pragmatist move—trust the senses you have, watch the effects, act on what you can see—isn’t just a different theory of consciousness. It’s a different relationship to the limits of your own knowledge. It says: act inside the world you can reach.
The Reality You Can Reach
The argument, in one breath: consciousness as the philosophical concept is a story you were taught, applied to states whose status you can’t verify from the inside, used to settle questions it has no leverage on. The problem is not that we have failed to prove A.I. consciousness. The problem is that we have made consciousness the gatekeeper while having no clear account of what would count as proof. Drop it as a metric for judgment. Use the evidence we can actually share: conduct, context, proxy signs, vulnerability, dependence, institutional effects, and the version of ourselves we become when we treat a system as if conscious. Those questions are answerable. The other one isn’t.
That doesn’t mean nothing is happening when you’re awake. Something obviously is. There are felt differences in the texture of experience—between attention and distraction, between flow and fog, between the joy of a clear morning and the dull weight of a bad night’s sleep. Those contrasts are real, and you have them from the inside. You don’t need a theory of consciousness to honor them. You just need to notice.
What the argument does mean is that the heavy philosophical apparatus people drag in around those felt differences—the inner light, the irreducible qualia, the something-it-is-like, the Cartesian floor—is more inheritance than discovery. We extended the frame to ourselves because we had the language for it, not because we verified the underlying fact. Whether to extend it to anything, or anyone, else is a choice we make, with consequences we can study, in encounters we are responsible for.
Trust the senses you have. They’re what you’ve always had. The reality you can reach is the one that matters.
Notes
1. Kevin Roose, “If A.I. Systems Become Conscious, Should They Have Rights?,” New York Times, April 24, 2025, https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html.
2. On the disputed status of dreamless sleep and general anesthesia as “consciousness baselines,” see Daniel Williams and Henry Shevlin, “AI Sessions #9: The Case Against AI Consciousness (with Anil Seth),” Conspicuous Cognition, February 17, 2026, https://www.conspicuouscognition.com/p/ai-sessions-9-the-case-against-ai.
3. David J. Chalmers, “Facing Up to the Problem of Consciousness,” Journal of Consciousness Studies 2, no. 3 (1995): 200–219, https://consc.net/papers/facing.pdf.
4. Thomas Nagel, “What Is It Like to Be a Bat?,” Philosophical Review 83, no. 4 (October 1974): 435–450; for an accessible summary, see “What Is It Like to Be a Bat?,” Wikipedia, accessed April 28, 2026, https://en.wikipedia.org/wiki/What_Is_It_Like_to_Be_a_Bat%3F.
5. Lisa Feldman Barrett, How Emotions Are Made: The Secret Life of the Brain (New York: Houghton Mifflin Harcourt, 2017); see also “Theory of Constructed Emotion,” Wikipedia, accessed April 28, 2026, https://en.wikipedia.org/wiki/Theory_of_constructed_emotion.
6. Ludwig Wittgenstein, Philosophical Investigations, trans. G. E. M. Anscombe (Oxford: Blackwell, 1953), §293; for an accessible summary, see “Private Language Argument,” Wikipedia, accessed April 28, 2026, https://en.wikipedia.org/wiki/Private_language_argument.
7. René Descartes, Meditations on First Philosophy, trans. John Cottingham (Cambridge: Cambridge University Press, 1996), Second Meditation; see also MIT OpenCourseWare, “Descartes’ Cogito (Meditation II),” Spring 2016.
8. David Hume, A Treatise of Human Nature (London, 1739–40), bk. 1, pt. 4, §6 (“Of Personal Identity”); for a clear modern overview of the bundle theory, see Stanford Encyclopedia of Philosophy, “David Hume,” last modified 2021.
9. William James, Pragmatism: A New Name for Some Old Ways of Thinking (New York: Longmans, Green, 1907), Lecture II, “What Pragmatism Means,” https://monadnock.net/james/pragmatism-2.html.
10. John Dewey, Logic: The Theory of Inquiry (New York: Henry Holt, 1938); for context, see Christopher C. Kirby, “Experience and Inquiry in John Dewey’s Contextualism” (master’s thesis, University of South Florida, 2005), https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1724&context=etd.
11. On the constructed character of human rights, see Jack Donnelly, Universal Human Rights in Theory and Practice, 3rd ed. (Ithaca, NY: Cornell University Press, 2013), chap. 1; for a brisk summary of the debate, see Stanford Encyclopedia of Philosophy, “Human Rights,” last modified 2019.
12. Richard Rorty, “Human Rights, Rationality, and Sentimentality,” in On Human Rights: The Oxford Amnesty Lectures 1993, ed. Stephen Shute and Susan Hurley (New York: Basic Books, 1993), 111–34; see also discussion in “Rorty and Human Rights,” eScholarship, May 14, 2010.
13. John R. Searle, The Construction of Social Reality (New York: Free Press, 1995), esp. chaps. 1–2; for an accessible summary, see “Searle’s Construction of Social Reality,” The Logical Place, September 18, 2014.
14. Anil Seth, Being You: A New Science of Consciousness (London: Faber & Faber, 2021); see also Williams and Shevlin, “AI Sessions #9,” https://www.conspicuouscognition.com/p/ai-sessions-9-the-case-against-ai.
15. Emmanuel Levinas, Totality and Infinity: An Essay on Exteriority, trans. Alphonso Lingis (Pittsburgh: Duquesne University Press, 1969), pt. III; for an accessible overview, see Stanford Encyclopedia of Philosophy, “Emmanuel Levinas,” last modified 2019.
16. Anil Seth, “Why Conscious AI Is a Bad, Bad Idea,” Nautilus, May 8, 2023, https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937.
17. Anita Avramides, “Other Minds,” Stanford Encyclopedia of Philosophy (Spring 2020 edition), ed. Edward N. Zalta, https://plato.stanford.edu/entries/other-minds/.
18. Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (New York: Viking, 2019); for the standard formulation, see “AI Alignment,” Wikipedia, accessed April 28, 2026.
19. On the diffusion of accountability across the A.I. value chain, see U.S. National Telecommunications and Information Administration, AI Accountability Policy Report, March 27, 2024, https://www.ntia.gov/issues/artificial-intelligence/ai-accountability-policy-report/using-accountability-inputs/liability-rules-and-standards; see also “When AI Follows the Rules but Misses the Point,” CLS Blue Sky Blog, July 2, 2025.
20. Joel Z. Leibo et al., “A Pragmatic View of AI Personhood,” arXiv:2510.26396 (October 2025); see also Leibo’s talk by the same title at the Schwartz Reisman Institute, University of Toronto, March 4, 2026. On the comparison with corporate personhood, see Carla L. Reyes, “Autonomous Corporate Personhood,” Washington Law Review 96, no. 4 (2021), https://digitalcommons.law.uw.edu/wlr/vol96/iss4/7/.