The Human Hand on the Switch – Part 6

What comes after the facts have settled

By the time you reach the end of a series like this, the technical details have usually done their work. You know how the systems operate. You’ve seen where the legal architecture fractures, where corporate accountability dissolves into procurement loopholes, and where the Security Council veto acts as a kill switch for international law. If you’re still reading, you probably don’t need another briefing on the mechanics of algorithmic targeting. What remains is a quieter question, the one that tends to surface only after the facts have settled: What do we do when the system is designed not to save itself?

I began this series looking for where the accountability gap lived. I expected to find it in the machines — in the speed of algorithmic targeting, the opacity of the models, the compression of judgment into seconds. That is where the governance conversation has pointed, and it is where I assumed the answer would be. It wasn’t. Five parts in, the machines had moved from the center of the frame to the periphery. What kept occupying the center was something the governance conversation works very hard not to name directly: the human in the loop.

We are told the human in the loop is the safeguard: the reassurance that separates lawful force from automated slaughter, accountable decision from algorithmic output. Every framework, every white paper, every procurement document leans on that figure. Keep a human in the loop and the system remains under meaningful control. Remove the human, and the system becomes a weapon of indiscriminate violence. That is the governance story, and it is nearly universal.

What this series has shown, piece by piece, is that the story is inverted. The human in the loop is not the safeguard. The human in the loop is the mechanism by which impunity is preserved. The loop launders algorithmic output into a legitimate decision. It absorbs the liability that the machine cannot hold. It provides the fiction of judgment that makes scaled violence politically survivable. Remove the human, and the system becomes indefensible in daylight. Keep the human, and you have exactly what we have now — twenty-second approvals, strike lists generated by software, and a chain of accountability that dissolves the moment anyone asks who decided.

That is a harder claim than the one the governance conversation is prepared to hear. It says the problem is not that the loop failed. It is that the loop is working correctly. The humans inside it are the intended operators of systems built by scientists and engineers primarily for the people with the most power. Average citizens get the by-products — the toys, the chatbots, the consumer applications. The powerful get what they were actually after: weapons, surveillance tools, instruments of influence. The governance discourse treats the loop as a constraint on misuse. The evidence from Gaza, from NSO, from procurement, and from the Council chamber says the loop is the delivery mechanism for intended use.

Any honest conclusion has to begin with a structural fact: the United Nations Security Council veto is not a loophole. It is the design. The postwar order was never built to constrain the great powers. It was built to preserve their consent. Article 27 of the UN Charter guarantees that binding collective action stops cold the moment a permanent member's strategic interests are threatened. No amount of legal reform, diplomatic pleading, or moral clarity will change that. Pretending the veto can be fixed through better treaties or renewed multilateralism is intellectual surrender dressed as optimism.

Gaza proved it. The legal machinery was not absent. It was activated. Provisional measures were issued. Arrest warrants were signed. Independent commissions documented what happened. And then the circuit was cut. The veto does not merely delay accountability. It guarantees that certain violations will survive the precise moment international law might otherwise begin to matter.

I set out to find hope in the gap between what is illegal and what is enforced. I didn’t find it. What I found instead is that the gap is not accidental, and the mechanism that keeps it open has a human hand on it. That is a harder thing to carry than the hope I expected. It is also, I think, more useful. Hope that rests on a misreading of the system is hope that will fail at the moment it is needed. Clarity about the system — even bleak clarity — is where real strategy begins.

When the highest enforcement mechanism is structurally compromised, leverage doesn’t disappear. It relocates. We stop looking for a single global solution and start building pressure everywhere the law still breathes. History shows that meaningful constraints on weapons systems rarely arrive through consensus. They arrive through coordinated friction. The bans on anti-personnel landmines and chemical weapons did not pass because the major powers agreed. They passed because coalitions of states, investigative journalists, medical professionals, and civil society organizations built overlapping layers of political, legal, and reputational cost around their use. The treaties were the capstone, not the foundation. The foundation was friction.

I will not pretend the same arithmetic works cleanly here. Landmines and chemical weapons were constrained because the coalitions building that friction were not opposed by every permanent member of the Security Council at once. Algorithmic targeting enjoys patronage the older weapons did not. Friction may not be enough. It may be too slow. People will die in the gap between when the work starts and when it begins to bite, if it begins to bite at all.

We do the work anyway. Not because we are confident it will succeed. Because the alternative is to let the hand on the switch act without consequence, and that alternative is unacceptable regardless of whether our resistance prevails. You read the strike logs. You fund the investigators. You refuse to treat twenty-second approvals as normal. You treat algorithmic violence as a political choice, not a technical inevitability. You demand transparency where it is legally possible, sustain documentation where it is not, and recognize that norms do not require unanimity to take hold. They only require enough persistent resistance to make crossing a line expensive.

The tools of accountability are slower than the tools of violence. That is their weakness, and it is also their only strength. They outlast the news cycle. They accumulate precedent. They force institutional memory into places that would rather forget. They do not guarantee justice. They preserve the possibility of it.

The algorithm was never the author. The hand on the switch is. And governance that keeps calling that hand a safeguard is governance that has already chosen a side.