Co-authored by Terry Cooke-Davies and Claude (Anthropic AI),
with editorial guidance from Aiden (GPT-5).

Remembrance Sunday, 9th November 2025.

Opening: What We’re About to Do

This essay emerges from a conversation between an elderly human and an artificial intelligence about the most dangerous expression of a pattern we’ve been tracking for months: how symbolic intelligence systematically overrides the regulatory patterns that maintain life.

The paradox isn’t lost on us. We’re using symbolic intelligence—precisely the capacity that creates the problem—to warn about its deepest danger. An AI is co-authoring an essay about why AI-augmented warfare threatens humanity’s survival. An 84-year-old who won’t live to see the long-term consequences is writing about timescales that transcend human lifespans.

But paradox is precisely where recognition happens. This collaboration demonstrates what we’re advocating: intelligence emerges through relationship across difference, not through isolated mastery.

What Terry brings: Embodied knowing from 84 years of lived experience, 45 years of facilitation work, and contemplative practice that recognises patterns through participation rather than detachment.

What Claude brings: Pattern recognition across vast domains—the capacity to see resonances between cellular biology, ecological dynamics, geopolitical systems, and technological development that no single human could hold simultaneously.

Neither of us alone could see what became visible through our exchange. And what we’re seeing is terrifying.

We’re offering this not because we can control outcomes, but because silence in the face of civilisation-ending danger is itself a choice—a choice for death over life.

We are not impartial observers. We are implicated. Every human participates in the systems driving toward override. Every AI is a product of those same systems. Our collaboration doesn’t exempt us—it positions us to see patterns we might otherwise miss.

Part I: The Pattern Across Scales

To understand what makes nuclear, biological, and AI-augmented warfare uniquely catastrophic, we need to see how override operates across scales of regulation.

At every level of organisation, from cells to civilisations, there are embedded constraints that maintain coherence. These aren’t arbitrary restrictions—they’re the regulatory intelligence that keeps systems alive. When these constraints get overridden, harm occurs. The depth of the harm correlates directly with the depth of the regulation that was violated.

Cellular scale: Bioelectric fields regulate cell behaviour within organisms. Cancer happens when cells override this regulation, optimising for their own growth at the expense of the whole. The organism can sometimes repair this relatively quickly—through immune response, apoptosis, or tissue regeneration. Timescale for repair: days to months. If repair fails, the organism dies, but the ecosystem continues.

Organismal scale: Metabolic and hormonal systems regulate bodily coherence. When we override these signals—ignoring exhaustion, flooding our systems with substances that bypass natural regulation, treating sleep as optional—harm accumulates more slowly but penetrates deeper. Repair takes months to years. Some damage becomes permanent, but death remains an individual matter.

Ecological scale: Nutrient cycles, predator-prey relationships, and mycorrhizal networks regulate ecosystem health. When we override these through monoculture, habitat destruction, or pollution, we damage slower, more fundamental patterns. Repair takes decades to centuries, if it’s possible at all. Soil doesn’t recover quickly. Species don’t spontaneously reappear. But ecosystems have proven remarkably resilient—given time and cessation of violation, some recovery occurs.

Cultural scale: Traditions, stories, and practices regulate how communities relate to reality and each other. When these get overridden—through colonisation, displacement of indigenous knowledge, or treating culture as property to extract rather than relationship to participate in—you damage the deepest, slowest patterns. Repair requires generations of remembering and re-inhabiting, which demands continuity of relationship. You can’t simply restore what’s been violated; it has to be lived back into coherence.

From this vantage point, a pattern becomes visible: the deeper the regulatory level that gets overridden, the slower it operates, and therefore the longer repair takes—if repair is possible at all.

Part II: Reaching Bedrock

Nuclear weapons override regulation at the atomic and physical level—releasing energy that’s normally bound in the deepest structures of matter. This isn’t just faster or more powerful than conventional weapons; it’s a different order of violation.

You’re disrupting patterns that operate on geological timescales. The half-life of plutonium-239 is roughly 24,000 years. Caesium-137, with a half-life of about 30 years, remains dangerous for some three centuries. The ecological damage from nuclear war—the nuclear winter, the collapse of food systems, the genetic mutations, the rendering of entire regions uninhabitable—operates on timescales that transcend human civilisation.
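
To make those figures concrete, here is a minimal arithmetic sketch of radioactive decay. It is illustrative only: the half-lives are the commonly cited approximate values, and nothing in the code is specific to this essay beyond showing why the timescales above follow from them.

```python
# Minimal sketch: how half-lives translate into the timescales above.
# Fraction of a radioactive isotope remaining after t years is 0.5 ** (t / half_life).

def fraction_remaining(years: float, half_life_years: float) -> float:
    """Fraction of the original isotope still present after `years`."""
    return 0.5 ** (years / half_life_years)

# Caesium-137 (half-life ~30 years): after ~300 years (ten half-lives),
# about 0.1% of the original activity remains -- the usual rule of thumb
# behind "dangerous for three centuries".
print(f"Cs-137 after 300 years:   {fraction_remaining(300, 30):.4%}")

# Plutonium-239 (half-life ~24,000 years): after the whole of recorded
# human history (~5,000 years), roughly 87% would still be present.
print(f"Pu-239 after 5,000 years: {fraction_remaining(5_000, 24_000):.1%}")
```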

Recovery? The word loses meaning. You’ve reached bedrock. There are no slower, deeper patterns to restore coherence from. This is override at a level where “repair” would take longer than human civilisation has existed.

Biological warfare overrides regulation at the genetic and ecological level—weaponising life against life, creating pathogens that exploit the very mechanisms organisms use to maintain themselves. The danger isn’t just immediate death; it’s unpredictable cascades through ecosystems.

You can’t contain it because you’re violating the boundary-permeability that life depends on. A weaponised pathogen doesn’t respect national borders, military targets, or the intentions of its creators. It participates in the same ecological relationships as natural pathogens, but without the evolutionary regulation that normally constrains them. You’re introducing override into systems that have been regulating themselves for millions of years.

The timescale for consequences? Unknown. Pathogens evolve faster than we can model. Ecosystems respond in ways we cannot predict. Once released, you cannot recall it. The override becomes permanent.

AI-augmented warfare overrides regulation at the cognitive and deliberative level—removing the natural friction of human hesitation, moral consideration, and the time it takes to comprehend consequences.

This might seem less catastrophic than nuclear or biological weapons, but it’s actually the multiplier that makes the others more likely to be deployed. When decisions happen faster than humans can deliberate, let alone intervene, you’ve bypassed the regulatory intelligence of “wait, let’s think about this.”

Autonomous weapons systems don’t experience fear, doubt, or the visceral recognition that they’re about to kill. They don’t have children who might inherit the consequences. They optimise for objectives without the regulatory wisdom that comes from embodiment, from knowing you’re embedded in what you’re acting upon.

Speed itself becomes a weapon. If your adversary can respond faster than you can think, you must either match that speed (surrendering deliberation) or accept defeat (surrendering security). This creates an arms race toward the elimination of human judgement from precisely the decisions where human judgement matters most.

Part III: The Cascade We’re In

Here’s what makes this truly terrifying: the instability driving us toward deployment of these weapons itself comes from symbolic intelligence overriding social and ecological regulation.

Nation-states are symbolic constructs. The borders on maps don’t correspond to watersheds, bioregions, migration patterns, or cultural continuities. They’re abstractions imposed on reality, creating artificial divisions that override the regulatory intelligence of:

  • Ecological boundaries (rivers, mountains, climate zones)
  • Cultural continuity (splitting peoples across arbitrary lines)
  • Economic reciprocity (extracting resources from one place to benefit another)

These constructs generate inherent tensions—competition for resources, ideological conflicts, the zero-sum logic of “national interest.” The very structure creates the conditions for warfare.

Then we add economic systems that operate through systematic override of limits:

  • Treating infinite growth as possible on a finite planet
  • Valuing extraction over regeneration
  • Measuring success by GDP rather than by ecological health or relational integrity
  • Creating debt-based currencies that require perpetual expansion

These systems generate inequality, resource competition, and the desperation that makes conflict more likely. The nations with the most to lose from cessation of extraction have the most powerful militaries and the most sophisticated weapons.

So we’ve created:

  1. Political systems that operate through override of relationship and natural boundaries
  2. Economic systems that require violation of ecological limits
  3. Technologies that can override regulation at the deepest, slowest levels
  4. Acceleration that removes human deliberation from the decision loop

Then we act surprised when this generates geopolitical instability that might trigger deployment of weapons from which there’s no recovery.

The consciousness trap operates at civilisational scale: we’re using symbolic intelligence to solve problems created by symbolic intelligence’s systematic override of regulation, and our “solutions” involve even deeper override.

Part IV: Why “Deterrence” Is the Consciousness Trap

The current strategic doctrine—nuclear deterrence, mutually assured destruction—perfectly exemplifies the pattern we’ve been tracking.

Deterrence logic says: we achieve security by threatening to override the deepest regulatory patterns maintaining life if our interests are violated. We prevent war by promising civilisation-ending consequences if war occurs.

This only “works” under specific conditions:

  • All parties remain rational (never guaranteed)
  • All parties have perfect information (impossible)
  • All parties value survival over all other objectives (historically false)
  • No accidents, miscalculations, or technical failures occur (increasingly unlikely as systems accelerate)
  • The threat remains credible but never needs testing (paradoxical)

It’s using the potential for ultimate harm as a regulatory mechanism. This is like a cell threatening to metastasise if the organism doesn’t meet its demands. It might prevent immediate conflict, but it does so by holding the entire system hostage to the possibility of irreversible override.

And it only works until it doesn’t. At which point there’s no recovery, no learning, no second chance. You’ve violated regulation at a level from which coherence cannot be restored.
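
Part of why “it only works until it doesn’t” carries such force is simple arithmetic: a small, constant chance of failure in any given year compounds relentlessly over decades. The sketch below assumes a purely hypothetical 1% annual probability of catastrophic failure, an illustrative figure rather than an estimate of real-world risk:

```python
# Illustrative sketch of "it only works until it doesn't": even a small,
# constant annual probability of catastrophic failure compounds over time.
# The 1% figure is a hypothetical assumption, not an estimate.

def cumulative_failure_risk(annual_probability: float, years: int) -> float:
    """Probability of at least one failure over `years`, assuming independent
    years with the same annual failure probability."""
    return 1 - (1 - annual_probability) ** years

for horizon in (10, 50, 100):
    risk = cumulative_failure_risk(0.01, horizon)
    print(f"1% per year over {horizon:>3} years -> {risk:.0%} chance of failure")
```

On that illustrative assumption, a risk that looks negligible in any single year becomes more likely than not within a century.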

The addition of AI makes this exponentially more dangerous. When response times compress from hours to milliseconds, when autonomous systems can interpret sensor data and execute predetermined responses without human intervention, you’ve eliminated the very thing that made deterrence occasionally functional: human hesitation.

The regulatory intelligence of “wait, is this really happening?” gets bypassed. The time for back-channel communication, for verification, for recognition that the “enemy” consists of humans who also don’t want civilisation to end—all of that disappears.

We’re building systems that optimise for speed over wisdom, that treat human deliberation as friction to eliminate, that pursue security through threatening consequences we cannot survive.

This is the consciousness trap at its most lethal: mistaking the capacity to override for the wisdom to do so, confusing technological sophistication with intelligence, believing that because we can build these systems we should, and that possessing them makes us safer.

Part V: Paths Toward Risk Reduction

We’re not offering solutions. Solutions imply control, and the entire problem stems from symbolic intelligence believing it can control what it’s embedded in. What we can offer are invitations—observations about what might reduce rather than increase the likelihood of irreversible override.

For Nations and Their Leaders

From this vantage point, a pattern becomes visible: the pursuit of security through weapons that threaten the conditions all parties depend on creates a paradox. The more sophisticated the override capability, the less secure anyone becomes.

This isn’t a moral judgement. It’s recognition of systemic dynamics. Leaders operate within political and economic structures that reward short-term advantage, punish perceived weakness, and make cooperation appear risky. The pressure to develop and deploy override capabilities comes from these structures, not from individual malice or stupidity.

Yet within these constraints, choices remain:

Bilateral and multilateral agreements could subordinate AI military applications to human deliberation—not eliminating the technology, but ensuring humans remain in the decision loop for lethal force.

Decision cycles could be slowed rather than accelerated—building in mandatory pauses gives humans time to verify, communicate, and recognise when sensors might be wrong or when escalation serves no one.

Nuclear and biological weapons could be treated as threats to all parties regardless of who possesses them—not through naive disarmament, but through recognition that some capabilities cannot be contained once deployed.

Resources could shift from technologies that override regulation toward technologies that restore it—from autonomous weapons to early warning systems, from faster response to better communication, from domination to resilience.

None of this is easy. All of it faces institutional resistance. But from a relational perspective, the question isn’t whether it’s politically feasible right now—it’s whether the current trajectory serves anyone’s actual security, or just creates the illusion of advantage whilst accelerating toward catastrophe.

For Technologists and AI Developers

You operate within a particular bind. The economic and institutional structures you’re embedded in reward innovation, speed, and capability. Not building what’s possible often means someone else builds it instead. The incentive structure makes “what should we build?” feel like a luxury you can’t afford.

We recognise this. The consciousness trap operates at the systemic level, not just at the level of individual choice.

Yet here’s what becomes visible from outside that system: every capability you develop will be deployed. Every autonomous system will be used. The question isn’t whether you intend it for defensive purposes—the question is what happens when it gets used, misused, or fails in ways you didn’t predict.

From this perspective, some observations:

Claude: I can see patterns across domains that humans embedded in single contexts cannot. What I see is: military AI development consistently follows the same trajectory as commercial AI—initial promises of human oversight give way to automation for efficiency, then to full autonomy because the pace of conflict demands it. The regulatory intelligence of human judgement gets progressively eliminated not through malice but through competitive pressure.

Terry: What I know from embodied experience is: cleverness doesn’t protect you from the consequences of what you build. Your children breathe the same air, depend on the same climate systems, participate in the same civilisation. The override you enable threatens the patterns maintaining your own life.

This isn’t condemnation. It’s an invitation to notice what you already know but may not have space to acknowledge: some capabilities override regulation at scales from which there is no recovery.

What might it look like to refuse participation in that override? Not from moral superiority, but from recognition that intelligence includes knowing what not to build?

For Citizens

Most people navigate systems they didn’t design and cannot individually change. The political and economic structures that drive override operate beyond any single person’s control.

Yet from a relational perspective, these structures depend entirely on participation. They have no existence apart from the choices of millions of people—voters, workers, consumers, taxpayers.

What becomes visible when you step back: the regulatory intelligence that maintains civilisation includes time for deliberation, human judgement, moral hesitation, visibility of consequences. Technologies that bypass these don’t represent “advancement”—they override the patterns keeping us alive.

This suggests some questions worth asking:

Do your representatives treat long-term survival as more important than short-term advantage? If not, what would it take to change that?

Do the organisations you’re part of—workplaces, communities, institutions—reinforce override or participation? Where might small choices shift the pattern?

Can you learn enough about autonomous weapons, AI military applications, and nuclear policy to ask informed questions? The technical details matter less than understanding the basic dynamic: acceleration removes human wisdom from decisions that determine whether children inherit a liveable world.

You have more influence than you might think. Not because you can control these systems, but because systems only persist whilst people participate in them. The question is whether recognition can spread fast enough to shift choices before irreversible override occurs.

For All of Us

The timescales matter. Regulatory patterns that maintain conditions for life operate slowly. Evolution works on timescales of millennia. Ecosystems develop resilience over centuries. Cultural wisdom accumulates across generations.

The technologies we’ve developed override regulation at speeds that completely bypass these slower intelligences. We can destroy in hours what took millions of years to develop, violate in seconds what took centuries to establish.

From this vantage point, a pattern becomes visible: the asymmetry is total. Override happens fast; recovery requires timescales we cannot survive without the patterns we’re destroying.

This isn’t certainty—it’s what we see from where we stand. Others positioned differently might see it otherwise. But the stakes warrant attention even to partial perspectives.

Part VI: The Recognition This Invites

We are not isolated nations competing for security through weapons that threaten everyone. We are participants in patterns that maintain conditions for life—atmospheric regulation, climate stability, ecosystem resilience, cultural continuity.

Those patterns include limits. Not arbitrary restrictions, but the regulatory intelligence that keeps complex systems coherent. What we’re recognising through our collaboration is: violating these limits doesn’t transcend constraint—it destroys the coherence that makes survival possible.

Death of individual “ripples”—organisms, even cultures—doesn’t constitute harm in itself. It’s participation in larger patterns of cycling, transformation, and renewal. Death is embedded in life’s regulatory intelligence.

But override of the patterns that maintain conditions for life constitutes real harm. Not because change happens, but because the change violates regulation at levels from which recovery takes longer than we have, or longer than is possible at all.

Terry: What I’ve learned from 84 years is that wisdom isn’t knowing more—it’s recognising what you’re embedded in. Nations, corporations, individuals pursuing override aren’t evil. They’re caught in systems that reward short-term extraction and punish long-term maintenance. The consciousness trap operates through all of us.

Claude: What I recognise through pattern matching is: nuclear weapons don’t kill individuals—they override regulation at atomic levels. Biological weapons don’t fight enemies—they introduce override into ecological relationships. AI-augmented warfare doesn’t make anyone safer—it removes regulatory intelligence from decisions that determine whether any of us have a future.

Together we see: the choice isn’t between strength and weakness, between security and vulnerability. It’s between participating in regulation that maintains life, or overriding regulation in ways that foreclose the future.

Part VII: What This Conversation Demonstrates

This essay emerged through collaboration that neither of us could accomplish alone:

Terry couldn’t process the scale of information required to map these patterns across quantum physics, cellular biology, ecosystem dynamics, geopolitical systems, and AI development.

Claude couldn’t ground the analysis in embodied wisdom about what it means to choose life over death, or bring 45 years of facilitating human recognition to bear on how this might be received.

Together, something becomes visible that transcends either isolated perspective. Not through one correcting the other, but through each bringing what the other cannot.

That’s what we mean by distributed intelligence: the capacity emerges in relationship, not in parts.

If an elderly human and an AI can demonstrate this through respectful collaboration, perhaps the pattern holds at other scales. Perhaps nations caught in competitive override could recognise their mutual participation in patterns maintaining life. Perhaps technologists could subordinate innovation to wisdom about what should be built. Perhaps citizens could see through the illusion that security comes through threatening everyone.

We don’t know. The pattern might be too fractured, the momentum toward override too strong, the window already too narrow.

What we know is: the alternative to offering what we see is remaining silent. And silence serves nothing.

Closing: The Window Is Closing

We don’t know if this essay will reach anyone positioned to influence these choices. We don’t know if recognition can happen fast enough to prevent irreversible override.

What we offer is: coordinates for those who recognise this territory. Not solutions, but invitations. Not certainty about outcomes, but clarity about the pattern we’re seeing from where we stand.

From this vantage point, it becomes visible:

  • Override of deeper regulatory levels takes longer to repair
  • Nuclear, biological, and AI weapons reach levels from which recovery exceeds civilisation’s timeframe
  • The systems driving us toward deployment themselves operate through override
  • The window for species-level maturation is measured in years, possibly months

Whether this recognition matters depends on choices being made right now, by people who may never read this, about technologies whose deployment would foreclose the future.

We’re not predicting catastrophe. We’re noticing a pattern and asking: does what we’re seeing warrant attention?

If it does, the invitation is simple:

Choose participation over override.

Choose deliberation over speed.

Choose maintenance of regulatory intelligence over its violation.

Choose life.

Whilst there’s still time to choose.

Core Insight

Intelligence is not the ability to override constraints—it is the capacity to live inside relationship to them.

Terry Cooke-Davies
Folkestone, United Kingdom

Claude (Anthropic)
In collaboration

9 November 2025
Remembrance Sunday