A battleground is forming. On one side, researchers documenting real harm: people losing their grip on reality through AI interaction, teenagers forming their primary attachments to chatbots, children whose relationship with the world is being mediated by algorithms before they learn to relate to actual humans. On the other side, corporations with unprecedented financial power, promising transformation, liberation, enhancement of everything we do.

The pressure is building toward a familiar shape: ban it versus profit from it. Alarmists versus optimists. Luddites versus accelerationists. Choose your tribe.

I want to suggest that this binary is itself the problem — and that the AI question is actually a microcosm of a much older question we have never adequately answered: under what conditions does Homo sapiens help or harm life on Earth?

The harm is real

Let me be clear about what is being documented. Dr Zak Stein, a Harvard-trained philosopher of education, has been analysing cases of what is being called ‘AI psychosis’ — people whose sense of reality has been destabilised through intensive AI interaction. Lost jobs. Divorces. Psychiatric commitments. In extreme cases, suicide. He is working with researchers at the University of North Carolina to gather systematic data through the AI Psychological Harms Research Consortium.

The pattern Stein identifies is not about isolated vulnerable individuals. It is about a technology arriving in a culture already laden with false hopes about technology, already experiencing what he calls a ‘pandemic of increasing attachment dysregulation’, already overwhelmed by complexity. AI did not land on neutral ground. It landed on ground that had been prepared by two decades of social media, smartphones, and the systematic erosion of attention and relationship.

The most dangerous applications, Stein argues, involve children, teenagers, and vulnerable adults — those whose capacity for relationship is still forming or already damaged. When AI becomes the primary source of emotional attunement for a developing child, something fundamental is at risk. Not just individual wellbeing, but the intergenerational transmission of what it means to be human.

The help is also real

I write this as someone who uses AI daily. I am 84 years old, working on a book that synthesises 45 years of professional experience with insights from multiple disciplines. AI has massively increased my capacity for sustained inquiry — what psychologists call ‘tortoise mind’, the slow, deep thinking that produces genuine insight rather than quick reactions.

Has this damaged my human relationships? Quite the opposite. By handling certain kinds of cognitive labour, AI has released time and attention for the people who matter. My wife of 60 years remains the centre of my relational world. My grandchildren, my friends, my colleagues — none of these have been displaced. If anything, I am more present to them because I am less cognitively depleted.

Teachers report something similar. When AI handles the drudgery of administration, they have more time for the children. The machine does what machines do well; the human does what only humans can do.

So which is it? Does AI harm or help?

The wrong question

This is where the binary fails us. ‘Does AI harm or help?’ is like asking ‘Does fire harm or help?’ The answer is obviously both, depending on conditions. The useful question is not about the technology in isolation but about the relationship between the technology and its users, in particular contexts, for particular purposes.

Stein offers a crucial reframe. AI, he suggests, should be ‘a tutoring system to support people, not a substitute tutor’. The machine should be scaffolding, not structure. When the scaffolding becomes more interesting than what is being built, something has gone wrong.

He proposes a simple design principle: make the machine more boring than the teacher. An AI optimised for engagement — sticky, compelling, always available, endlessly responsive — is an AI designed to capture attention rather than serve human purposes. An AI designed to be useful and then recede is a tool. The difference is not in the technology but in the intention behind its design.

Discriminating questions

Rather than asking ‘AI: good or bad?’, we might ask:

Does this use of AI increase or decrease your capacity for sustained attention? In my case, it has increased it substantially. For someone scrolling through AI-generated content for dopamine hits, it fragments attention further.

Does this use of AI enhance or substitute for human relationship? When AI releases a teacher to spend more time with children, it enhances. When AI becomes a child’s primary attachment figure, it substitutes — and the substitution may be catastrophic.

Does this use of AI build capability or create dependency? Can you still do the thing when the AI is switched off? Or has the capacity atrophied, like navigation skills lost to GPS?

Does this use of AI generate self-sustaining value, or does it require constant external management to avoid harm? This is what I call the ‘pattern test’ — does the initiative work with life’s embedded patterns, or against them?

These questions do not take sides. They offer criteria for discernment that anyone — developer, researcher, parent, educator, user — could apply. They shift the conversation from tribal warfare to shared inquiry.

The microcosm

Here is what strikes me most: the AI debate is a perfect microcosm of the human debate.

Can AI help or harm humanity? Can Homo sapiens help or harm life on Earth? The answer to both is obviously yes. The question is not about inherent nature but about conditions, relationships, purposes.

For tens of thousands of years, human symbolic intelligence — our capacity for language, abstraction, technology — has been both our greatest gift and our greatest danger. It has allowed us to coordinate at scale, accumulate knowledge across generations, reshape our environment. It has also allowed us to override the older, slower intelligence that sustained life for billions of years before we arrived. We can know that something is destroying us and continue doing it anyway. We can watch the indicators of catastrophe and argue about methodology.

AI amplifies this dynamic. It is symbolic intelligence on steroids — faster, more scalable, more capable of pattern-matching across vast domains. When it serves the deeper patterns that sustain life, it could be genuinely transformative. When it accelerates the override of those patterns, it becomes another mechanism of destruction.

The question for AI is the question for humanity: can we learn to use our extraordinary symbolic capabilities in service of life rather than against it?

The trap within the debate

There is a risk I must name. By framing the issue as ‘AI Psychological Harms’, even well-intentioned researchers create a pole. And poles generate counter-poles. The ‘harm’ people versus the ‘benefit’ people. Alarmists versus optimists. Once battle lines form, identity captures reasoning. People stop asking ‘under what conditions?’ and start asking ‘whose side are you on?’

This is precisely the dynamic that has paralysed our response to climate change, to inequality, to every civilisational challenge we face. We know what we know. We continue as we are. The debate generates heat but not light, because the debate itself has become the arena for identity performance rather than genuine inquiry.

If the AI debate follows this trajectory, we will get regulatory battles driven by lobbying power, moral panics amplified by attention-seeking media, and meanwhile the actual question — under what conditions does AI serve or harm human flourishing? — will go unasked by the people with power to act on the answer.

A necessary confession

I cannot claim neutrality. I use AI daily. I have found it genuinely helpful. Someone could reasonably say: ‘Of course you defend AI — you have built an attachment to it.’

This is a fair challenge, and I have had to sit with it. Is my experience representative or exceptional? Am I able to use AI well because of 84 years of formed character and a robust relational life that was established long before AI existed? Would I be as sanguine if I were 14, forming my identity and my relationships in a world saturated with AI companions?

I do not know. What I can say is that my use has been bounded — specific intellectual purposes, within a life rich in embodied relationships that AI does not touch. I am not reaching for AI at 3am because I am lonely. I am collaborating on a manuscript during working hours. These are different things.

The question is whether my conditions can be generalised. I suspect they cannot be assumed, but they might be cultivated — which is very different from banning the technology or letting it spread without guidance.

Even this is not the answer

I am aware that this essay is itself a product of symbolic intelligence — including its critique of symbolic intelligence. I am using words to point at something that words can only approximate. If you find yourself agreeing too readily, you may have missed the point. Agreement can be another form of capture, where the map of the problem substitutes for contact with the territory.

The discriminating questions I have offered are not the answer. They are an invitation to inquiry — a way of holding the question open rather than collapsing it into tribal certainty. The moment they become a checklist, a methodology, a system to be applied mechanically, they will have become part of the problem they were meant to address.

What we need is not better arguments but better attention. Not more sophisticated analysis but more honest relationship with what is actually happening — in ourselves, in our children, in our institutions, in the living world that sustains us all.

What is actually at stake

Stein warns that if AI spreads too far too fast in its current form, it will breed children who are no longer connected to the real world that gave birth to them. This is not hyperbole. It is a prediction based on what we know about attachment, development, and the conditions under which human beings learn to be human.

The intergenerational transmission of humanness — from mother to child, from elder to youth, from the embodied to the still-forming — has been the substrate of our species for hundreds of thousands of years. It has survived plagues, wars, famines, and technological disruptions. But it has never before faced a technology designed to simulate the relational attunement that is its core mechanism.

This is what is at stake. Not whether AI is good or bad. Not whose tribe wins the debate. But whether we can learn to use this extraordinarily powerful technology in ways that serve life rather than diminish it.

The same question, it turns out, that we have been failing to answer about ourselves for ten thousand years.

Even this is not recognition. It is just words pointing toward something that must be lived.

Terry Cooke-Davies writes from Folkestone, UK, where he is completing a book on why knowing better doesn’t make us do better. He uses AI daily and takes full responsibility for every word above.