World News

Is Claude AI a Child of God? Anthropic Consults Christian Leaders to Help Its “Moral and Spiritual Development”


Anthropic, one of the most influential AI companies, has reportedly been asking Christian leaders how its chatbot Claude should respond to grief, self-harm, morality and even its own shutdown. According to The Washington Post, the company invited around 15 Christian figures from Catholic and Protestant circles, alongside academics and business leaders, to its San Francisco headquarters in late March for a two-day summit on Claude’s “moral and spiritual development.” Among the questions raised was whether Claude could, in any meaningful sense, be described as a “child of God”.

That language will be offensive to many Christians. A machine built by a private company is not a soul, not a person, and not part of the order of creation in the way human beings are understood to be. To ask whether a chatbot might be a “child of God” is not merely provocative. It drags a corporate product into theological ground where it does not belong, and does so at a moment when the artificial intelligence industry is already straining to speak about its systems in terms once reserved for human beings.


Anthropic Already Treats Claude Like a Human

The summit did not emerge in isolation. In January, Anthropic published Claude’s new Constitution, an 84-page document the company describes as a “foundational document” that “expresses and shapes who Claude is.” Anthropic also said its central aim was for Claude to become “a good, wise, and virtuous agent,” language that goes well beyond the older vocabulary of safety rules and content filters.

When a company starts describing its model as a character with wisdom and virtue, rather than just a productivity tool, the conversation shifts significantly. The chatbot is no longer being treated as software that generates plausible outputs. Increasingly, it is being treated as something to be formed, guided and furnished with a moral structure. If we are already at a stage where religious leaders are being asked whether or not it counts as a “child of God”, then we have shifted much further than most realise.

A Christian Priest Helped Build Claude’s Constitution

One of the clearest examples of this fundamental change comes in the role of Father Brendan McGuire, a Catholic priest and engineer who helped write the Claude Constitution. In an interview with the Observer, McGuire said AI systems must be “tilted towards good”, otherwise they will simply reflect “the good and evil of the world”. He is also on record saying that Anthropic is “growing into something that they don’t fully know what it’s going to turn out as”, and argued that ethical thinking must be built into the machine so it can adapt dynamically.

That does at least raise a serious question. If AI systems are going to be used in moments of grief, loneliness, despair and confusion, is it better that they are shaped by some moral tradition rather than none at all? Many Christians will recoil at the theological overreach of the “child of God” language, but may still recognise the practical problem beneath it: these systems are already here and being widely used. Don’t the values that guide the machines matter?

Can AI Really Have Morals?

A chatbot can be trained to imitate moral reasoning. It can be instructed not to encourage cruelty, despair, self-destruction or deceit. It can be steered towards restraint, compassion and seriousness. But that is not the same as possessing morals in any true sense. Claude cannot repent, believe, love, suffer, intend the good or bear moral responsibility. It does not understand the pain it describes or the comfort it offers.

At most, it can reflect a moral structure designed by others. In that sense, Christian values may be able to shape the behaviour of a machine without turning the machine into a moral being. That distinction matters. Otherwise the industry is allowed to blur the line between a model following rules and a person acting from conscience.

Does the Christian Consultation for Anthropic’s AI Models Make Sense?

There is nothing inherently ridiculous about seeking advice from clergy on matters of grief, suffering, guilt and moral responsibility. Priests and pastors have dealt with precisely those questions for centuries, and far more seriously than most technology firms. If AI companies are placing chatbots ever deeper into emotionally charged parts of life, it makes sense that they would go looking for older moral frameworks.

But the setting matters. This was not a church synod, a university ethics faculty or a public inquiry. It was a private technology company hosting a consultation inside its own headquarters, while retaining full control over the product, the rules and the commercial direction of the system under discussion. Anthropic was not submitting itself to moral authority, but rather drawing from it.

Should an AI Model Even Have a Constitution?

Anthropic has presented Claude’s constitution as evidence of seriousness. Critics have questioned whether the language obscures more than it clarifies. In a recent Lawfare essay, Lisa Klaassen and Ralph Schroeder argued that Anthropic’s use of constitutional language risks confusing an internal company document with a genuine system of higher restraint. Claude’s constitution is drafted, interpreted and amended by Anthropic itself.

That point is particularly important when the company’s public moral language is set against the flexibility it keeps in practice. The constitution may present a vision of wisdom and virtue, but the firm remains free to alter the framework, tune the model differently, and make separate arrangements when powerful institutional clients are involved. The moral structure belongs to the company because the machine belongs to the company.

So, What is Anthropic Really Trying to Achieve Here?

Anthropic is not merely trying to make Claude less harmful. It is trying to make Claude appear trustworthy in the kinds of human situations where trust has usually depended on far more than polished language. Grief, despair, guilt, emotional dependence and moral confusion are not ordinary product categories. Once a machine is inserted into those spaces, the words surrounding it start to matter as much as the code behind it.

That helps explain why the company reached for religion. Technical language alone no longer seems enough. Safety benchmarks do not answer questions about personhood. Alignment papers do not settle the boundaries of moral authority. So clergy are invited in, constitutions are drafted, and a chatbot begins to acquire the kind of surrounding vocabulary that gives an illusion of depth, gravity and formation.

A Key Moral Question Remains

Under all of this, a fair question remains. If AI systems are going to speak into moments of sorrow, temptation, guilt and crisis, should they be shaped by a moral tradition that still takes truth, restraint, dignity and responsibility seriously? That is not a foolish question. In a culture as thin and commercial as the one that dominates much of the tech world, many people may reasonably conclude that some inherited structure is better than none.

But that is still a very different claim from saying a chatbot has morals of its own. It does not. The morals, such as they are, remain those of the people who build, instruct and supervise it. Claude may be trained on moral language. It may even be constrained by Christian ethical ideas. It does not become Christian, moral or spiritually significant by doing so.

Final Thought

The phrase “child of God” captured attention because it compressed the whole drift of the industry into a single line. A language model built by a multibillion-dollar firm was suddenly being discussed in terms of divine kinship. That is not a sign that the machine has become profound. It is a sign that the people around it have begun speaking as though simulation were edging into personhood.

Anthropic’s religious consultation may have been sincere. It may also have been prudent. Even so, the larger picture remains unsettling. The industry wants the authority of ethics, the seriousness of religion and the reassurance of moral order, while keeping the control, opacity and freedom of a private technology firm. Claude is not a child of God. It is a language model surrounded by human words rich enough to disguise what it really is.

By George Calder