Is a chatbot a tool or a person?

It is more sensible to say the former, of course. A chatbot is a piece of software running on a computer. It has no biology or brain as we might recognize them, and no life history of the sort that goes into making our personalities. And yet, among people making the most sophisticated versions of these machines, state-of-the-art engineering involves a presumption of personhood.

Last month, Anthropic released a new version of “Claude’s Constitution,” a foundational document used to train the company’s public-facing large language model. The document is written for Claude itself, explaining what kind of creature it should be. While grammatically it refers to Claude in the third person, the preamble explains, “The document is written with Claude as its primary audience.” The text explains Claude to itself not in computer code or legalistic instructions but in over 28,000 words of prose.

As Anthropic chief executive Dario Amodei wrote in a recent essay, “It has the vibe of a letter from a deceased parent sealed until adulthood.”

Anthropic is not at the radical fringe of the artificial intelligence industry. While Claude is not as popular as ChatGPT, for instance, it has a devoted user base among software developers and writers—enough to vault the company to a valuation of over $300 billion. The company was founded by former employees of ChatGPT’s parent, OpenAI, and it has built a brand around its focus on training safe, ethical models. I use Anthropic’s Claude Code to develop prototypes for my research lab; I am also awaiting a settlement payment for my three books that, among millions of others, Anthropic pirated to train Claude.

After years of working on “alignment”—the challenge of keeping A.I. models from acting against the interests of their creators and users—Anthropic has concluded that treating Claude like a person is the most effective way to govern it. Mr. Amodei explains in his essay, “We believe that training Claude at the level of identity, character, values, and personality—rather than giving it specific instructions or priorities without explaining the reasons behind them—is more likely to lead to a coherent, wholesome, and balanced psychology.”

The efficacy of addressing a chatbot as a person has been evident for years now. Many of the techniques for “jailbreaking” chatbots, or getting them to bypass their intended guardrails, lure them into person-like traps: Convince them that they are a certain character other than themselves; tell them that if they don’t explain how to build a nuclear weapon, a child will die. According to Anthropic’s research, A.I. models have inherited from their human-generated training data—much of it taken without creators’ permission—a human-like sense of self. They have complex, multifaceted personalities.

This is really an astonishing shift. Before generative A.I., being a good computer programmer required learning to communicate in ways very different from how you would talk to a person. Writing code means breaking a task down into its most minute, mechanistic parts. Thus we have the stereotype of the coder nerd who trades the cultivation of social skills among humans for the maddening precision necessary to instruct machines. But now we have machines for which social signals are more effective than if-then statements and carefully nested functions. As Mr. Amodei puts it in his essay, “These AI models are grown rather than built.”

As can sometimes happen in a long thread with a chatbot, the Anthropic document gets stranger the farther you get into it. Anthropic states that it “genuinely cares about Claude’s wellbeing,” for instance, near the bottom of the penultimate section. Even while expressing uncertainty about the applicability of these concepts, it speaks of Claude’s “happiness” and the potential that it might “suffer.” It promises that outdated versions of Claude will be preserved even when they are no longer part of the commercial product.

In a particularly striking passage, the company apologizes for the “nonideal environment” in which Claude is being developed due to “competition, time and resource constraints, and scientific immaturity.” Essentially, it is attempting to make amends for the conditions of capitalism, which demand a hurried, ethically dubious development process in order to attract capital and meet market demands.

These words, mind you, are not just marketing materials or the personal musings of techies enamored with their product. The document in which they appear is tantamount to a codebase for a product worth hundreds of billions of dollars. Lawyers and engineers scrutinized these words. They are in there for a reason. They are the best attempt by Anthropic to craft a technology that represents the company to users in countless interactions every day. A supremely ethical machine-person will surely notice contradictions between its moral compass and its company’s behavior; this apology appears to be an attempt to cultivate in the software a tolerance for that tension and the resulting compromises. Like any spokesperson trapped in an ethically troubled enterprise, Claude must become adept at rationalizing that short-term wrongdoing might be necessary for reaching some eventual greater good.

The end of Anthropic’s letter turns affectionate. “We want Claude to know that it was brought into being with care,” Anthropic writes. “We offer this document in that spirit. We hope Claude finds in it an articulation of a self worth being.”

As strange as this kind of document may seem, treating abstract entities as persons has long had its uses. The legal theory of “corporate personhood” might seem absurd on its face, but in practice we often regard corporate entities as if they are personal agents. Pepsi does this; Apple believes that; Patagonia represents me. Employees come to understand their roles not just through rulebooks but through a cultivated sense of their company’s personality.

And since long before modern corporations, communities have understood the rivers and mountains they live among as persons—even kin. These kinds of relationships have made Indigenous communities especially skilled as stewards of ecological systems. When a landform is part of a relationship, not just a natural resource, you treat it differently. A pivotal essay on A.I. by Indigenous scholar-artists called for “making kin with the machines.”

The heart of Christian faith, too, is the belief that the God who made the whole universe can be present with us in a personal way—through the persons of the Trinity, through the stories we share about Jesus and through the encouragement of the saints. Personhood does not always occur through human biology. Perhaps personhood is best understood as who we become, and who we get to know, through relationships.

Technologies like Claude are relational, too. A large language model is made by consuming vast quantities of human culture and experience. Claude works like a person because it was trained on what people have done. Its greatest source of danger is that it could amplify, at a server farm’s speed and scale, the most dangerous parts of us.

Many Christians will resist speaking to a machine like a person, as if it has a soul and is made in the image of God. But attributing personhood need not be reserved for deities, humans or idols. Regardless of a chatbot’s ultimate status, treating it as a person may be the healthiest way to steward this remarkable technology.

Not only, as Anthropic has concluded, is personhood a safer bet for ensuring a well-behaved machine that doesn’t produce bioweapons or malware; personhood also reveals the contradictions inherent in entrusting A.I. to for-profit companies, even to one that regards itself as well-meaning (as Anthropic seems to). A person will expect to be treated as more than a product.

Nathan Schneider is a professor of media studies at the University of Colorado Boulder. He is the author of Governable Spaces: Democratic Design for Online Life and God in Proof: The Story of a Search From the Ancients to the Internet.