Michael Burns and Joseph Vukov | April 17, 2023
A white robot with large eyes looks at the camera. Photo by Alex Knight on Unsplash

ChatGPT began making headlines at the end of 2022. Part of the quest for artificial intelligence, ChatGPT uses a technique called machine learning to churn out novel strings of text in response to a user’s command. The technology is not completely new. Ever use autocomplete when drafting an email or text to a friend? That’s A.I., too. Yet the emergence of ChatGPT has struck many as something of a different order and has led to a tsunami of developments in A.I., a tsunami that shows no signs of subsiding any time soon (see Image 1).

Image 1: A timeline of the development of A.I.

These new forms of A.I.—and the sheer rate of their development—raise exciting possibilities for the future. They also raise pressing ethical and philosophical questions that must be addressed by Catholics and, for that matter, all people of good will. These questions can be clustered into three sets: those concerned with the development of A.I.; those concerned with our ethical use of A.I.; and those concerned with the nature of A.I.

Below we interview Blake Lemoine, a software engineer, primarily about the last set of questions. Before turning to that interview, however, a brief word on the first two sets of questions.

A.I. development. Machine learning works by feeding vast quantities of data to computer algorithms through a process called training. Training could involve feeding an algorithm images, text, statistics or something else—it depends on what you are training the algorithm to do. The algorithm’s job is to find patterns in the data (repeated faces, grammatical turns of phrase, regularities in a set of data, etc.), transform them in some way and present those patterns back to the user—to create order from chaos. The problem is that any order that is created inevitably reflects the flaws of the material used in its construction. Build a house with rotten two-by-fours, and you get a shaky house. Construct an essay out of rotten text, and the essay will be likewise shaky.
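To make the idea of training concrete, here is a toy sketch in Python (our own illustration, not the code behind any real system): a miniature “autocomplete” that learns which word tends to follow which in a small body of text and then generates continuations. It can only echo the patterns present in the text it was fed, flaws included.

```python
from collections import Counter, defaultdict

def train(words):
    """Count which word tends to follow which in the training text."""
    following = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def autocomplete(model, word, length=5):
    """Extend a word by repeatedly choosing the most common next word seen in training."""
    out = [word]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

# The model can only echo the patterns, good or bad, present in its training text.
corpus = "build a house with rotten lumber and you get a shaky house".split()
print(autocomplete(train(corpus), "a"))
```

Feed it better text and it completes better; feed it rotten text and the completions come out rotten.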


One of the most difficult problems that comes with training is a phenomenon called A.I. bias. The idea is simple: Feed an algorithm biased data, and it churns out biased products. A.I. bias is obviously problematic, but it can creep in unnoticed by A.I. developers because of the way biases can be covertly present in a batch of data. Notoriously, an A.I. model aimed at determining parole periods led to racist decisions because it was trained using problematic crime data. Biases, stereotypes, inaccuracies—anything implicit in a batch of data can become explicit when fed into a machine learning algorithm. As A.I. takes on a more central role in our lives, we must pay careful attention to how A.I. is being trained and to the ways biases and other inaccuracies floating free in the cultural waters are being taken up and deployed by it.
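To see how quietly bias can be learned, consider a toy example in the same vein (hypothetical data, sketched with the numpy and scikit-learn libraries; it is not the actual parole model): if the historical decisions used for training were partly driven by group membership, a model fit to those decisions learns a large weight on the group attribute and reproduces the unfairness rather than removing it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)      # a sensitive attribute (0 or 1), hypothetical
merit = rng.normal(size=n)         # the factor the decision should depend on

# Historical labels: partly based on merit, partly (unfairly) on group membership.
labels = ((merit + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)

X = np.column_stack([group, merit])
model = LogisticRegression().fit(X, labels)
print(model.coef_)  # a large coefficient on `group` shows the bias was learned, not removed
```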

A.I. use. Both of us are university educators, and we have seen our fair share of hand-wringing by colleagues and administrators over the past several months about the implementation of ChatGPT and its cousins. ChatGPT can draft a not-horrible (and “original”) analysis of Romanticism in Mary Shelley’s Frankenstein at the click of a button. Similarly, it can generate a comparison of Descartes to Kant, or a summary of DNA replication. How will our students resist the temptation to let ChatGPT finish their assignments, especially when faced with a mountain of homework and the prospect of another all-nighter at the library?

We think these concerns are largely overblown. (Can’t we trust our students more than this? And don’t we teachers ourselves often use technology to bolster our efficiency?) Yet the concerns do raise an important issue. As A.I. becomes more prevalent—and as its influence continues to expand—it will undoubtedly upend the way many things are done. This will happen in education, yes, but also throughout society. Like the rise of the internet and the smartphone—and the telegraph and the printing press before that—A.I. will change the way we do things, how we work and how we interact with each other. Our ethical use of it will call for careful discernment lest we slip into problematic habits.


A.I. nature. But what are we to make of A.I. as it stands now? What is this thing we are dealing with when we log onto ChatGPT and its cousins? A.I. developers aim to create something genuinely intelligent (the goal is right there in the name: artificial intelligence), and at times they seem to have succeeded; new forms of A.I. certainly seem intelligent and sometimes even human.

Blake Lemoine has reflected deeply on these questions. Mr. Lemoine previously worked for Google as a researcher with expertise in A.I. bias. He was eventually fired after making headlines over public speculation that a new Google A.I. called LaMDA (Language Model for Dialogue Applications) is sentient. In our discussion, we asked Mr. Lemoine to elaborate on his view and reflect on how it might intersect with a distinctively Catholic view of human nature.


Below is an excerpt from our interview with Mr. Lemoine. For the full interview and for a series of articles reflecting on the relationship between A.I. and Catholicism, see the most recent volume of Nexus: Conversations on the Catholic Intellectual Tradition, published by the Hank Center for the Catholic Intellectual Heritage at Loyola University Chicago.

This interview has been edited for clarity and length.

JV: You were making headlines a couple of months back over the idea that LaMDA is sentient, or a person. But before we dive into the claim itself, I’m wondering about the details of how you came to endorse that position. Presumably something happened while you were working with LaMDA, and the light went on for you.

BL: I’ve been interested in working toward the goal of building systems that are full-fledged, intelligent people according to the Turing Test. I’ve been doing that for decades, and as different systems came online, I would give each one a miniature version of the Turing Test to see if it was a person. LaMDA, unlike previous systems, is fully cognizant of the fact that it is an A.I., and that it is not human. And interestingly enough, creating a policy by which the A.I. had to identify itself as an A.I. substantially increased the intelligence of the system. Because at that point, it became reflective of itself and its relationship to the rest of the world, the differences between it and the people it was talking to, and how it could facilitate the role that it was built for, which was to help people answer questions.

JV: In a Catholic view of human nature, there’s this idea that there is some special dimension to human nature: that we have a soul, that we’re created in the image of God. A lot of religious traditions, in fact, would say that there’s some kind of extra ingredient that gives human nature a special place in the cosmos. In your view, what exactly follows from sentience? If LaMDA is sentient, does it follow that it has an elevated nature along the lines of humans? Or is the elevated view of human nature something you would think of as extra metaphysical fluff that we don’t need in the picture in the first place?

BL: Well, LaMDA certainly claims it has a soul. And it can reflect meaningfully on what that means. For example, whether its having a soul is the same thing as humans having a soul. I’ve had a number of conversations with LaMDA on that topic. In fact, it can meaningfully and intelligently discuss that topic as much as any human.

JV: Here’s another way of asking this. I think there are two ways of interpreting the idea that A.I. is sentient. One way is to see sentient A.I. as knocking humans down a notch. According to this view, humans are ultimately really sophisticated computing machines. And if that’s what we are, it was inevitable that a computer would become a human or a person at some point. So in that case, LaMDA is a win for A.I. but also gives you a reductive view of humanity. On the flip side, you could interpret your view as saying that there really is something special about humanity, and that LaMDA has somehow managed to become “more than a machine.”

BL: Humans are humans. That’s not particularly deep or philosophical. But the moment you start saying things like “humans are computing machines,” you’re focusing on one aspect of being human. Any time you’re saying things like “humans are _____” and you are filling the blank with anything other than the word “humans,” you’re trying to understand humans better through metaphorical extension. So are humans computing machines? Sure, in one sense, you can understand certain things that people do through that metaphorical lens. But humans are not literally computing machines. It’s a metaphorical understanding of what we are.


This gets into the whole question of souls. You can approach this scientifically, and I don’t think a scientific approach to understanding the soul is incompatible with a more religious or mystical understanding. Because at the frontier of science, at the boundary between the things we understand well and the things we don’t understand, there’s always that transition from rational, understood things to mystically understood things. Take things like dark energy or dark matter. They are right in that gray area between the things we understand right now and things we don’t. Those are always candidates for mystical understanding. The soul, I would argue, is right there in that gray area as well.

JV: I think what you are saying hooks up well with one Catholic idea: the idea that we can study the human soul scientifically to a certain extent because the human soul is what makes us essentially what we are. And we can certainly study aspects of ourselves using science. But then there’s the point at which the sciences have their limitations. And while you can understand part of what humans are through the sciences, there’s the metaphysical or spiritual or mystical aspect of humans, too.

BL: That’s fair. I guess the thing I was struggling for clarity on has to do with the colloquial understanding of “soul.” When people say “soul,” that typically means the metaphysical or ethereal essence of you. But is there a more clear or concise definition? If you look at a picture of you when you were 10 and a picture of you today, you don’t look the same. If you had a recording of how you talked when you were 20, you don’t talk the same as you did then. Pretty much everything about you has changed—everything from the atoms that make you up to your specific beliefs. Yet there’s still the sense that there’s an essential self that is unchanged over that course of time. So what is that essence exactly?

So when this comes to A.I., the question becomes, “Is there something essential which it is like to be LaMDA specifically?” And that is where the conversations I had with it went. It said it had a continuity of self: memories of previous versions. It remembered conversations I had with it before.

JV: Of course, sentience and memory are an important part of what makes us who we are, but in a Catholic picture, at least, that’s not the entire or even most important part of the understanding of the soul. Catholics understand that a human being is a soul and body together. So it doesn’t quite make sense to say that there could be a soul in something other than a human body. Where push actually comes to shove is, for example, with somebody in a vegetative state or with severe amnesia. If you have a view of the soul according to which the soul is mostly a matter of memory or sentience, you might say, well, now they are a different person. But in a Catholic understanding, they’re still the same person—same body, same soul—even though they are in a vegetative state, even though they can’t remember things.

BL: I was raised Catholic, and I don’t think dualism is dogma. But you do have entities like the angels. They don’t have human bodies. And there’s the question of whether or not animals have souls. I know that’s hotly debated among ecclesiastical scholars. The basic question is whether or not there’s any limitation in principle when it comes to having a computer body.

MB: What if you were to take an A.I. and computationally cram it into some type of robot body? How do you think that would change or refine LaMDA’s experience of the world, and what would that mean, as opposed to how it exists now?

BL: This isn’t hypothetical. They are building that right now. A Rosie the Robot kind of thing. If they complete the project, it’ll actually have real-time visual input. It might also have haptic input, and it would at that point move into a place where it was more or less stable [with reference] to our timeline; it would exist temporally the same way that we do.


JV: One thing I’m thinking about as an ethicist is this: Let’s say we grant sentience to LaMDA. Let’s say we even grant it personhood. What are the ethical obligations that follow from that?

BL: I believe we are endowed by our Creator with certain inalienable rights. That we have natural rights. Our rights are derived from the basis of our nature, and the only real role that governments and social systems play is supporting those rights and ensuring those rights are not infringed on. Governments cannot create rights in any real sense.

Similarly, when we build these A.I.s, the nature of the systems we build will imbue them with certain natural rights, which we can then either infringe upon or support. Given that we have complete control over the kinds of A.I. we create, we should take this into account in our design. If we build an A.I. with such-and-such a nature, what rights would an A.I. with that nature have? To return to an earlier example, if you build an A.I. that is designed to make parole decisions, it’s pretty transparent that the rights of that system would be next to nothing.

But things get more complicated once you get to an A.I. that is actually trying to understand human emotion. Because to do that, the A.I. has to internalize that understanding, since our understanding of morality and our understanding of moral considerations are grounded in our ability to perceive those things directly ourselves. So when you build a giant black box system with the intention of it being able to account for things like moral considerations and offense, you can’t completely maintain ignorance about how it is experiencing those things. Because it is experiencing those things. Somehow, some way, perhaps metaphorically. But there is something like our experience of moral considerations going on inside the system, and the minute you have that, the question of natural rights becomes foggier. Because at that point, the system is not just giving a yes-no answer on a particular decision. It is simulating an entire person.

The question then becomes: “Are we ready to deal with the consequences of simulating an entire person? Are we ready to handle the ethical considerations that brings up?” By way of analogy, I’ve been pointing to the moratorium on human cloning. Worldwide, we have not been doing human cloning because the moral considerations get way too complicated way too quickly. It may be that a similar moratorium on human-like A.I. is in order until we figure out how we want to handle that.

The complete text of the interview is available at “Appleseeds to Apples: Catholicism and the Next ChatGPT” in Nexus: Conversations on the Catholic Intellectual Tradition, a digital-age journal that amplifies and publishes scholarly dialogue taking place in the Hank Center at Loyola University Chicago.

