Editors’ note: This essay is adapted from the annual America Media Lecture at Fairfield University, delivered on March 3, 2026.
In his first week on the job in 2013, Pope Francis famously called for a church “which is poor and for the poor.” It was a phrase he repeated throughout his papacy but also one he lived out by example, showing the world that love for the poor was at the heart of his pontificate. He reminded us all in “The Joy of the Gospel” (“Evangelii Gaudium”), sometimes called “the blueprint” for his papacy, that “God’s heart has a special place for the poor.”
In his first remarks after his election last May 8, Pope Leo XIV likewise said it was the mission of the church to be close to the poor. In his first apostolic exhortation, “Dilexi Te,” he wrote that the “condition of the poor is a cry that, throughout human history, constantly challenges our lives, societies, political and economic systems, and, not least, the church. On the wounded faces of the poor, we see the suffering of the innocent and, therefore, the suffering of Christ himself.” Love for the poor is the heart of the Gospel, and at the heart of Catholic tradition and teaching.
Pope Leo’s focus on those who live in poverty is situated within his acknowledgement of our rapidly changing social and economic context. Just after he was elected, he said that the church should offer “her social teaching in response…to the developments in the field of artificial intelligence that pose new challenges for the defense of human dignity and labor.” Along with concern for those who are poor, the impact of artificial intelligence has become a central theme of his early pontificate.
We all know that the advancement of artificial intelligence poses many difficult questions. What does it mean for a machine to be “intelligent”? Will these technologies stay within human control? Will jobs disappear, and if so, which ones, and how fast? What laws and policies should govern the development of A.I.? Are our political, social and civil society institutions up to the task? And are we about to face an A.I. apocalypse? A utopia? Or something in between?
A.I. and the Marginalized
These are complicated questions that do not have simple answers. But some questions are, in the words of Pope Leo and Pope Francis, “simple.” One of those involves the requirement to put into practice “the clear and forceful words of the Gospel” regarding “sharing goods and caring for the poor” (“Dilexi Te,” Nos. 28 and 32).
As Pope Leo says in “Dilexi Te,” quoting Pope Francis, “the message of God’s word is ‘so clear and direct, so simple and eloquent, that no ecclesial interpretation has the right to relativize it. The Church’s reflection on these texts ought not to obscure or weaken their force, but urge us to accept their exhortations with courage and zeal. Why complicate something so simple?’” (“Dilexi Te,” No. 31). “I often wonder,” he says, “even though the teaching of Sacred Scripture is so clear about the poor, why many people continue to think that they can safely disregard the poor” (“Dilexi Te,” No. 23).
One thing we know is that technologies driven by artificial intelligence are already disregarding the poor. Civil rights attorney Gary Rhoades reports on the case of Mary Louis, a Black woman who paid her rent without fail for 16 years. When she applied for a new apartment, she was denied by an algorithm called SafeRent, which gave her a low score. The algorithm did not consider her perfect payment history or her housing voucher; it is unclear what it did consider. Ms. Louis is one of thousands who sued the company, SafeRent Solutions, alleging that its algorithm systematically discriminated against Black and Hispanic renters.
Such scenarios are not abstract thought exercises. Real people are being harmed right now. As Pope Leo recognizes, “this technology is already having a real impact on the lives of millions of people, every day and in every part of the world.” How can we in the church ensure that the voices of the most vulnerable remain at the center of discussions about artificial intelligence, and that the principles of Catholic social teaching continue to ground our choices?
Too often in our country, those who live in material poverty are left behind. The Fordham theologian Christine Firer Hinze reminds us in Radical Sufficiency: Work, Livelihood, and a U.S. Catholic Economic Ethic that in the United States, “powerful structural dynamics have smoothed the path to economic success” for some, while the lower-middle and working classes, the working poor and the poor too often face numerous and persistent “roadblocks to dignified livelihood.”
While A.I. doesn’t create these inequalities, it perpetuates and accelerates them. As Levi Checketts observes in Poor Technology: Artificial Intelligence and the Experience of Poverty, “AI research has neither taken seriously the perspective of the poor nor does it have any interest in doing so.”
Consider again the case of Mary Louis, the Massachusetts tenant with an excellent record who was nonetheless denied housing when her landlord used SafeRent Solutions’ A.I.-driven tenant screening program, which considered neither her tenancy record nor the value of the housing voucher she received. Her case is not an outlier. SafeRent and similar companies have faced numerous lawsuits challenging their use of products using algorithms that draw data from, among other sources, inaccurate and incomplete criminal records.
While the SafeRent case was ultimately settled in Ms. Louis’s favor, successful challenges are often difficult to achieve against companies using A.I. tools trained on sets of government and private data and employment criteria that “are inherently arbitrary and are not based on any kind of empirical evidence or studies,” according to Eric Dunn, the litigation director at the National Housing Law Project.
Housing isn’t the only setting where algorithmic bias occurs. In the Netherlands, the city of Rotterdam relied on an A.I. tool that used an algorithm to predict welfare fraud. The algorithm used poor Dutch language skills as one of its risk indicators, and in doing so conflated genuine paperwork errors with real fraud. The result: Migrants were discriminated against in the provision of benefits.
Algorithmic bias occurs in the criminal justice system as well. The Vatican’s A.I. research group recently cited a ProPublica investigation of the use of a “proprietary algorithm that provides a risk score for potential parolees during parole hearings. Not only was the algorithm remarkably unsuccessful in predicting violent crime, but…Black individuals were more than twice as likely to be given a false high-risk score.” The Vatican group’s conclusion? “The widespread and indiscriminate use of A.I. technologies to make important political and legal decisions punishes and creates undue burdens on the poor and marginalized and further exacerbates social inequality.”
The environmental costs of A.I. reveal a similar pattern of harm. A.I.’s massive water consumption, energy demands and waste products reveal how technological choices have environmental consequences we cannot ignore. U.S. data centers consumed approximately 228 billion gallons of water in 2023. Total data center energy use more than doubled between 2017 and 2023, with continued rapid growth projected. And A.I. is projected to generate an additional 1.2 million to 5 million metric tons of electronic waste by 2030.
These are not just abstract statistics; they touch real people’s lives. As with algorithmic bias, environmental burdens fall disproportionately on the poor and vulnerable. Communities with less political power find data centers built near them, straining their water supplies and energy grids. And the extraction of rare earth minerals needed for the development of A.I. hardware can devastate local environments. Once again, we conclude that the poor too often bear the costs.
Invisible Labor
Another challenge lies in the invisible labor that makes A.I. possible. In Co-Intelligence: Living and Working with A.I., Ethan Mollick of the Wharton School describes the process that makes A.I. systems safer, noting that it depends on “low-paid workers around the world recruited to read and rate A.I. replies, [who,] in doing so, are exposed to exactly the sort of content that A.I. companies don’t want the world to see.” The “human in the loop” is a real person with a face and a family and a story.
Right now, in countries such as Kenya, India, the Philippines and Venezuela, low-wage workers are sorting and labeling data for less than $2 an hour in order to train A.I. systems in what some of them call “AI sweatshops with computers instead of sewing machines.” These workers have no benefits, no job security and no voice in their workplaces. Some spend more than eight hours daily reviewing graphic content: murders, suicides, sexual abuse and extreme violence. As one employee lamented, “Just because we’re Black, or just because we’re just vulnerable for now, that doesn’t give them the right to just exploit us like this.”
While some of the labor that makes A.I. possible is invisible, in some ways A.I. is also making the dignity and value of other kinds of work more visible. Think of the work of caregivers, work that is and has always been essential to human flourishing, work that is central in a faith that calls each one of us to self-giving love.
The more our personal interactions are mediated through screens, the more we should come to appreciate the dignity inherent in the embodied care of others. No technology can truly substitute for the teacher who patiently teaches a young child to read, the neighbor who sits with a dying friend, the son who feeds an elderly parent, or the nurse who comforts a scared patient.
We all have times in our lives when someone has cared for us in one of these ways, or when we have cared for another. Would any A.I.-driven tool have served as well? The advent of A.I. in our everyday lives is revealing the essential nature of this work precisely because a computer system, no matter how advanced, cannot do these things.
But visibility does not equal value. The market doesn’t automatically compensate for what we recognize as essential. And care work—child care, nursing, teaching, social work, elder care and so much more—is predominantly done by women and perpetually undervalued by the market.
This care work also takes place in an economy increasingly driven by artificial intelligence embedded in what Pope Francis called a technocratic paradigm, one in which “human dignity and fraternity are often set aside in the name of efficiency, ‘as if reality, goodness, and truth automatically flow from technological and economic power as such’” (“Antiqua et Nova,” No. 54, citing “Laudato Si’,” No. 105).
The church is taking steps to assess A.I.’s impact on work and workers, and affirms that A.I. could have many positive benefits as well. In “Antiqua et Nova,” for instance, Vatican leaders acknowledge that A.I. “has the potential to enhance expertise and productivity, create new jobs, enable workers to focus on more innovative tasks, and open new horizons for creativity and innovation.”
But the document also recognizes risks: “While A.I. promises to boost productivity by taking over mundane tasks, it frequently forces workers to adapt to the speed and demands of machines rather than machines being designed to support those who work.” The stakes are clear: “If A.I. is used to replace human workers rather than complement them, there is ‘a substantial risk of disproportionate benefit for the few at the price of the impoverishment of many.’”
Reasons for Hope
Despite all this, there is reason for hope. None of this is inevitable. The harms I’ve described stem from choices about how A.I. is developed, deployed and governed. Choices rooted in principles like human dignity, solidarity, the common good and the option for the poor are possible.
For instance, while warning of risks, Vatican leaders have also been clear that these new technologies could offer opportunities for tremendous social improvement. As Bishop Paul Tighe has observed, A.I. has the ability to remind us of “humanity’s capacity to learn, to innovate, to develop, which is a God-given capacity.” Looking toward the future, some are working toward A.I. development “as a pro-human, pro-worker tool,” and some economists are seeking to harness its “transformative potential to act as a force-multiplier for human skills and expertise by expanding worker capabilities.” Education technologies offer another potential opportunity for hope if the right choices are made.
Health care provides another example. “Antiqua et Nova” addresses health care directly, discussing how A.I. can assist medical diagnosis and expand access to care, and calls on health care providers to “reject the creation of a society of exclusion, and act instead as neighbors.”
When it comes to the development of artificial intelligence, then, we have choices to make. But who makes those choices, and in service of what ends? As we face these choices, it’s important that we avoid both naïve optimism and fearful apocalypticism. And we don’t need to wade through the tsunami of conflicting data and studies about the effects of A.I. that seems to wash over us every day before we take steps to respond. We know that the poor and vulnerable are currently being harmed, and we’re not powerless.
Catholics have an important role to play in this societal response and can bring significant resources to this work, including moral clarity about what’s at stake and the moral resources to respond. “Antiqua et Nova” reminds us of the principle at the heart of those resources: “The order of things must be subordinate to the order of persons, and not the other way around.”
So what are some of those resources, and how can they help shape our responses to the “new things” that are arising with the development of artificial intelligence? Shortly after his election, Pope Leo specifically called on the church to offer “the treasury of her social teaching” in response to developments in the field of artificial intelligence. Rooted in a robust vision of human dignity and the common good, the tradition of Catholic social thought contributes an essential moral vocabulary and framework for discernment and action in our current challenging moment.
First, it offers an emphasis on the inherent human dignity of each person, created in the image and likeness of God, regardless of age, productivity or stage in life.
Second, that treasury includes an understanding of the common good that recognizes that progress must serve all, not just the powerful few. Advancing the common good means working toward a time when, as Pope Francis noted, “no one remains the victim of a system, however advanced and efficient, that fails to value the intrinsic dignity and contribution of each person.”
Third, the Catholic social tradition underscores the dignity of work. As “Antiqua et Nova” teaches, work is not merely a means to earn income, but “part of the meaning of life on this earth, a path to growth, human development and personal fulfillment”—and technology should support workers rather than forcing them to approximate the output or behavior of machines.
Fourth, this tradition emphasizes solidarity. As Firer Hinze reminds us, solidarity means “the recognition, acceptance, and responsible engagement of our interdependencies with neighbors near and far,” resisting “falsely atomistic or fatalistic understandings of personhood, work, and economy.”
Fifth, our tradition gives pride of place to the preferential option for the poor: We evaluate from the bottom up, centering those at the margins. Pope Leo affirms this in “Dilexi Te”: “The poor are at the heart of the Church because ‘our faith in Christ, who became poor, and was always close to the poor and the outcast, is the basis of our concern for the integral development of society’s most neglected members.’”
From Ideas to Reality
These principles are not just abstractions; they provide criteria for helping us determine what concrete actions to take in response to the challenges of our age.
Along with resources from the Gospel and Catholic social teaching, the church also brings distinctive practical resources to these challenges: institutional presence through parishes, schools and hospitals in communities around the world; academic strength through research and formation of leaders at church-sponsored institutions; a tradition of engagement in public life through political witness, popular movements and the work of organizers in countless communities; and a renewed commitment to listening to and accompanying the vulnerable, as witnessed in the Synod on Synodality’s final document and through the efforts of those working to implement it.
Catholic institutions are already putting these principles to work. To take one example, the DELTA Network, launched at the University of Notre Dame’s Institute for Ethics and the Common Good under the leadership of Meghan Sullivan, offers a faith-based framework for A.I. ethics built on Christian principles: dignity, embodiment, love, transcendence and agency. This network is developing practical formation resources for scholars, educators, faith leaders and young people to help respond to advances in artificial intelligence.
DELTA isn’t just for Christians; decision-makers at places like YouTube and Google are paying attention and looking for guidance from such networks. In fact, the church’s strong call to keep the human person at the center of these developing technologies has also garnered attention and collaboration from companies like I.B.M. and Microsoft, which have sought the church’s wisdom on the ethical implications of A.I.
And the church’s response isn’t only academic. Labor and community organizing are forms of applied Catholic social teaching, whether that means workers bargaining collectively for dignified wages or residents responding to a data center’s arrival in their town. Organizers can help people respond more effectively.
Vincent Alvarez, former president of the New York City Central Labor Council, has observed that working families are already facing these questions—in labor markets being reshaped by A.I., in rising energy costs driven by the energy requirements of data centers, and in policy battles where the influence of big tech companies is, in his words, “as overwhelming as it is troubling.” He says that this is where the Catholic Church is uniquely positioned, because Catholic conferences and policy organizations already exist and engage legislators on these issues, and are using Catholic principles to help shape their responses.
What Next?
While advances in artificial intelligence can offer remarkable benefits, the harms they bring are happening to real people, right now, and at scale. And the rapid pace of A.I. development means that the window for helping to shape these systems, insisting that they serve persons and not the other way around, is not permanently open.
But this is not cause for despair or fear. We can work together to bring the Catholic social tradition and the principles it rests on—like respect for human dignity and a commitment to the common good—to the choices we make in our families, communities, workplaces and in political life.
The question before us is not whether A.I. will transform our world. It is already doing so. The question is whether that transformation will serve everyone, including those who are poor. The choices embedded in these systems will either reflect our beliefs about the dignity of the human person and our obligations to the poor, or they will reproduce existing inequalities.
Pope Leo writes of a church “that sets no limits to love, that knows no enemies to fight but only men and women to love.” This is what the Gospel calls us to in our current moment, as in all moments: love of God and love for our neighbors, especially those who are poor.
This article appears in the May 2026 issue.
