Is artificial intelligence good news for the poor? A hot-button topic of the moment, AI is both championed and decried in our current cultural conversations. But in his new book Poor Technology: Artificial Intelligence and the Experience of Poverty (Fortress Press, 2024), Levi Checketts grapples with AI’s implications for those experiencing poverty, a concern often overlooked in such discussions.

Living Lutheran spoke with Checketts, an ethicist and assistant professor of religion and philosophy at Hong Kong Baptist University, about his book and the role of the church in navigating the world’s increasing reliance on AI.

Living Lutheran: Could you tell readers about Poor Technology?
Checketts: Poor Technology is about human dignity—what it means to talk about artificial intelligence as “smart” and what that means for those defined as “unproductive” in our societies. These two questions are interrelated. Those who define what “smart” means typically define it in a way that reflects who they are and, more importantly, rejects what they are not.

In the last hundred years or so, Western countries have typically defined intelligence closely with wealth generation. This then leads to a more important question: what is the relationship between rich and poor in our world, not just economically but also culturally? Why, for example, are we so keen to have machines do work like customer service and manual labor? Whose interests are at stake in this? It isn’t those doing the work right now!

It’s easy to talk about the poor through the language of numbers—how much they earn or don’t earn, the numbers of different demographics that make up the poor, the increasing wealth gaps between rich and poor, et cetera. But focusing on the numbers is the sort of thinking that computers do well, and it’s the sort of thinking that tries to reduce the depth and breadth of human life to something computable.

How did the idea for the book originate?
I was invited to a conference at Union Theological Seminary in New York City on religious and ethical responses to AI. … I gave a presentation on how the discussion surrounding AI often transgresses the two great commandments: love God and love your neighbor. I pointed out that some Silicon Valley engineers were saying AI is “the new god” or variations on this—including someone who started a “Way of the Future” church to worship AI—which are entirely idolatrous and reduce the ineffability of God to a superpowerful computer.


The discussion surrounding AI often transgresses the two great commandments: love God and love your neighbor.


I connected this to the love of neighbor because we Christians believe humans are made in God’s image, and if the image of God we lift up is the image of AI, then our image of a dignified person is an image of an AI. I made a point that this image was thoroughly dehumanizing to anybody whose thinking didn’t resemble AI algorithms, and was especially dangerous for the neurodivergent. … The positive response … gave me the confidence to develop this more.

It’s easy to assume AI’s potential dangers will resemble a futuristic science-fiction story, rather than perpetuating or worsening existing contemporary social structures. Could you say more about that?
I grew up in poverty, which underwrites a great deal of this book. It was certainly not as extreme as some have it, but this, I think, is a big part of the problem. We have certain assumptions about what poverty should look like and assumptions about what it means to be poor. We also have assumptions about how to “fix” poverty without doing anything to address what causes it. And we often don’t question whether our view is correct.

Take education as an example. Education is directed toward getting people out of poverty, so we often tout how much [money] different levels of education make or what the income of a certain degree program is, and so forth—and so we also tend to correlate intelligence with income. We often then correlate education with certain forms of thinking and expect our graduates to demonstrate the hallmarks of the upper classes, including appreciation of certain forms of culture, broader global awareness, conformity to a middle-class work environment, proper diction, certain assumptions about logic and narrative order, and so on.

It’s hard to articulate the experience of the poor from this register, but it’s the one the upper classes use. And it’s the model for AI training. So AI is inherently programmed to communicate in an anti-poor manner. It is supposed to follow the strategies of those who have “right thinking,” according to economic logic.

What do you feel the church can glean from your book?
I’m a Roman Catholic. I don’t make strongly explicit ecclesial or even theological claims in the book, so I think it’s as accessible to different denominations as to others of goodwill. … But it is written from a theological lens, especially from the explicit view of what we in the Catholic Church call the “option for the poor.”

My book takes this as an underlying motif and tries to address a double question: Can AI help uplift the voices of the poor? And does AI help us recognize the dignity of the poor? The first question is answered, largely in the negative, throughout the book. But in my fifth chapter, I entertain the idea of what it would take to construct an AI genuinely built around the perspective of the poor, which would require a huge and deliberate effort at every step of the training process to incorporate poor perspectives. Although I remain skeptical about the possibility, I think the exercise shows what is entailed in really trying to take seriously the perspective of the poor in our decisions.

And the second question?
I am even more cynical on the second question, as I think a lot of the interest in thinking about AI as human helps to distract us from the very real human injustices occurring around us. It's easier in many ways to treat AI as a person than to do the hard work of recognizing the humanity of the poor around us. The reason is that AI is made to resemble us; it's conveniently designed around our fantasies. But human beings are much messier.


It is imperative that Christians speak clearly about those we already have moral obligations toward.


Overall, I hope the church approaches AI with skepticism, recognizing the limitations of AI and speaking out against anthropomorphisms or elevations of AI above or at the cost of the poor. It does promise many helpful advances if applied to the right areas and … with full understanding of its limits and biases. But so much of the conversation and expectations surrounding AI today can easily distract us from what truly matters and what our moral obligations really are.

I hope that ecclesial leaders, lay people, theologians and others engage in conversations surrounding AI with a clear sense of what our prior obligations are—the command to love neighbor still holds—and avoid the temptation to take at face value the most fabulous promises, or threats, made by AI prognosticators.

How do you hope readers engage with Poor Technology?
Nebulous concepts like “intelligence” have been redefined by different societies for different political ends for centuries. But we should note that in the biblical context, “intelligence” is never advocated as a precondition for dignity. Christians of good faith should be active in challenging ideas, economic policies, projects and other aims that promote AI above the dignity of the poor.

It is imperative that Christians speak clearly about those we already have moral obligations toward, which we don't often uphold. … Most of all, a simple inquiry of cui bono ("who benefits?") can resolve many questions. If we cannot say unequivocally that the poor benefit, we should pause. If we see some benefit at the cost of the poor, we should resist. And if the poor are further disadvantaged and harmed, we should take drastic steps toward change.

Finally we get to a moment of theological reflection—what is the image we see of ourselves? The Lutheran theologian Philip Hefner called AI a “technomirror” revealing our desires, and theologian Noreen Herzfeld says we make it “in our image.” We should reflect on what this means for us—we Christians believe we are made in the “image of God,” but what image have we been operating on?

God is ineffable. The Catholic philosopher Jean-Luc Marion says the image we have of God ought to point beyond itself; otherwise it becomes an idol, reflecting back what we want to see. We ought to be careful not to let AI become an idol, taking the place of God, who is beyond our understanding, especially when icons of the Almighty sleep on our sidewalks and wait in breadlines.

John Potter
John G. Potter is content editor of Living Lutheran. He lives in St. Paul, Minn.