Technology and religion might seem miles apart. But just as in education, labor or defense, AI is already having a huge impact on religious life. And religion, perhaps, could have an impact on AI — addressing questions such as whether AI should be granted some level of personhood.

In an op-ed recently published in Christianity Today, Adam Graber, a digital theologian and host of the “Device & Virtue” podcast, wrestled with the significance of generative AI and what he called “the rise of ‘BibleGPTs’” — large language models meant specifically to guide users through religious texts, which he says could “radically shape” how believers engage with the Bible. Earlier revolutions, especially the printing press, triggered huge changes in religious practice that shook the social order, and Graber thinks chatbots could have a similarly profound impact on public life.

Yesterday afternoon he and I discussed all that and more — including why he thinks some of the secular discourse around AI development has taken on its own religious tinge. An edited and condensed version of our conversation follows:

What is “digital theology,” and how has the rise of generative AI changed it?

Digital theology raises two questions. One is: How do Christian theology and our understanding of it inform how we evaluate emerging digital technologies? And this could be broader than just Christian theology; theologians in other religions are doing the same work in their own traditions. Then it goes the other way and asks how digital technologies shape both our theology and the very questions we’re asking.

An example here would be generative AI systems. They’re developing these more human qualities; how does that shape how we define what a person is? Christians define a human as being made in the image of God.
What qualities make up that image of God in a person? And if AI systems have qualities that look like intelligence, or sentience, or consciousness, does that mean the image of God is no longer unique?

What is the best historical comparison for how this technology might change how people interact with religious texts?

Most people have compared the development of the internet to the printing press, in terms of building a whole layer of infrastructure to distribute all of this content. I would say that comparison holds water. Now with AI systems, you have software that essentially goes viral across the internet in a content explosion happening over the span of 30 years, rather than 150 years. There’s a semantic overload of meaning… we become overloaded with these different interpretations, and we don’t actually know how to make sense of what’s a legitimate interpretation or not. Even before AI systems we were struggling with that, and I think AI systems will amplify it even further.

The new large language models are programmed to produce, essentially, the average of human knowledge or communication about a topic. As you point out in your op-ed, that’s not at all how religious texts are written or taught. Does this technology pose a direct challenge to organized religion?

There’s a ton more work for theologians and pastors to do to really understand what these large language models are, what they do and how they work before they can make any sort of evaluation. Religious traditions have a long history: people working in the 20th and 21st centuries are going back to early thinkers from the 1200s, or the 500s, or the 1600s. And that tradition is working through the thousands of theological minds and millions of Christians. Over time, they’re pulling out the theological positions that make the most sense for their tradition. Large language models take all that language and make a mathematical model out of it.
Instead of developing a frame of the world, they’re developing a frame of language, which is a representation of the world that is separate from it. I am persuaded by the idea that any meaning we find in the output of a large language model is something we bring to what we’re reading, not something a generative AI system intends for us to understand.

How do you think a more sophisticated integration of LLMs into religious life will change it?

Two possibilities come to mind. One is that because these systems are built on probabilities, they’re going to amplify the loudest Christian traditions, the ones with the most content out there. That means these AI systems are going to be trained more on a Reformed or a Presbyterian perspective than on a Mennonite or a Quaker perspective, traditions that didn’t write as much down. The other thing, in a potentially more positive direction, is that because they drive people to the average, you could see a centering of perspectives rather than the polarization we now have.

Why do you think that otherwise secular members of the AI community have developed quasi-religious ways of thinking about AI sentience, or its “soul,” or the “apocalypse”?

With this technology our reach exceeds our grasp, and we’ve created things we don’t understand. Because we don’t understand them, we don’t have the language for describing them or talking about them. When we don’t have a language for what is essentially a mystery to us, we resort to religious language. In her book God, Human, Animal, Machine, Meghan O’Gieblyn draws a clear line with the word “transhumanism”: Dante’s Divine Comedy, when translated into English, uses the word “transhuman,” which is picked up by a Jesuit priest, then by Julian Huxley, who brings it into humanism, and then by Ray Kurzweil, who develops the idea of transhumanism from there.
There’s a clear lineage of theological and philosophical underpinnings that have shaped how we think about this technology. Even with our own sentience and consciousness, we don’t have an understanding or a clear definition that everyone agrees on. So it moves us into the realm of mystery, and into the realm of religion.