The religious mystery of AI

From: POLITICO's Digital Future Daily - Tuesday, Jul 18, 2023, 08:03 pm
How the next wave of technology is upending the global economy and its power structures

By Derek Robertson

St. Patrick's Cathedral in New York is pictured. | AP Photo

Technology and religion might seem miles apart. But just as it is in education, labor or defense, AI is already having a huge impact on religious life. And religion, perhaps, could have an impact on AI — addressing questions such as whether AI should be granted some level of personhood.

In an op-ed recently published in Christianity Today, Adam Graber, a digital theologian and host of the “Device & Virtue” podcast, wrestled with the importance of generative AI and what he called “the rise of ‘BibleGPTs’” — large language models meant specifically to guide users through religious texts, which he says could “radically shape” how believers engage with the Bible.

Earlier revolutions, especially the printing press, triggered huge changes in religious practice that shook the social order, and Graber thinks chatbots could have a similarly profound impact on public life. Yesterday afternoon he and I discussed all that and more — including why he thinks that some of the secular discourse around AI development has taken on its own religious tinge. An edited and condensed version of our conversation follows:

What is “digital theology,” and how has the rise of generative AI changed it?

Digital theology raises two questions. One is, how do Christian theology and our understanding of Christian theology inform how we evaluate emerging digital technologies? And this could be broader than just Christian theology; there are other theologians doing this in their own respective religions. Then it goes the other way, and raises the question of how digital technologies shape both our theology and the very questions that we’re asking.

An example here would be with regard to generative AI systems. They're developing these more human qualities; how does that shape how we define what a person is? Christians define a human as being made in the image of God. What qualities make up that image of God in the person, and if AI systems have these qualities that look like intelligence, or sentience, or consciousness, does that mean that the image of God is no longer unique?

What is the best historical comparison for how this technology might change how people interact with religious texts?

Most people have compared the development of the internet to the printing press, in terms of developing a whole layer of infrastructure in order to distribute all of this content. I would say that comparison holds water. Now with AI systems, you have software that essentially goes viral across the internet in a content explosion that's happening over the span of 30 years, rather than 150 years. There’s a semantic overload of meaning… we become overloaded with these different interpretations, and we don't actually know how to make sense of what’s a legitimate interpretation or not. Even before AI systems we were struggling with that, and I think AI systems will amplify that even further.

The new large language models are programmed to produce, essentially, the average of human knowledge or communication about a topic. As you point out in your op-ed, that’s not at all how religious texts are written or taught. Does this technology pose a direct challenge to organized religion?

There’s a ton more work to do for theologians and pastors to really understand what these large language models are, what they do, and how they work before they can do any sort of evaluation.

Religious traditions have a long history, with people working in the 20th and 21st centuries going back to early thinkers from the 1200s, or the 500s, or the 1600s. And that tradition is working through the thousands of theological minds and millions of Christians. Over time, they’re pulling out the theological positions that make the most sense for their tradition.

Large language models take all that language and make a mathematical model out of it. Instead of developing a frame of the world, they’re developing a frame of language, which is a representation of the world that is separate from it. I am persuaded by the idea that any meaning that we find in the output from a large language model is something that we are bringing to what we're reading, not something that a generative AI system intends for us to understand.

How do you think a more sophisticated integration of LLMs into religious life will change it?

Two possibilities come to mind.

One is that because these systems are built on probabilities, they're going to amplify the loudest Christian traditions, the traditions that have the most content out there. That means that these AI systems are going to be trained more on a Reformed or a Presbyterian perspective than they are on a Mennonite or a Quaker perspective, whose adherents didn’t write as much about their tradition.

The other thing, in a potentially more positive direction, is that because they drive people to the average, you could see a centering of perspectives rather than the polarization we now have.

Why do you think that otherwise secular members of the AI community have developed quasi-religious ways of thinking about AI sentience, or its “soul,” or the “apocalypse”?

With this technology our reach exceeds our grasp, and we've created things that we don't understand. Because we don't understand them, we don't have the language for describing them or talking about them. When we don't have a language for what is essentially a mystery to us, we resort to religious language.

In her book God, Human, Animal, Machine, Meghan O’Gieblyn draws a clear line with the word “transhumanism” — Dante's Divine Comedy, when it's translated into English, uses the word “transhuman,” which is then picked up by a Jesuit priest, and then Julian Huxley picks it up and brings it into humanism, and then Ray Kurzweil picks it up from there and develops the idea of transhumanism. There’s a clear lineage of theological and philosophical underpinnings that have shaped how we think about this technology.

Even in thinking about our own sentience and consciousness, we don't have an understanding or clear definition that all people agree on. So it moves us into the realm of mystery, and into the realm of religion.

 

a metaverse warning

University of Cambridge researchers are warning the U.K. that it needs to amend a new Online Safety Bill to address harms that might arise in the metaverse.

In a paper published today titled “Securing the Metaverse: Addressing Harms In Extended Reality,” author Shannon Pierson writes that those harms, “If left unaddressed… will become entrenched into metaverse infrastructure and business models in ways that will be difficult, if not impossible, to untangle.”

So what are they? Pierson breaks them down into a few key categories: governance of the platforms themselves, biometric data collection and cybersecurity. With regard to governance, she argues the freewheeling and personalized nature of virtual spaces, combined with the rise of generative AI, demands an expanded definition of “content” in safety regulation to include VR content.

When it comes to biometric data, she recommends a thorough review of currently existing privacy regulation to ensure it covers the highly sensitive, personalized data that VR devices collect; on cybersecurity, she notes that criminals have already targeted both the devices themselves and the nascent NFT market that exists in the metaverse, and recommends that companies and regulators “commit to embedding security and privacy by design into metaverse products and services.”

chatgp-teacher's pet

Okay, ChatGPT is pretty smart. How does it match up to, say, the average Harvard freshman?

Maya Bodnick, a Harvard undergrad and intern for Matthew Yglesias’ Substack newsletter, put it to the test by asking eight professors and teaching assistants to grade essays the bot composed in response to prompts in their classes on everything from Latin American politics to the literature of Marcel Proust.

It did pretty well! It scored a 3.34 GPA, to be exact, recording four As, two Bs, and one C. “Nobody can predict the future, but if AI continues to improve at even a fraction of this breakneck pace, I wouldn’t be surprised if soon enough ChatGPT could ace every social science and humanities class in college,” Bodnick writes.

Bodnick writes that attempting to detect and prevent ChatGPT-assisted cheating won’t be easy — note the professor who recently, and mistakenly, flunked his class for cheating with ChatGPT when almost nobody had actually used it — and that the bots invite an even bigger rethinking of how we educate students, and why. “Even if colleges can successfully prevent students from using ChatGPT to write their essays,” she writes, “that won’t prevent the AI from taking their jobs after graduation.”

Tweet of the Day

The real use of AI this campaign cycle will be to (attempt to, does anyone care here?) get mainstream media attention re the devious use of AI

the future in 5 links

Stay in touch with the whole team: Ben Schreckinger (bschreckinger@politico.com); Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); and Steve Heuser (sheuser@politico.com). Follow us @DigitalFuture on Twitter.

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 


