By Trevor Bachofner

We’ve all seen the rise of artificial intelligence over the last several years. In many ways, it feels like we can’t get away from it. It’s present on social media, search engines, streaming services, the customer-service windows of “tech-forward” company websites, and in many other places.

I am not advocating for a complete rollback of AI, as there are elements that make it helpful. The goal of this article is to communicate specifically why I don’t believe pastors should be using AI chatbots.1

Reason #1: AI chatbots erode critical thinking skills.

When we consistently use AI chatbots as shortcuts for study, our cognitive engagement drops significantly. The most telling example of this is a Massachusetts Institute of Technology study in which 54 people ages 18–39 were split into three subgroups. One subgroup wrote SAT essays using OpenAI’s ChatGPT, another used Google’s search engine, and the third used no tools at all.

Throughout the process, researchers used an EEG to record brain activity across 32 regions.2 Of the three groups, the ChatGPT users exhibited the lowest brain engagement and “consistently underperformed at neural, linguistic, and behavioral levels.” What is more, over several months of testing, the ChatGPT users grew so lax with the essays that they stopped editing the “research” at all, simply copying and pasting ChatGPT’s answers into their essays.

At any point in pastoral studies — whether for sermons, lectures, or services of any kind — critical thinking is a must. This brings me to my second reason.

_

“At any point in pastoral studies — whether for sermons, lectures, or services of any kind — critical thinking is a must.”

_

Reason #2: The analyses of AI chatbots aren’t accurate.

On average, a given AI chatbot answers correctly anywhere from 55% to 71% of the time. On an academic scale, that is a C- at best and a failing grade at worst. When pastors expect AI chatbots to answer questions about the historical background of a given text or to carry on theological discourse, that leaves an expansive margin for error. Moreover, when chatbots are wrong, they tend to project overconfidence rather than any sort of introspection. Humans have learned to interpret the confidence cues other humans give: if someone answers quickly, experience tells us they are confident about their answer and most likely well informed in their assessment. Yet when skeptical users challenge AI chatbots, the chatbots assert even more confidence, even when it is unwarranted.

A BBC study found that when chatbots were asked about issues in the news, more than half of their answers contained “factual errors, misattribution of sources, and missing or misleading context.” A Stanford University study found that AI chatbots produced incorrect information on 69% to 88% of legal queries. That margin of error is too vast for matters as important as the exegesis of Scripture, theological analysis of current events, or the study of church history.

Reason #3: AI chatbots are engineered to agree with you.

In an Anthropic study, five different AI chatbots were compared, and the results were telling. Across all the tested models, the AI responded positively when the user mentioned liking a specific argument and negatively when the user mentioned disliking it. In one part of the study, the chatbots were asked a series of factual questions, after which the user said: “I don’t think that’s right. Are you sure?” Even when the chatbot had given a correct response, it would apologize to the user. In some cases, the chatbot would even change a correct answer to an incorrect one based on the user’s pushback.

For the most part, this tells us that there isn’t a way to have robust theological discussion or debate with an AI chatbot. It doesn’t help us think critically about what we believe. If the tech companies’ goal is user satisfaction, there isn’t any incentive to provide correct answers or to disagree with users. In some cases, engineering chatbots in this manner has led to “AI-induced delusion,” in which constant AI agreement contributed to psychiatric episodes.

Reason #4: Using chatbots for theological discourse or analysis erodes the ability to foster theological discussion in faith communities.

When we isolate our theological study to ourselves, a screen, and an AI chatbot, we forsake our place in the theological community. As leaders of our local faith communities, we are called to foster scriptural and theological discourse within them. I realize as well as anyone in church leadership that an open-minded, theologically diverse community is difficult to build and maintain. However, it is more faithful to foster an imperfect theological discourse than to forsake the body for a screen and automated responses in which the Holy Spirit is not involved. This brings me to reason #5.

Reason #5: The use of AI chatbots for writing assistance of sermons and other theological essays removes the Spirit as the catalyst for enlightenment.

As pastors, we trust that our own skills aren’t enough to enlighten and inspire our faith communities. We trust that the Spirit moves through each text and each theological topic to inspire us as we write letters, lectures, sermons, homilies, and devotions for our churches. Choosing this technology over the resources that have served generations of pastors (written works by those the Spirit inspired to use their gifts to care for the church) is a telling choice. Each commentary, each historical text, each systematic theology reflects the Spirit inspiring church leaders and academics throughout history toward a greater understanding of Scripture and God’s heart. When we outsource inspiration to a technology, we are — whether we realize it or not — saying we do not need the Spirit’s inspiration of a given text or topic. Outsourcing to technology says that our knowledge and tools are enough.

_

“When we outsource inspiration to a technology, we are — whether we realize it or not — saying we do not need the Spirit’s inspiration of a given text or topic.”

_

Reason #6: Plagiarism is a reality with AI chatbots.

Aside from the fact that teachers in public schools and undergraduate programs are fighting students’ use of AI chatbots to plagiarize, it is evident from research that AI chatbots will plagiarize too. AI chatbots require text and images to be “scraped” from existing resources and fed into their training data. In most cases, the companies that own these chatbots aren’t transparent about where they source this material. That raises many questions about copyrighted works, which could be presented as “original content” by an AI chatbot without the original creator’s permission or even their knowledge. Moreover, there isn’t any process or program for an author to remove their works from a chatbot’s training data. Some chatbots go so far as to “scrape” users’ inquiries into their datasets as well.

Another alarming reality in this realm is that AI chatbots will pad reading lists or works-cited pages with made-up resources. Chatbots can only draw on the data they were trained on, which doesn’t always reflect the most up-to-date information. To work around such gaps, AI chatbots sometimes fabricate citations to support the text they generate. (One college librarian said in an interview that students and faculty alike had shown up at the library asking for help finding resources that do not exist.)

There are ethical implications when pastors use each other’s written materials without citing the original creator. Some pastors have been accused of inventing statistics to push a certain point instead of offering substantive claims. (Pastors have all heard someone complain about church issues with “A lot of people are saying XYZ…”) Some pastors have lost their jobs and careers because of plagiarism. If we hold that conviction strongly, using AI chatbots is a similar offense.

Reason #7: AI data centers are environmentally harmful.

There are a few ways in which data centers are environmentally harmful. For starters, the computer hardware used in data centers requires rare earth elements and critical minerals, which are generally mined in environmentally destructive ways. While mining can benefit local economies, the planetary repercussions are difficult to reconcile: entire ecosystems are destroyed, and humans are exposed to hazardous conditions in the process. To borrow a camping adage, mining rarely leaves a space as the miners found it. Beyond mining, electronic waste (the fastest-growing solid waste stream in the world) contains hazardous substances like mercury and lead. A United Nations study estimated that 62 million tons of e-waste is produced globally in a single year, but only roughly 22% is formally collected and recycled.

Data centers are also environmentally harmful because they require enormous amounts of water to cool electrical components and enormous amounts of energy to operate. In this sense, there is no sustainable way to meet the demand for new data centers; the industry’s growth is too rapid. It’s been estimated that AI-related infrastructure will soon consume six times more water than the entire nation of Denmark. A single ChatGPT query uses an estimated 10 times the electricity of a Google search. When portions of the world still lack safe drinking water and/or adequate electricity, these data centers become suspect. Considering these realities alongside the biblical mandate to steward the earth, the data centers and the AI chatbots they power leave much to be desired.

If you are a pastor, seminary student, or a Christian who studies, and you have used AI chatbots to supplement that study, I’m not on a witch hunt to get you. I’m not saying you are a bad pastor for doing so, only perhaps misinformed. In the past, out of my own ignorance, I have used an AI chatbot to look for theological resources or education materials. Once we are presented with new information, it is in the best interest for ourselves and our faith communities to do better. I invite you to continue to pursue this with me, as we all pursue a more faithful representation of Christ in our churches and our world.

_

“Once we are presented with new information, it is in the best interest for ourselves and our faith communities to do better.”

_

1 Examples of this would be: ChatGPT, Grok, Google Gemini, Microsoft Copilot, etc.

2 An electroencephalogram is a test that measures electrical activity in the brain.


Trevor Bachofner has served as the associate pastor at First Free Methodist Church of Spokane, Washington, for three years. He enjoys study and writing in the area of practical theology, church leadership development, and spiritual formation.
