Is Generative AI Too Good to Be True? Tips for Ministry Leaders

Generative artificial intelligence (AI) tools like ChatGPT are becoming increasingly popular for content creation. If you haven’t heard the buzz about them yet, you probably will soon. But are these seemingly helpful tools too good to be true? If you and your staff are considering using generative AI in your ministry, it’s important to know the risks and drawbacks ahead of time. As you evaluate the costs and benefits of artificial intelligence, you may also consider developing guidelines for its use within your ministry.

What Is Generative AI?

Generative AI is a form of technology that produces new content based on a prompt from a user. Sounds a little like doing a web search, but it isn’t the same at all. When you search, you enter keywords or ask a question and the search engine suggests a variety of websites and sources for you to click into.

Generative AI is different. Generative AI tools typically provide responses based on information already built into their system. Rather than generating thousands of results like a search engine might, a generative AI tool combines information it learned from its training data and produces a single answer in the form of a chat response.

Since these chatbots tend to pull from various snippets of information in their training data, you won’t know the exact source of the information they provide—or how many sources went into a given response.

In November of 2022, an artificial intelligence company in California launched one of the first widely available generative AI chatbots: ChatGPT. Within two months, ChatGPT reached 100 million active users.1 Since then, many other generative AI tools, such as Bard and Pi, have launched. As generative AI becomes commonplace, it’s important to have a full picture of how it works, what its drawbacks are, and how its use could impact your ministry.

Using Tools Like ChatGPT: What You Should Know

As a ministry leader, you may want to enlist tools like ChatGPT to help curate resources, brainstorm, or plan for services. However, as with any technology, these tools have several drawbacks you should keep in mind.

Using an AI tool for content creation comes with certain risks and ethical considerations. ChatGPT has drawn criticism for its limited knowledge base, particularly regarding recent information or current events, which can result in issues with accuracy and bias. Improperly using or relying on the tool could also create liability risks, especially regarding copyrighted language or proprietary information. If you experiment with ChatGPT, it is important to be aware of its risks and weaknesses. We’ve listed a few of them below to help you as you get started.

Credibility & Accuracy Issues
Generative AI is only as smart as the data it was trained on—and it doesn’t currently have the human ability to recognize or admit uncertainty. OpenAI, the company that created ChatGPT, lists inaccuracy as one of the tool’s most significant limitations. If ChatGPT receives a prompt it can’t easily answer, it may write a “plausible-sounding but incorrect or nonsensical” response.2

For example, if you ask ChatGPT for a series of quotes from a notable figure, like C.S. Lewis, the generated response may include five quotations—but upon further research, you could discover that one or more of the quotes cannot actually be attributed to C.S. Lewis. The tool has no way of communicating its lack of certainty or margin for error, so accepting its responses at face value could result in the spread of false information. That’s why it is important to double-check all information that comes from generative AI tools before using it publicly.

Potential Bias
Tools like ChatGPT are designed to generate responses based on your specific prompt—which means the way you phrase a question could lead to a biased or misguided response. Remember, it's unlikely that ChatGPT fully grasps the context of your prompt. It may fail to consider critical red flags or helpful alternatives when generating a response.

If you use a generative AI tool to help produce content, avoid relying too heavily on its answers. Instead, ask it for alternative ideas or even ask if it has concerns with your initial inquiry. This will allow it to reveal unintended biases or perspectives you may have overlooked when asking the question. You can also try asking a question several different ways to see how responses change.

Copyright Infringement
Because current generative AI tools are limited in their ability to cite sources, the content they produce could result in authorship or ownership disputes. It seems likely that some of the training data ChatGPT uses when generating responses includes copyrighted texts, which also raises concerns about how those responses can be used.3

Image generators are another cause for concern regarding copyright infringement and ownership, especially those that rely on training data that includes copyrighted images used without permission. As a result, it’s possible for these tools to replicate distinct styles and imagery without proper licensing. In contrast, chat-based tools like ChatGPT are more collaborative because users can modify responses to make them unique—which may decrease the likelihood of copyright infringement. These issues are largely unsettled, so it's important to keep an eye out for further guidance, whether from pending litigation or from government entities like the U.S. Copyright Office.

If you choose to use generative AI tools to help with content development, consider taking steps to reduce copyright-related risks. This may include using a plagiarism checker with any content produced by an AI tool or editing an AI tool’s response to better fit your unique perspective, context, and circumstances.

Data Security & Privacy
Chatbots like ChatGPT gather personal data in two ways:

  1. Collecting user information such as IP address and browser information

  2. Logging user conversations to use as training data

Entering prompts into a generative AI tool may seem harmless, but it’s important to remember that tools like ChatGPT retain any information you share on their servers. If you accidentally include names, email addresses, or phone numbers when using ChatGPT, that data can be stored. It’s important to sanitize content by removing names and other identifying information.
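The sanitization step described above can be automated in part. The short sketch below is purely illustrative—the `sanitize` function and its patterns are assumptions, not part of ChatGPT or any official tool—and shows one simple way to strip email addresses and phone numbers from text before it is pasted into a chatbot.

```python
import re

# Illustrative patterns for two common identifiers. Real-world
# redaction usually needs more robust tooling, and personal names
# can't be reliably caught by patterns--those still need human review.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}")

def sanitize(prompt: str) -> str:
    """Replace email addresses and phone numbers with placeholders
    before the text is sent to a generative AI tool."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

print(sanitize("Contact Jane at jane@example.org or 555-123-4567."))
# prints: Contact Jane at [EMAIL] or [PHONE].
```

Even with a script like this, a quick manual read-through of each prompt remains the safest habit, since automated patterns will miss names and other context-specific identifiers.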

When you use generative AI, the information in your prompts may be stored and used to train the system. ChatGPT allows you to turn off this function in your settings.4 This prevents the tool from using your information to train future responses and deletes your chat history after 30 days.

Even with safeguards in place, nothing is foolproof. To help ensure your ministry’s protection, avoid sharing personally identifiable, proprietary, or sensitive information with a generative AI tool. Don’t give it any information you wouldn’t share with the general public. Your input data could be shared with third parties without your consent—or worse, it could be subject to a security breach.

Wise Stewardship of AI Tools

When used with caution, generative AI models like ChatGPT can be a great resource for your ministry. They can provide creative suggestions for activities, devotionals, and other tools—but it is important to use them as collaborative tools, rather than accepting their output at face value. Engage with ChatGPT’s responses by asking for further clarity, specificity, or refinement. Be aware of ChatGPT’s most common pitfalls and avoid falling into them. This can minimize the risk of misinformation, copyright issues, and privacy breaches.

As ChatGPT and other artificial intelligence tools grow in popularity, it’s also important to protect against potential legal and liability issues. Most general liability insurance policies will cover intellectual property violations and can help protect you from the financial consequences of copyright infringement.

However, cyber liability is not covered in most general liability policies. In the event of a data breach, cyber liability coverage can help with costs associated with notifying people of a breach, emotional harm caused by the breach, and ongoing credit monitoring for compromised information.

Brotherhood Mutual allows you to package cyber liability as part of your commercial policy, making the addition of coverage seamless and simple for your ministry. If you, your staff, or your volunteers are engaging with ChatGPT, it’s important to have coverage that helps protect against a data breach that could cause significant financial losses.

The coverage description above does not provide coverage of any kind, nor does it modify the terms of any policy. For complete insurance policy details, please refer to the actual policy documents. Some coverages may not be available in all states.

1. ChatGPT Reaches 100 Million Users Two Months After Launch. Accessed June 23, 2023.
2. Introducing ChatGPT. Accessed June 23, 2023.
3. Copyright Chaos: Legal Implications of Generative AI. Accessed June 23, 2023.
4. Data Controls FAQ. Accessed June 23, 2023.

Posted June 23, 2023
The information provided in this article is intended to be helpful, but it does not constitute legal advice and is not a substitute for the advice from a licensed attorney in your area. We encourage you to regularly consult with a local attorney as part of your risk management program.