I’ll show you techniques that you can adopt right now to keep ChatGPT from lying to you.
If you have used ChatGPT for any substantial amount of time, you’ll soon realize that it is an astonishingly convincing liar. It can make up information and state it in such an authoritative voice that it’s difficult to tell apart from the truth. There are many ways to partially prevent this, but they require using the OpenAI API to adjust settings or fine-tune with example data.
However, there are a few prompting techniques you can use right in the ChatGPT interface. They greatly reduce hallucination so you can use ChatGPT’s output more confidently.
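For context, here is what one of those API-side settings looks like. This is a hedged sketch, not the only approach: it assumes the official `openai` Python package (v1.x), and the model name and question are illustrative. Lowering `temperature` makes answers more deterministic, which tends to make the model less inventive.

```python
# Sketch: the ChatGPT web UI exposes no settings, but the OpenAI API lets you
# lower `temperature` for more deterministic, less inventive answers.
# Assumptions: openai v1.x package and a valid OPENAI_API_KEY; model name is
# illustrative.

def factual_request(question: str) -> dict:
    """Build chat-completion kwargs tuned for more factual answers."""
    return {
        "model": "gpt-4",   # illustrative model name
        "temperature": 0,   # 0 = most deterministic; the default (1) is more creative
        "messages": [{"role": "user", "content": question}],
    }

# With an OPENAI_API_KEY set:
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(**factual_request("Who wrote Dune?"))
# print(reply.choices[0].message.content)
```

The kwargs are built separately from the API call so the same settings can be reused across requests.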
Most people don’t understand what ChatGPT is
People think ChatGPT is an all-knowing AI chatbot with up-to-the-minute knowledge of the world.
The truth is (at the risk of gross oversimplification):
It’s just a program that predicts the most likely next word.
It was trained on data up to September 2021.
It was trained to answer your question instead of saying “I don’t know.” So when it doesn’t know, it can hallucinate, aka BS its way out.
Or they think ChatGPT is malicious and intentionally makes up fake information.
Some people found out the hard way. Last week, two US lawyers were fined for using ChatGPT in court documents without fact-checking: https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt
Fortunately, once you understand what ChatGPT is and is not, there are a few effective ways to combat hallucination that you can employ right now.
Give it space and time to think
Surprisingly, a technique for giving better job interviews can also improve ChatGPT’s reliability significantly.
You simply need to ask ChatGPT to:
think step by step and
explain its reasoning.
Prompt: I need you to do task <Task>. Let's think step by step and provide reasoning for each answer
Studies have found that applying this simple trick can quadruple accuracy!
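If you use this pattern often, it can be wrapped in a tiny helper. A minimal sketch: the template wording follows the prompt above, while the function and constant names are my own.

```python
# Minimal helper that wraps any task in the step-by-step prompt pattern.
# The template text mirrors the prompt above; the names are illustrative.

COT_TEMPLATE = (
    "I need you to do this task: {task}\n"
    "Let's think step by step and provide reasoning for each answer."
)

def cot_prompt(task: str) -> str:
    """Return the task wrapped in a chain-of-thought instruction."""
    return COT_TEMPLATE.format(task=task)

print(cot_prompt("Estimate the payback period for a $10,000 solar installation"))
```

Paste the resulting string into ChatGPT as-is, or pass it as the user message if you are using the API.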
Provide reference documents
Most people don’t realize they can provide reference documents to ChatGPT to serve as ground truth.
If you have access to GPT-4 with plugins, here is how to do it.
Use Bing web browsing
Select GPT-4
Select Browse with Bing
ChatGPT with GPT-4 and web browsing enabled
I want you to do the following
- Read the website at <URL>
- Determine if the information provided can be used to answer the question delimited by """
- If no, say "The website does not provide necessary information to answer the question"
- If yes, answer the question and provide your reasoning and quote from the page to support your answer.
"""
<Question>
"""
Use Plugin for PDF file
If you have a local or online PDF file, you can provide it to ChatGPT by using a PDF plugin. Here is how to enable one:
Select GPT-4 and Plugins
A new dropdown will show up below the model selector; expand it, scroll to the bottom, and select Plugin Store.
Search for PDF and install one of the plugins. I have tested AskYourPDF and had a good experience with it.
Then use this prompt
I want you to do the following
- Read a PDF at <URL>
- Determine if the information provided can be used to answer the question delimited by """
- If no, say "The PDF does not provide necessary information to answer the question"
- If yes, answer the question and provide your reasoning and quote from the page to support your answer.
"""
<Question>
"""
If the PDF is not accessible via a link, replace the first bullet point with:
Read the uploaded pdf
The plugin will give you a link to upload local documents.
Fact check pattern
The final prompt pattern that has served me well is the fact-check pattern, as detailed by White et al.
From now on, when you generate an answer, create
a set of facts that the answer depends on that should
be fact-checked and list this set of facts at the
end of your output. Only include facts related to
<industry/topic>.
<Task>
Additionally, if you have Browse with Bing enabled, ChatGPT can automatically fact-check by browsing the web for you and providing links to relevant sites.
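The fact-check pattern can also be templated so you only fill in the topic and task each time. A sketch, with the wording taken from the prompt above; the function name and the example topic/task are placeholders of my own.

```python
# Template for the fact-check pattern; the wording mirrors the prompt above.
# Names and the example inputs are illustrative.

FACT_CHECK_TEMPLATE = (
    "From now on, when you generate an answer, create a set of facts that "
    "the answer depends on that should be fact-checked and list this set of "
    "facts at the end of your output. Only include facts related to "
    "{topic}.\n\n{task}"
)

def fact_check_prompt(topic: str, task: str) -> str:
    """Prepend the fact-check instruction to a task, scoped to a topic."""
    return FACT_CHECK_TEMPLATE.format(topic=topic, task=task)

print(fact_check_prompt("personal finance", "Explain how a Roth IRA works."))
```

Scoping the fact list to a topic keeps the model from padding the output with trivially true statements.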
What’s the most outrageous ChatGPT hallucination you’ve come across, and how did you address it?
References
OpenAI. “Techniques to improve reliability.” GitHub, 5 April 2023, https://github.com/openai/openai-cookbook/blob/main/techniques_to_improve_reliability.md.
Creswell, Antonia, and Murray Shanahan. "Faithful reasoning using large language models." arXiv preprint arXiv:2208.14271 (2022).
White, Jules, et al. "A prompt pattern catalog to enhance prompt engineering with chatgpt." arXiv preprint arXiv:2302.11382 (2023).
About Trung Vu
Trung Vu, a former software engineer, founded Hoss in 2019 to enhance developer experiences, swiftly attracting Silicon Valley backers and a $1.6 million seed round. In 2021, his venture was acquired by Niantic Labs, of Pokemon Go fame, to bolster their Lightship platform.
Post-acquisition, Trung leads engineering teams at Niantic and invests in promising AI startups. An AI enthusiast even before ChatGPT's rise, he equates its potential to electricity. Through AI Growth Pad, his education platform, Trung teaches entrepreneurs to leverage AI for growth, embodying his commitment to ethical, transformative technology.