Submitted by buggaby t3_11qgasm in MachineLearning
folk_glaciologist t1_jc68a4q wrote
You can use searches to augment the responses. You can write a python script to do this yourself via the API, making use of the fact that you can write prompts that ask ChatGPT questions about prompts. For example this is a question that will cause ChatGPT to hallucinate:
> Who are some famous people from Palmerston North?
But you can prepend some text to the prompt like this:
> I want you to give me a topic I could search Wikipedia for to answer the question below. Just output the name of the topic by itself. If the text that follows is not a request for information or is asking to generate something, it is very important to output "not applicable". The question is: <your original prompt>
If it outputs "not applicable", or searching Wikipedia for the returned topic turns up nothing, just re-run the original prompt as-is. Otherwise, download the Wikipedia article (or its first few paragraphs), prepend it to the original prompt, and ask again.
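A rough sketch of that loop in Python, assuming the openai package (pre-1.0 `ChatCompletion` interface) and Wikipedia's public REST summary endpoint; the model choice and the wording of the wrapper prompt are illustrative, not prescriptive:

```python
import urllib.parse
from typing import Optional

import openai
import requests

openai.api_key = "sk-..."  # your API key here

TOPIC_PROMPT = (
    "I want you to give me a topic I could search Wikipedia for to answer "
    "the question below. Just output the name of the topic by itself. "
    "If the text that follows is not a request for information or is asking "
    'to generate something, it is very important to output "not applicable". '
    "The question is: "
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt to the chat model and return its reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()

def wikipedia_extract(topic: str) -> Optional[str]:
    """Fetch the lead paragraphs of a Wikipedia article, or None if not found."""
    url = ("https://en.wikipedia.org/api/rest_v1/page/summary/"
           + urllib.parse.quote(topic))
    resp = requests.get(url)
    if resp.status_code != 200:
        return None
    return resp.json().get("extract")

def answer(question: str) -> str:
    topic = ask(TOPIC_PROMPT + question)
    if topic.lower().strip('"') == "not applicable":
        return ask(question)  # not a lookup-style question: run it raw
    context = wikipedia_extract(topic)
    if not context:
        return ask(question)  # lookup failed: fall back to the raw prompt
    # Prepend the retrieved text to the original question and ask again.
    return ask(f"Using this background:\n{context}\n\nAnswer the question: {question}")

print(answer("Who are some famous people from Palmerston North?"))
```

The fallback to the raw prompt matters: you only want the Wikipedia detour when the model actually produces a searchable topic, otherwise you'd degrade answers to prompts that were never lookups in the first place.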
In general, I think using LLMs as giant databases is the wrong approach: even if we can stop them hallucinating, they will always be out of date because of the time lag between training runs. Instead, we should use their NLP capabilities to turn user questions into "machine-readable" (whatever that means nowadays) queries that run behind the scenes, with the results fed back into the LLM. Basically what Bing chat does with web searches.