visarga t1_jc3wlib wrote
Reply to comment by abriec in [D] Are modern generative AI models on a path to significantly improved truthfulness? by buggaby
I'll give you a simple solution: run GPT-3 and LLaMA in parallel; if they concur, you can be sure they haven't hallucinated the response. Two completely different LLMs wouldn't hallucinate in the same way.
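A minimal sketch of that cross-check idea (the `query_gpt3` / `query_llama` callables are hypothetical stand-ins for whatever client code calls each model, and agreement here is judged by a crude normalized string match):

```python
import re

def normalize(text: str) -> str:
    """Lowercase and strip punctuation/extra whitespace for a crude comparison."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def cross_check(question: str, query_gpt3, query_llama) -> dict:
    """Query both models and report whether their answers agree after normalization."""
    answer_a = query_gpt3(question)
    answer_b = query_llama(question)
    return {
        "gpt3": answer_a,
        "llama": answer_b,
        "concur": normalize(answer_a) == normalize(answer_b),
    }
```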
LessPoliticalAccount t1_jc4umir wrote
- Sure they could
- I imagine you'd have lots of situations where the probability of concurring, even with truthful responses, would be close to zero, so this wouldn't be a useful metric. Questions like "name some exotic birds that are edible, but not commonly eaten" could have thousands of valid answers, and so we wouldn't expect truthful responses to concur. Even for simpler questions, concurrence likely won't be verbatim, so how do you calculate whether or not responses have concurred? You'd presumably need to train another model for that, and that model will have some nonzero error rate, etc., etc.
visarga t1_jc5teq6 wrote
Then we'd only need to use a second model for strict fact checking, not creative responses. Since entailment is a common NLP task, I'm sure any LLM can solve it out of the box, with its own error rate of course.
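As one illustration of that entailment check, here's a sketch using an off-the-shelf NLI model (`roberta-large-mnli` from Hugging Face) rather than a full LLM; the answer strings below are placeholders:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# roberta-large-mnli is one readily available NLI model; any entailment-capable
# model (including a prompted LLM) could play the same role.
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entailment_label(premise: str, hypothesis: str) -> str:
    """Return CONTRADICTION, NEUTRAL, or ENTAILMENT for the (premise, hypothesis) pair."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return model.config.id2label[logits.argmax(-1).item()]

# Placeholder answers from the two models being compared.
answer_a = "The Eiffel Tower is in Paris."
answer_b = "The Eiffel Tower is located in Paris, France."
print(entailment_label(answer_a, answer_b))  # expected: ENTAILMENT
```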