Submitted by fangfried t3_11alcys in singularity
throwaway_890i t1_j9txt4j wrote
Reply to comment by sideways in What are the big flaws with LLMs right now? by fangfried
When it doesn't know the answer, it makes shit up that sounds very convincing.
I've found that when it's talking shit and you ask "What is wrong with your answer?", it will point out a problem with its own answer. And when it actually knows the right answer, it can tell you exactly what was wrong with the previous one. I wonder whether this self-critique could be used to reduce the amount of shit it talks.
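Rough sketch of the loop I mean, if anyone wants to try it. This assumes the OpenAI Python client; the helper name and the two-pass flow are just my illustration, not an official technique:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# Hypothetical helper -- name and flow are mine, just illustrating the idea.
def ask_with_self_critique(question: str, model: str = "gpt-4") -> dict:
    # First pass: get the model's answer.
    messages = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model=model, messages=messages)
    answer = first.choices[0].message.content

    # Second pass: feed the answer back and ask the question from this comment.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "What is wrong with your answer?"},
    ]
    second = client.chat.completions.create(model=model, messages=messages)
    critique = second.choices[0].message.content

    # If the critique flags a concrete problem, treat the answer as suspect.
    return {"answer": answer, "critique": critique}

print(ask_with_self_critique("Who won the 1904 Tour de France?"))
```

You'd still need some way to judge whether the critique is pointing at a real flaw or just hedging, but even surfacing the critique next to the answer would help.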