enn_nafnlaus t1_jdv8gdn wrote
Reply to comment by yaosio in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
If you want to make life hard on an LLM, give it a spelling task ;)
The public seems to think these tasks should be easy for them - after all they're "language models", right?
People forget that these models don't see letters; they see tokens, and there can be a variable number of tokens per word. Tokens can even include the spaces between words. The model has to learn which characters (and in what order) make up every single token, and how tokens combine, to handle spelling tasks. And it's not like humans tend to write that information out much, since we just look at the letters.
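If you want to see it for yourself, here's a rough sketch using OpenAI's tiktoken package (assuming you have it installed; the exact splits depend on which tokenizer you load, so treat the output as illustrative):

    # Minimal sketch: the model receives token IDs, not letters,
    # so "spell this word" is not as trivial as it looks.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several OpenAI models

    for text in ["strawberry", " strawberry", "misspelling"]:
        token_ids = enc.encode(text)
        pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
        print(f"{text!r:15} -> {token_ids} -> {pieces}")

    # Words typically come out as one or more multi-character chunks,
    # and a leading space usually changes the split entirely. The letters
    # are buried inside those chunks; the model never sees them spelled out.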
It's sort of like giving a vocal task to a deaf person or a visual task to a blind person.