enn_nafnlaus t1_jdv2n2j wrote
Reply to [D] Can we train a decompiler? by vintergroena
Clever. Should be very possible.
enn_nafnlaus t1_jb7sxxi wrote
Reply to What is the future of AI in medicine? [D] by adityyya13
I can say this: my mother struggled for many, many years trying to figure out what was causing her weird, debilitating symptoms. She finally, at long last, got a diagnosis that her doctors are pretty confident in: advanced Sjögren's.
Out of curiosity, I punched her symptoms into ChatGPT, and - without access to any test results - Sjögren's was its #2 guess, and it suggested diagnostic tests that she had already done and that had shown it was Sjögren's. Sjögren's actually isn't super-rare (roughly a percent of the population has it), but it's usually much milder, and it's very underdiagnosed.
I think AI tools are seriously underappreciated with respect to proposing new lines of investigation on hard-to-crack cases.
enn_nafnlaus t1_jdv8gdn wrote
Reply to comment by yaosio in [D] Will prompting the LLM to review it's own answer be any helpful to reduce chances of hallucinations? I tested couple of tricky questions and it seems it might work. by tamilupk
If you want to make life hard on an LLM, give it a spelling task ;)
The public seems to think these tasks should be easy for them - after all, they're "language models", right?
People forget that these models don't see letters but tokens, and there can be a variable number of tokens per word; tokens can even include the spaces between words. The model has to learn which characters, in order, make up every single token, and how tokens combine on spelling tasks. And it's not like humans tend to write that information out much (since we can just look at the letters).
It's sort of like giving a vocal task to a deaf person or a visual task to a blind person.
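To make the token thing concrete, here's a minimal sketch using OpenAI's tiktoken library (my pick purely for illustration - the encoding name and the exact splits it produces are model-dependent assumptions):

```python
# Rough sketch: show what an LLM actually "sees" for a word.
# Uses OpenAI's tiktoken (pip install tiktoken); "cl100k_base" is the
# encoding used by GPT-3.5/GPT-4-era models. Exact splits vary by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

for text in ["strawberry", " strawberry", "Strawberry"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    # The model only ever sees the opaque integer IDs, never the letters;
    # note how a leading space or capitalization changes the split entirely.
    print(f"{text!r} -> {ids} -> {pieces}")
```

None of the letter-level structure is visible in those IDs, which is exactly why questions like "how many r's are in strawberry?" trip models up.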