wind_dude t1_j9rvmbb wrote
Reply to comment by royalemate357 in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
When they scale, they hallucinate more and produce more wrong information, thus arguably getting further from intelligence.
royalemate357 t1_j9rzbbc wrote
>When they scale they hallucinate more, produce more wrong information
Any papers/literature on this? AFAIK they do better and better on fact/trivia benchmarks and whatnot as you scale them up. It's not like smaller (GPT-like) language models are factually more correct ...
wind_dude t1_j9s1cr4 wrote
I'll see if I can find the benchmarks. I believe there are a few papers from IBM and DeepMind talking about it, and a benchmark study in relation to FLAN.
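For context on what a fact/trivia benchmark comparison across model sizes looks like in practice, here is a minimal sketch (not from the thread): it scores whether a model assigns higher likelihood to a correct completion than an incorrect one, and repeats this for a smaller and a larger model of the same family. The model names (`gpt2`, `gpt2-large`) and the tiny fact set are illustrative assumptions; real benchmarks use far larger curated datasets.

```python
# Sketch: probe factual preference across model scales by comparing the
# log-likelihood a model assigns to a correct vs. incorrect completion.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Tiny illustrative fact set: (prompt, correct completion, incorrect completion).
FACTS = [
    ("The capital of France is", " Paris", " Berlin"),
    ("Water is made of hydrogen and", " oxygen", " helium"),
]

def sequence_logprob(model, tokenizer, prompt, completion):
    """Sum of log-probabilities the model assigns to `completion` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i of the shifted logits predicts token i+1 of the input.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    completion_positions = range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1)
    return sum(log_probs[pos, full_ids[0, pos + 1]].item() for pos in completion_positions)

for name in ["gpt2", "gpt2-large"]:  # smaller vs. larger model of the same family
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name).eval()
    correct = sum(
        sequence_logprob(model, tokenizer, p, good) > sequence_logprob(model, tokenizer, p, bad)
        for p, good, bad in FACTS
    )
    print(f"{name}: {correct}/{len(FACTS)} facts preferred")
```

On a handful of prompts like this the result is noisy either way; the point of the sketch is only to show the comparison setup being argued about, not to settle whether scaling helps or hurts factual accuracy.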