Submitted by justrandomtourist t3_ztjw7j in MachineLearning
justrandomtourist OP t1_j1e876t wrote
Reply to comment by myUser9999 in [D] Has anyone integrated ChatGPT with scientific papers? by justrandomtourist
That’s a good suggestion, I forgot about Galactica. I will look into it.
Agreeable_Bid7037 t1_j1ej4hr wrote
I believe Galactica was taken down, though you can read the papers Meta published on it.
idrajitsc t1_j1fr8i4 wrote
It was, because a purported aid to scientific writing that confidently writes complete bullshit surprisingly has some downsides.
pyepyepie t1_j1hi1tv wrote
ChatGPT will do it too: it happily invented papers for me when I asked it to write a literature review (with nice ideas! though most of the time it just merged two existing ideas). Then again, we face a real trade-off between grounding correctly and flexibility. My hypothesis is that the model was trained partly on feedback from non-domain experts, so unless we solve grounding fundamentally I'd go as far as saying this is the expected behavior: it was probably rewarded for statements that sound good, even when incorrect, over statements that sound bad, which makes its hallucinations trickier to catch even if they happen less often. There's no reason to think fine-tuning will solve it.
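To make the hypothesis concrete, here's a minimal sketch of the pairwise preference loss used in RLHF-style reward modeling (my assumption about the general setup, not ChatGPT's actual training code; the tensor values are made up for illustration). The objective only sees which of two answers a rater preferred, so a confident-sounding fabrication can outscore a correct but awkwardly worded answer:

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry pairwise loss used in RLHF-style reward modeling.

    reward_chosen / reward_rejected are scalar scores the reward model
    assigns to the answer the rater preferred vs. the one they rejected.
    Nothing in this objective measures factual correctness -- only which
    answer the (possibly non-expert) rater liked better.
    """
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy example: if raters prefer a plausible-sounding fabrication over a
# hedged truth, minimizing this loss teaches the reward model to score
# the fabrication higher.
r_plausible_but_wrong = torch.tensor([2.3])
r_correct_but_awkward = torch.tensor([0.7])
print(preference_loss(r_plausible_but_wrong, r_correct_but_awkward))
```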
idrajitsc t1_j1i9pwf wrote
Yeah, it's purely a language model; if its training has anything to do with information content and correctness, it's going to be very ad hoc and narrowly focused. All any of these models are really designed to do is sound good.
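The base objective backs this up. Pretraining just minimizes next-token cross-entropy, which scores how likely text is, not whether it's true. A toy sketch (illustrative shapes and random data, not any lab's actual code):

```python
import torch
import torch.nn.functional as F

# Toy next-token objective: logits over a vocabulary vs. the actual next tokens.
# The loss only rewards assigning high probability to text that resembles the
# training data; a fluent fabrication and a true statement are scored the same way.
vocab_size, seq_len = 50_000, 8
logits = torch.randn(seq_len, vocab_size)           # model predictions per position
targets = torch.randint(0, vocab_size, (seq_len,))  # actual next tokens
loss = F.cross_entropy(logits, targets)
print(loss)  # correctness never enters this term
```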