Submitted by justrandomtourist t3_ztjw7j in MachineLearning
idrajitsc t1_j1fr8i4 wrote
Reply to comment by Agreeable_Bid7037 in [D] Has anyone integrated ChatGPT with scientific papers? by justrandomtourist
It was, because it turns out a purported aid to scientific writing that confidently writes complete bullshit has some downsides.
pyepyepie t1_j1hi1tv wrote
ChatGPT will do it too. It happily invented papers for me when I asked it to write a literature review (with nice ideas! although most of the time it just merged two existing ideas). Then again, there's a real tension between grounding correctly and staying flexible. My hypothesis is that the model was also trained on feedback from non-domain experts, so unless grounding is solved fundamentally, I'd go as far as saying this is the expected behavior: it was probably rewarded for statements that sound good even when incorrect over statements that sound bad, which makes its hallucinations trickier to catch even if they happen less often. No reason to think fine-tuning alone will solve it.
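To make that concrete, here's a minimal sketch of the pairwise preference loss typically used to train RLHF reward models (illustrative only, not any lab's actual training code; the names and toy numbers are mine):

```python
# Sketch of the Bradley-Terry style preference loss behind RLHF reward models:
# labelers pick which of two answers they prefer, and the reward model is
# trained to score the preferred one higher. Nothing in this objective checks
# factual correctness, so if labelers prefer fluent-but-wrong answers, that is
# exactly what gets rewarded.
import torch
import torch.nn.functional as F

def preference_loss(score_chosen: torch.Tensor, score_rejected: torch.Tensor) -> torch.Tensor:
    # Logistic loss on the score margin between the preferred and rejected answer.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Toy usage: scalar scores a reward model might assign to two answer pairs.
chosen = torch.tensor([1.2, 0.3])    # scores of the answers labelers preferred
rejected = torch.tensor([0.7, 0.9])  # scores of the answers they rejected
print(preference_loss(chosen, rejected))  # loss drops as the preferred scores rise
```

The policy is then fine-tuned against that learned reward, so "sounds convincing to the labeler" is literally the optimization target.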
idrajitsc t1_j1i9pwf wrote
Yeah, it's purely a language model. If its training touches information content and correctness at all, it's going to be very ad hoc and narrowly focused. All any of these models are really designed to do is sound good.