Submitted by mvujas t3_zo5imc in MachineLearning
CalligrapherFine6407 t1_j0o20wo wrote
Side Question:
Why does ChatGPT always sound so confident even when it's wrong?
Nameless1995 t1_j0olnhi wrote
One reason for confident-sounding responses could be that the internet data it is trained on generally consists of confident-sounding answers. Many humans also confidently think they are right while being wrong. Besides, it doesn't have the ability, nor is it exactly trained, to model "truthfulness". So it may just maintain the confident-sounding style indiscriminately, whether it's speaking truth or fiction (although it can probably adopt a "less confident" attitude if explicitly asked to role-play as such, but then it may just be less confident indiscriminately).
That said, OpenAI may have found some ways to make it more cautious: not necessarily adopting a less confident style, but declining to respond when it is more "uncertain" (probably based on perplexity or something; I don't know exactly how they enforce cautiousness).
See:
https://openai.com/blog/chatgpt/
> ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.
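To make the "decline when uncertain" idea concrete: here is a minimal sketch of perplexity-based gating, purely as an illustration of what the comment above speculates about. This is not how OpenAI actually implements cautiousness (the blog post doesn't say), and the model (`gpt2`), threshold, and helper names are all assumptions for the example.

```python
# Hypothetical sketch: gate an answer on the model's own perplexity over it.
# Illustrative only -- not OpenAI's actual mechanism, which is not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model; ChatGPT's weights are not available
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def answer_perplexity(prompt: str, answer: str) -> float:
    """Perplexity of the answer tokens given the prompt, under the model."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + answer, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Score only the answer tokens, not the prompt.
    answer_len = full_ids.shape[1] - prompt_ids.shape[1]
    # Logits at position t predict the token at position t+1.
    shift_logits = logits[0, -answer_len - 1:-1]
    shift_labels = full_ids[0, -answer_len:]
    log_probs = torch.log_softmax(shift_logits, dim=-1)
    token_log_probs = log_probs[torch.arange(answer_len), shift_labels]
    return torch.exp(-token_log_probs.mean()).item()

PERPLEXITY_THRESHOLD = 50.0  # arbitrary cutoff chosen for illustration

def cautious_answer(prompt: str, answer: str) -> str:
    """Return the answer only if the model finds it 'unsurprising' enough."""
    if answer_perplexity(prompt, answer) > PERPLEXITY_THRESHOLD:
        return "I'm not sure about that."
    return answer
```

As the quoted blog post notes, this kind of thresholding is a blunt instrument: tune it to be cautious and the model starts declining questions it could answer correctly.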