skywalkerze t1_jdbo396 wrote
Reply to comment by JoieDe_Vivre_ in Google and Microsoft’s chatbots are already citing one another in a misinformation shitshow by altmorty
It's in beta because it's wrong too often; it's not wrong because it's in beta. It's not as if declaring it finished would make it wrong less often.
It's not finished, and at its current stage it's spreading misinformation. Sure, if they fix it we should use it. But as it is now... maybe not.
cas13f t1_jdcclbf wrote
It's a language model. It's not meant to be a source of truth, or even to be right about whatever it's prompted to write about. It's meant to generate text blurbs in response to a prompt using word association and snippets from its ma-hoo-sive training set.
That's why it just makes up citations on many occasions: all the model cares about is that the citation is shaped a certain way grammatically, and that its contents are associated with the prompt.
Also why it can't do math. It's a language model.
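The "word association" point above can be sketched with a toy model. This is not how any real chatbot is implemented (real LLMs use neural next-token prediction over huge corpora); it's a deliberately tiny bigram model, with a made-up corpus, that only learns which word tends to follow which. It produces text that *looks* like its training data while having no notion of truth or arithmetic at all:

```python
import random
from collections import defaultdict

# Toy corpus (invented for illustration). A bigram model only records
# which word follows which -- pure word association, no facts.
corpus = (
    "the model predicts the next word from word association and "
    "the model makes up a citation because the citation looks right"
).split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n=10, seed=0):
    # Walk the association table: pick any word seen after the current
    # one. The output is shaped like the corpus, but means nothing.
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

Scaled up by many orders of magnitude, the same dynamic explains both the plausible-sounding prose and the confidently fabricated citations: the model optimizes for "looks like the training data", not "is true".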
What people need to do is stop fucking using it as Google. It is not a search engine, and it does not give a single fuck about the veracity of the generated text, only that the text looks the way it was trained to look.