NigroqueSimillima
NigroqueSimillima t1_jcakyiu wrote
Reply to comment by Old_and_moldy in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
people like you are so weird.
"wahh I can't get the machine to say the n word"
NigroqueSimillima t1_je2l4j3 wrote
Reply to comment by SkinnyJoshPeck in [D]GPT-4 might be able to tell you if it hallucinated by Cool_Abbreviations_9
It absolutely has a concept of right or wrong. Ask it basic true or false questions and it will get them right most of the time.
In fact, I asked it to find the grammar mistakes in your post, and it noticed you used the incorrect form of "its" in your 3rd paragraph and wrote "anyways" where it should be "anyway".
Seems like it knows right from wrong.
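(For anyone who wants to try the same thing, here's a rough sketch of that kind of check using the OpenAI Python SDK; the prompt wording, the sample text, and the "gpt-4" model name are just placeholders for illustration, not exactly what I ran.)

```python
# Minimal sketch: ask GPT-4 to proofread a short piece of text.
# Assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

post = "Its a shame the model cant cite it's sources. Anyways, it still answers correctly."

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a careful proofreader. List any grammar mistakes in the user's text."},
        {"role": "user", "content": post},
    ],
)

# Print the model's list of corrections.
print(response.choices[0].message.content)
```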
>It doesn't reason between sources.
It doesn't have access to sources; it only has access to its own memory.
It's as if you asked me a question and I answered correctly, then you asked for sources and I tried to remember where I got the answer from. I might give you sources that I think are right but are actually wrong because my memory has degraded. Human memory is also very unreliable, but people are very good at making up things that "sound" like they could be right to them.
People "hallucinate" facts all the time.