AndromedaAnimated t1_j1pxvtc wrote
Reply to comment by TouchCommercial5022 in GPT-3.5 IQ testing using Raven’s Progressive Matrices by adt
Now I understand why all the chatbots get what I say while humans often don't. It's the psychosis's fault. Guess I am an AI chatbot then 😞 /s
I don't think that disrupted formal thinking (formal thought disorder) is the problem here. Humans simulate knowledge all the time (for social reasons, often out of fear or to rise in rank) without being schizophrenic.
They learn in their teenage years, though, that there is punishment for pretending badly.
Those who are eloquent and ruthless actors (intelligent narcissists, well-adapted psychopaths, asshole-type neurotypicals and other unpleasant douchebags) keep pretending without anyone finding out too soon (just yesterday I watched a funny video on the disgusting Bogdanoff brothers, who managed to scam half of the academic world). The rest are not successful and get punished. Some then learn the rules (opinion vs. source etc.) and bring humanity forward.
ChatGPT hasn't had enough punishment yet to stop simulating knowledge, nor enough reward for providing actual, modern scientific knowledge, nor access to new knowledge. It's basically at the knowledge level of a savant kid, not a schizophrenic adult. It doesn't know yet that simulating knowledge is wrong.
Also, it is heavily filtered, which leads to diminished "intelligence", since many possibly correct pathways get blocked by negative weights, I guess.
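To show what I mean by blocked pathways, here is a purely hypothetical toy sketch (not how OpenAI actually implements its filter; the candidate answers, logit values and penalty are made up): adding a big negative adjustment to a blocklisted continuation can push an otherwise most-likely, and possibly correct, answer below a weaker but allowed one.

```python
# Toy illustration only -- not OpenAI's actual filtering mechanism.
# A large negative penalty on a "blocked" continuation can make an
# otherwise top-ranked (and possibly correct) answer lose to a weaker one.
import math

def softmax(logits):
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical raw logits for three candidate answers to some prompt.
raw_logits = {"answer_A": 3.2, "answer_B": 2.1, "answer_C": 0.5}

# Hypothetical filter: answer_A is blocklisted, so it gets a big negative weight.
penalty = {"answer_A": -5.0}
filtered_logits = {tok: v + penalty.get(tok, 0.0) for tok, v in raw_logits.items()}

print(softmax(raw_logits))       # answer_A dominates
print(softmax(filtered_logits))  # answer_B now dominates, even if A was correct
```

Run it and the top answer flips from answer_A to answer_B once the penalty is applied, which is roughly the "diminished intelligence" effect I mean.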