Ducky181 t1_iw74m0s wrote
Reply to comment by lughnasadh in The CEO of OpenAI had dropped hints that GPT-4, due in a few months, is such an upgrade from GPT-3 that it may seem to have passed The Turing Test by lughnasadh
Besides just making the neural network larger, what other techniques could they employ to improve the accuracy of GPT-4 compared to its predecessor, GPT-3?
sext-scientist t1_iw77ylt wrote
Size is almost certainly the main limitation of these models. Recent research into how human brains process information suggests that current-generation language models have 6-9 orders of magnitude less compute than the human brain.
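For anyone curious how that kind of gap estimate is usually put together, here is a minimal back-of-envelope sketch in Python. Every figure in it is an assumed, commonly cited ballpark estimate (GPT-3's ~175B parameters, ~10^14-10^15 synapses, ~10-100 synaptic events per second), not a number taken from the research the comment refers to, and the result depends entirely on which estimates and which notion of "compute" you pick:

```python
import math

# Illustrative order-of-magnitude comparison. All figures below are assumed,
# commonly cited ballpark estimates, not measurements.

# GPT-3 scale: ~1.75e11 parameters, ~2 FLOPs per parameter per generated token.
gpt3_params = 1.75e11
gpt3_flops_per_token = 2 * gpt3_params            # ~3.5e11 FLOPs per token

# Assume the model generates ~10 tokens per second when serving one user.
gpt3_ops_per_sec = gpt3_flops_per_token * 10      # ~3.5e12 FLOPs per second

# Human brain scale: ~1e14-1e15 synapses, each active ~10-100 times per second,
# counting one synaptic event as roughly one operation.
brain_ops_per_sec_estimates = (1e14 * 10, 1e15 * 100)   # ~1e15 to ~1e17 ops/s

for brain_ops in brain_ops_per_sec_estimates:
    gap = math.log10(brain_ops / gpt3_ops_per_sec)
    print(f"gap ≈ {gap:.1f} orders of magnitude")

# Under these particular assumptions the gap comes out around 2-5 orders of
# magnitude. Estimates that treat each biological neuron as equivalent to a
# small multi-layer network (as some single-neuron modelling work suggests)
# multiply the brain-side figure by several more orders, which is how numbers
# like "6-9 orders of magnitude" arise.
```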
Hardware-wise, hopefully 3D-stacked silicon and smaller process nodes will narrow that gap over the next few years.
avatarname t1_ix5auxp wrote
I do wonder sometimes if our intelligence is just a matter of scaling these things up, with some tweaking. We tend to think we are oh so imaginative and inventive, and then on YouTube I discover that I have left pretty much the same comment, only worded differently, 13 years ago, 6 years ago, and now, on the same video I had forgotten I'd watched before :D