Submitted by xutw21 t3_ybzh5j in singularity
Grouchy-Friend4235 t1_itz20o2 wrote
Reply to comment by kaityl3 in Large Language Models Can Self-Improve by xutw21
Absolutely parroting. See this example. A three-year-old would give a more accurate answer. https://imgbox.com/I1l6BNEP
These models don't work the way you think they do. It's just math. There is nothing in these models that could even begin to "choose words". All there is is a large set of formulae with parameters tuned so that most inputs produce a plausible response. Within the model everything is just numbers. The model does not even see words, not ever(!). All it sees are bare numbers that someone has assigned to them (someone being the humans who built the mappers from words to numbers and vice versa).
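To make the word-to-number mapping concrete, here is a minimal sketch in Python. The vocabulary is made up for illustration; real tokenizers (e.g. BPE) are learned from data and split text into subword pieces rather than whole words, but the principle is the same: the model only ever sees the integer ids.

```python
# Hypothetical toy vocabulary: the human-built word <-> number mapper.
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}
inv_vocab = {i: w for w, i in vocab.items()}

def encode(text):
    # Words outside the vocabulary collapse to a catch-all "<unk>" id.
    return [vocab.get(w, vocab["<unk>"]) for w in text.lower().split()]

def decode(ids):
    # Map the model's output numbers back to words for the human reader.
    return " ".join(inv_vocab[i] for i in ids)

print(encode("the cat sat"))  # [0, 1, 2]
print(decode([2, 1]))         # sat cat
```

Everything between `encode` and `decode` — the model proper — operates purely on those integers.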
There is no thinking going on in these models, not even a little, and most certainly there is no intelligence. Just repetition.
All intelligence that is needed to build and use these models is entirely human.