Submitted by sigul77 t3_10gxx47 in Futurology
Surur t1_j57w5tj wrote
Reply to comment by songstar13 in How close are we to singularity? Data from MT says very close! by sigul77
I imagine you understand that LLMs are a bit more sophisticated than Markov chains, that GPT-3, for example, has 175 billion parameters, which correspond roughly to the connections between neurons in a brain, and that the weights of these connections influence which word the system outputs.
These weights allow the LLM to see the connections between words and grasp concepts much like you do. Sure, they do not have a visual or intrinsic physical understanding, but they do have clusters of 'neurons' that activate for both 'animal' and 'cat', for example.
In short, a Markov chain uses a look-up table to predict the next word, while an LLM like GPT-3 uses a multi-layer (96-layer) neural network with 175 billion connections, tuned on nearly all the text on the internet, to choose its next word.
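To make the contrast concrete, here is a minimal sketch of the Markov-chain side (not anything from the comment above, just an illustration): the entire "model" is a look-up table mapping each word to the words observed immediately after it, with no weights, layers, or notion of meaning.

```python
import random
from collections import defaultdict

def build_table(text):
    """Build the look-up table: word -> list of words seen right after it."""
    table = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def next_word(table, word):
    """Predict the next word by sampling from the look-up table."""
    candidates = table.get(word)
    return random.choice(candidates) if candidates else None

table = build_table("the cat sat on the mat and the cat slept")
print(next_word(table, "cat"))  # randomly "sat" or "slept"
```

Everything the model "knows" is pairs of adjacent words from its training text; an LLM instead passes the whole context through its network, so the same word can lead to very different continuations depending on what came before.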
Just because it confabulates sometimes does not mean it's all smoke and mirrors.
songstar13 t1_j58shuh wrote
Thank you for the more detailed explanation! I was definitely underestimating how much more complex some of these AI models have become.