stimulatedecho t1_jdhlv4w wrote
Reply to comment by Maleficent_Refuse_11 in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments, by QQII
>> nobody with a basic understanding of how transformers work should give room to this
I find this take to be incredibly naive. We know that incredible (and very likely fundamentally unpredictable) complexity can arise from simple computational rules. We have no idea how the gap is bridged from a neuron to the human mind, but here we are.
>> There is no element of critique and no element of creativity. There is no theory of mind, there is just a reproduction of what people said, when prompted regarding how other people feel.
Neither you nor anybody else has any idea what is going on inside these models, and all these statements of certainty leave me shaking my head.
The only thing we know for certain is that the behavioral complexity of these models is increasing rapidly. We have no idea what the associated internal states may or may not represent.
stimulatedecho t1_jdzxtb6 wrote
Reply to [P] ChatGPT Survey: Performance on NLP datasets by matus_pikuliak
"complex reasoning is perhaps the most interesting feature of these models right now and it is unfortunately mostly absent from this survey"
Bingo. It is also the hardest to quantify; it's one of those "I know it when I see it" sorts of behaviors. It is easy to imagine how one might harness that ability to reason to solve all sorts of problems, including (but certainly not limited to) improving benchmark performance. I think that is what has a lot of people excited.