Submitted by RedditPolluter t3_125nchg in MachineLearning
I've seen posts claiming this. There is a paper saying that self-scrutiny feedback loops can improve the performance of GPT-4 by 30%. I've experimented with feedback loops using the API and don't doubt that this can, or may in the future be able to, produce emergent behaviour. I'm no expert, but my surface-level understanding of transformers is that they would not create feedback loops just from prompting; they would merely respond as if they had.
If it were true, it would have significant economic implications, since running the feedback loop externally through the API multiplies the cost with each iteration.
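For reference, here is a minimal sketch of the kind of external self-scrutiny loop I mean, written against the older openai<1.0 Python client. The model name, prompts, and loop count are just illustrative; the point is that each answer/critique/rewrite pass is a separate billed API call.

```python
# Minimal sketch of an external self-scrutiny feedback loop (openai<1.0 style).
# Assumes OPENAI_API_KEY is set; model name and prompts are illustrative only.
import openai

def ask(messages):
    # One chat completion per call; each call is billed separately,
    # so total cost grows with every iteration of the loop.
    response = openai.ChatCompletion.create(model="gpt-4", messages=messages)
    return response["choices"][0]["message"]["content"]

def self_scrutiny(task, loops=2):
    answer = ask([{"role": "user", "content": task}])
    for _ in range(loops):
        # Ask the model to critique its own previous answer.
        critique = ask([
            {"role": "user", "content": task},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Critique the answer above. "
                                         "List any mistakes or omissions."},
        ])
        # Feed the critique back and ask for a revised answer.
        answer = ask([
            {"role": "user", "content": task},
            {"role": "assistant", "content": answer},
            {"role": "user", "content": "Here is a critique of your answer:\n"
                                         f"{critique}\n"
                                         "Rewrite the answer, fixing those issues."},
        ])
    return answer
```

With loops=2 this makes five API calls for a single final answer, which is where the cost multiplication comes from.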
itshouldjustglide t1_je4yj0x wrote
It seems to be capable of handling the request, but it's hard to tell how much of this is just a trick of the light and whether it's actually doing the reflection. It would probably help to know more about how the model actually works.