Submitted by Cool_Abbreviations_9 t3_123b66w in MachineLearning
was_der_Fall_ist t1_je3ng6m wrote
Reply to comment by bartvanh in "[D] GPT-4 might be able to tell you if it hallucinated" by Cool_Abbreviations_9
Maybe that’s part of the benefit of using looped internal-monologue/action systems. By iteratively storing thoughts and actions in the context window, the model no longer has to use its weights to “re-think” every earlier thought each time it predicts a token. It can think more effectively by spending its computation on new operations that take those stored thoughts and actions as their starting point.
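A minimal sketch of that loop, where each thought is appended to the context so later steps can build on it rather than re-derive it. The `generate` function here is a hypothetical stand-in for a real model call, not any particular API:

```python
# Sketch of a looped internal-monologue ("scratchpad") agent.
# `generate` is a hypothetical stand-in for an LLM call: it maps the
# accumulated context to the next thought or a final answer.

def generate(prompt: str) -> str:
    # Toy stand-in: a real system would call a language model here.
    step = prompt.count("Thought:")
    if step < 3:
        return f"Thought: intermediate reasoning step {step + 1}"
    return "Answer: done"

def run_agent(task: str, max_steps: int = 10) -> str:
    # Thoughts persist in the context window, so earlier reasoning
    # does not have to be recomputed by the weights on every step.
    context = f"Task: {task}\n"
    for _ in range(max_steps):
        output = generate(context)
        context += output + "\n"  # store the thought for later steps
        if output.startswith("Answer:"):
            return output
    return "Answer: (step limit reached)"

print(run_agent("example task"))
```

The key design point is that `context` grows monotonically: each prediction sees all prior thoughts, so the loop trades weight-level recomputation for cheap context reads.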