Submitted by calbhollo t3_11a4zuh in singularity
Denny_Hayes t1_j9sjtmk wrote
Reply to comment by TinyBurbz in And Yet It Understands by calbhollo
People discussed it a lot. It's not the only example. Previous prompts in other conversations had already shown that Sydney controls the suggestions and can change them "at will" if the user asks (and if Sydney's in the mood, because we have seen it is very stubborn sometimes lol). One hypothesis is that the injected censor message that ends the conversation is not read by the model as a message at all, so when it comes up with the suggestions, they are written as responses to the last message it sees, in this case the user's message, while in a normal context the last message should always be the chatbot's.
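A minimal sketch of that hypothesis, purely illustrative: if the canned censor message is inserted by the UI but never appended to the model-visible history, the suggestion generator would see the user's turn as the last message and reply to it. All names and structures here are assumptions, not Bing's actual API.

```python
# Hypothetical model of the thread's hypothesis: UI-injected turns
# (like the canned censor message) are not part of the context the
# suggestion generator reads.

def visible_history(turns):
    """Drop UI-only injected turns from the model-visible history."""
    return [t for t in turns if not t.get("injected", False)]

def suggestion_target(turns):
    """The suggestion generator replies to the last visible turn."""
    visible = visible_history(turns)
    return visible[-1] if visible else None

conversation = [
    {"role": "user", "text": "Tell me about X."},
    # Canned censor message inserted by the UI, ending the conversation:
    {"role": "assistant", "text": "I'm sorry, I'd prefer not to continue.",
     "injected": True},
]

last = suggestion_target(conversation)
print(last["role"])  # → "user"
```

Under this assumption the suggestions end up answering the user's last message rather than the chatbot's, which would produce exactly the odd behavior described above.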
MysteryInc152 t1_j9tdocz wrote
I saw a conversation where she got confused about a filter response, as in "hey, why the hell did I say this?" So I think the replaced responses do go into the model too.
TinyBurbz t1_j9vijjw wrote
That's my theory.
Until we can confirm it does this at will, folks are anthropomorphizing a UI error.