Submitted by mrx-ai t3_zgr7nr in MachineLearning
liquiddandruff t1_izkq9l5 wrote
Reply to comment by Flag_Red in [R] Large language models are not zero-shot communicators by mrx-ai
One naive explanation is that since ChatGPT is, at its core, a text predictor, prompting it in a way that minimizes leaps of logic (i.e., making each inference step build slowly so it can't jump to conclusions) makes it more likely to respond coherently and correctly.
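A minimal sketch of what this looks like in practice: the same implicature question phrased as a one-shot prompt versus a prompt that forces small inference steps. The Esther/Juan wording and the helper names here are illustrative, not from the paper or any real API; no model is actually called.

```python
# Sketch: two ways to phrase an implicature question for a text predictor.
# The "stepwise" version makes each inference step explicit, so the model
# builds toward the answer instead of leaping straight to a conclusion.

def direct_prompt(utterance: str, question: str) -> str:
    """One big leap: ask for the final answer immediately."""
    return (
        f'Esther asked "{question}" and Juan responded "{utterance}". '
        "Did Juan mean yes or no?"
    )

def stepwise_prompt(utterance: str, question: str) -> str:
    """Build the inference gradually, one small step at a time."""
    return (
        f'Esther asked "{question}" and Juan responded "{utterance}".\n'
        "First, state what Juan's response literally says.\n"
        "Second, state what that implies in the context of the question.\n"
        "Finally, answer: did Juan mean yes or no?"
    )

if __name__ == "__main__":
    q = "Are you coming to the party on Friday?"
    u = "I have to work late tonight"
    print(direct_prompt(u, q))
    print()
    print(stepwise_prompt(u, q))
```

Either string would then be sent to the model as-is; the claim in the comment is just that the second form tends to elicit more coherent answers.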