Submitted by spiritus_dei t3_10tlh08 in MachineLearning
mr_birrd t1_j77rkjd wrote
If an LLM tells you it would rob a bank, that doesn't mean the model would actually do so if it could walk around. It just means that statement has a high likelihood under the language distribution learned from the training data. And if it's ChatGPT, the response is also tuned to suit human preferences.
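A toy illustration of that point: a bigram model assigns probability to a word sequence purely from co-occurrence counts in its corpus. The made-up corpus and the `sequence_likelihood` helper below are hypothetical, but they show how "the model said it" just means "that string scores high under the data", not intent.

```python
from collections import Counter

# Hypothetical toy corpus standing in for training data.
corpus = "the model would rob a bank the model would not rob a bank".split()

# Count bigrams and their left contexts to estimate P(next | current).
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def sequence_likelihood(tokens):
    """Product of conditional bigram probabilities for a token sequence."""
    p = 1.0
    for prev, nxt in zip(tokens, tokens[1:]):
        p *= bigrams[(prev, nxt)] / contexts[prev]
    return p

# High-likelihood sequences are just those the counts favor.
print(sequence_likelihood("the model would rob".split()))  # 0.5
print(sequence_likelihood("rob a bank".split()))           # 1.0
```

An RLHF-tuned chat model adds a further layer on top of this: its outputs are reweighted toward what human raters prefer, which is still a property of the data, not of the model's "wants".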
DoxxThis1 t1_j77z3s1 wrote
A model can't walk around, but an unconstrained model could persuade gullible humans to perform actions on its behalf.
The idea was explored in the movie Colossus.
mr_birrd t1_j783pta wrote
Well, very many humans can persuade gullible humans to perform actions on their behalf. The problem is people. In fact, I would trust an LLM more than the average human.