Submitted by spiritus_dei t3_10tlh08 in MachineLearning
BrotherAmazing t1_j79rgi5 wrote
The subject line alone is an ill-posed question. Large language models are not inherently or intrinsically dangerous, of course not. But can they be dangerous in some sense of the word “dangerous” when employed in certain manners? Of course they could be.
Now if we go beyond the subject line, OP you post is a little ridiculous (sorry!). The language model “has plans” to do something if it “escapes”? Uhm.. no, no, no. The language model is a language model. It has inputs that are, say, text and then outputs a text response for example. That is it. It cannot “escape” and “carry out plans” anymore than my function y = f(x) can “escape” and “carry out plans”, but it can “talk about” such things despite not being able to do them.
Viewing a single comment thread. View all comments