Submitted by BronzeArcher t3_1150kh0 in MachineLearning
BronzeArcher OP t1_j8z7yuo wrote
Reply to comment by mocny-chlapik in [D] What are the worst ethical considerations of large language models? by BronzeArcher
As in they wouldn’t interpret it responsibly? What exactly is the concern related to them not understanding?
currentscurrents t1_j8zz4n3 wrote
Look at things like replika.ai that give you a "friend" to chat with. Now imagine someone evil using that to run a romance scam.
Sure, the success rate is low, but it can go after millions of potential victims at once, and the cost of operation is almost nothing compared to a human-run scam.
On the other hand, it also gives us better tools to protect against it: we can use LLMs to examine messages and spot scams. And people lonely enough to fall for a romance scam may end up compensating for their loneliness by chatting with friendly or sexy chatbots instead.
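As a rough sketch of what that could look like, here's a zero-shot classifier from Hugging Face pressed into service as a scam flagger. The model choice, labels, and threshold are just placeholder assumptions, not a vetted detector:

```python
# Minimal sketch: flag messages that look like romance scams with an
# off-the-shelf zero-shot classifier. Model, labels, and threshold are
# illustrative assumptions, not a production-ready scam detector.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def looks_like_scam(message: str, threshold: float = 0.8) -> bool:
    labels = ["romance scam", "ordinary conversation"]
    result = classifier(message, candidate_labels=labels)
    # result["labels"] is sorted by score, highest first
    return result["labels"][0] == "romance scam" and result["scores"][0] >= threshold

print(looks_like_scam(
    "I love you so much, but I need $2,000 for a plane ticket to visit you."
))
```

In practice you'd probably want a model fine-tuned on actual scam conversations rather than a generic entailment model, but the point is that the same tech that scales the attack also scales the defense.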
ilovethrills t1_j90noyx wrote
But that can be said on paper for thousands of things; not sure it actually translates into real life. Although there might be some push to label such content as AI-generated, similar to how "Ad" and "Promoted" are labelled in search results.
mocny-chlapik t1_j91uejr wrote
Yeah, I mean people with mental illness (e.g. schizophrenia), people with debilitatingly low intelligence, and similar cases. Who knows how they would interact with seemingly intelligent LLMs.