Submitted by demauroy t3_11pimea in Futurology
Surur t1_jbyh7ee wrote
Reply to comment by Jasrek in ChatGPT or similar AI as a confidant for teenagers by demauroy
Why do you keep talking about hiding a bruise? The tweet is about a 30-year-old taking a 13-year-old child out of state for sex.
The issue is that while ChatGPT may present as an adult, a real adult would have an obligation to make a report, especially when acting in a professional capacity (working for Microsoft or Snap, for example).
I have no issue with ChatGPT working as a counsellor, but it will have to show appropriate professional judgement first, because, unlike a random friend or web page, it does represent Microsoft and OpenAI, including morally and legally.
Jasrek t1_jbyi94y wrote
It's two tweets down in the same thread by the same guy. Did you finish reading what you linked?
In my experience, ChatGPT very blatantly presents itself as a computer program. I've asked it to invent a fictional race for DND and it prefaced the answer by reminding me it was a computer program and had no actual experience with orcs.
If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
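(Editor's note: the disclaimer idea above is mechanically trivial, since a chat front end can prepend a fixed system instruction to every conversation. Below is a minimal sketch, assuming the OpenAI Python SDK; the model name, disclaimer wording, and `chat` helper are illustrative, not anything a vendor actually ships.)

```python
# Sketch: force an "I am a computer program" disclaimer at the start of every chat.
# Assumes the OpenAI Python SDK (v1); model name and wording are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DISCLAIMER = "I am a computer program and not a real-life adult human being."

def chat(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # System instruction: open the reply with the disclaimer verbatim.
            {"role": "system",
             "content": f"Begin your first reply with exactly: '{DISCLAIMER}'"},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(chat("Can you keep a secret for me?"))
```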
Surur t1_jbyif2t wrote
> If your concerns would be met by the program beginning each conversation with a disclaimer of "I am a computer program and not a real life adult human being", then I'm perfectly fine with that and support your idea.
My concern is around children. A disclaimer would not help.
> If your concern is that a chat program needs to be advanced enough to have "moral and legal" judgement, well, I guess you can come back in 15 years and see if we're there yet.
I don't think we need 15 years. Maybe even one is enough. What I am saying is that when it comes to children, a lot more safety work needs to happen.
Jasrek t1_jbyiwdw wrote
>My concern is around children. A disclaimer would not help.
Then I'm still questioning what you think would help. Your suggestions so far have been to imbue a computer program with professional judgement, an understanding of morality and ethics, and safeguarding training.
If you know how to do this, you've already invented AGI.
>I don't think we need 15 years. Maybe even 1 is enough. What I am saying is when it comes to children a lot more safety work needs to happen.
You're more optimistic than I am. My expectation is that there will be a largely symbolic uproar because some kid was able to Google "how do I keep a secret" by using a chat program, and nothing of any actual benefit to any children will occur.
Surur t1_jbyjw78 wrote
Do you think ChatGPT got this far magically? OpenAI uses reinforcement learning from human feedback (RLHF) to teach the neural network which kinds of expressions are appropriate and which ones are inappropriate.
Here is a 4-year-old 1-minute video explaining the technique.
For ChatGPT, the feedback was provided by Kenyans, and maybe they did not have as much awareness of child exploitation.
Clearly, there have been some gaps, and more work has to be done, but we have come very far already.
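(Editor's note: to make the RLHF point above concrete, the core of the technique is a reward model trained on human preference pairs, which is then used to steer the chat model toward responses the raters marked as appropriate. Below is a minimal PyTorch sketch of that reward-modelling step, with random embeddings standing in for real labelled response pairs; it illustrates the general technique, not OpenAI's actual pipeline.)

```python
# Minimal sketch of the reward-modelling step behind RLHF.
# Hypothetical shapes and model; toy data stands in for human-labelled pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a response embedding; higher = rated more appropriate by humans."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(response_embedding).squeeze(-1)

def preference_loss(reward_model, chosen, rejected):
    # Bradley-Terry pairwise loss: push the human-preferred ("appropriate")
    # response to score higher than the rejected ("inappropriate") one.
    r_chosen = reward_model(chosen)
    r_rejected = reward_model(rejected)
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy training step on random embeddings standing in for labelled response pairs.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
chosen, rejected = torch.randn(8, 768), torch.randn(8, 768)
opt.zero_grad()
loss = preference_loss(model, chosen, rejected)
loss.backward()
opt.step()
```

In the full pipeline the reward model's scores feed a policy-optimisation step (PPO in OpenAI's published description), which is where any gaps in the raters' awareness, as discussed above, would propagate into the deployed model.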
Jasrek t1_jbykaqc wrote
I hope you're right. I've never seen anything good happen when people start screaming 'think of the children' about new technology. I'll check back in with this thread in a year, see how things have gone.