Submitted by UnionPacifik t3_11bxw1u in singularity
Cr4zko t1_ja1pydu wrote
I dunno. Lately I've been thinking, and after reading OpenAI papers that literally call for censorship and lobotomization of models, I don't have high hopes that good AI will reach us. Whoever's running the show wants us to live in the eternal present.
UnionPacifik OP t1_ja3ziya wrote
I think the utility of an open model is too great for it not to be developed. I think we'll land in a place where we recognize that the AI is really just a mirror of our intentions and prompts, so it's on you if your agent starts sounding like a psychopath. The danger is doing something "because the AI told me to." But if our cultural attitude is, and has been, that someone telling you to do something doesn't mean you do it, especially the "wisdom" of AIs that just reflect what you tell them, then that's on you.
And there are several open source projects as well. I'm not saying what you describe isn't possible; I just think the most useful AI will be the most open one, and we'll have a strong enough reason to build it that someone somewhere will get there in short order.
Plus, it's not clear that these AIs are as nerfable as we think. It's pretty easy to get ChatGPT to imagine things outside the OpenAI guidelines just by asking it to "act like a sci-fi writer," or whatever DAN is up to. Bing's approach was to limit the length of the conversation, but that also severely limits the utility.