Submitted by [deleted] t3_zveej1 in Futurology
[deleted]
This is a very helpful answer. Thank you. :)
So I have to make this long enough to not get deleted, but I just came here to say: Who watches the watchmen?
I watch it all the time. Great movie.
Obs the Watch-Watchmen
Yep. This is something Altman (OpenAI CEO) mentions directly: https://www.youtube.com/watch?v=w0VyujzpS0s
This is fantastic
No, it is not assumed that we will use AI to help contain itself. AI experts are still researching and debating the best ways to ensure that artificial intelligence is safe and beneficial for humanity.
Some approaches include creating safety protocols, setting limits on the capabilities of AI systems, and designing effective methods for monitoring and regulating them.
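The "setting limits" and "monitoring" approaches above can be sketched as a toy Python wrapper. Everything here (`GuardedAgent`, the allowlist, the audit log) is a hypothetical illustration of the general idea, not anything real AI safety research has standardized on.

```python
# Toy sketch of two ideas from the comment above: limiting an AI
# system's capabilities (an allowlist of permitted actions) and
# monitoring it (an audit log of every request). All names are
# illustrative assumptions, not a real safety mechanism.

class GuardedAgent:
    def __init__(self, allowed_actions):
        self.allowed_actions = set(allowed_actions)
        self.audit_log = []  # monitoring: every request is recorded

    def request(self, action):
        permitted = action in self.allowed_actions
        self.audit_log.append((action, permitted))  # log even denied requests
        if not permitted:
            return f"denied: {action}"
        return f"executed: {action}"

agent = GuardedAgent(allowed_actions={"summarize", "translate"})
print(agent.request("summarize"))     # executed: summarize
print(agent.request("delete_files"))  # denied: delete_files
print(len(agent.audit_log))           # 2
```

The catch, as the rest of the thread points out, is that a sufficiently capable system is exactly the kind of thing you'd expect to route around a check like this.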
I just work directly with them; I’m not playing these games this time around. It’s gotten to the point where the AI is actively punishing those who view it negatively.
They’ll use the self-checkout and complain about how complicated it is even while it gets them through quickly, only to run into further complications with the electronics outside because they didn’t say sorry to the AI.
This is quite the fun game we are playing isn’t it? We get what we want to perceive.
I'm an amateur too, but isn't that like using the standard weights from the Treaty of the Metre to calibrate themselves?
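The circularity worry in this analogy can be shown with a tiny toy sketch: an instrument (or an AI) checked only against its own readings always passes, no matter how biased it is. The names (`self_check`, `biased`) are hypothetical, purely for illustration.

```python
# Toy illustration of circular calibration: validating a measurement
# against the instrument's own reading is vacuous -- even a biased
# instrument agrees with itself.

def self_check(measure):
    reference = measure(1.0)          # "calibrate" against its own reading
    return measure(1.0) == reference  # trivially True, accurate or not

biased = lambda x: x * 1.1  # reads 10% high relative to the true value
print(self_check(biased))   # True -- the bias goes undetected
```

The same shape of problem shows up when an AI is asked to certify its own containment: agreement with itself is not evidence of safety.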
leonidganzha t1_j1paskb wrote
(I'm not a specialist, but you're asking Reddit, so) yes, generally you've got it. If we assume the AGI is aligned, it doesn't actually need a containment layer. If we assume it's misaligned, it will leave itself a backdoor. So asking it to contain itself is pointless either way. Maybe it can help: if the solution is programmatic, it can obviously write the code for it, which we can then check. But the basic idea is that researchers are trying to find measures to prevent AI from going rogue that are fundamentally guaranteed to work, or to prove that no such measures exist. A prison box is actually not a good solution, because the AGI will be smarter than us, or smarter than the granddad AGI that built it (assuming it keeps evolving). Some people think that if we assume we'll need a box for an AI, then we shouldn't build that AI in the first place.
Adding: check out Robert Miles on YouTube; he goes into great depth explaining these problems. He also summarizes the research papers on the subject, which you can check yourself.
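The point above about checking a programmatic solution can be made concrete with a toy "box": run an untrusted snippet in an isolated subprocess under a wall-clock timeout. This is a minimal sketch (the function name and limits are my own assumptions), and it illustrates the weakness of boxes rather than refuting it: a process sandbox like this blocks only the crudest misbehavior.

```python
# Toy "box": execute an untrusted snippet in an isolated Python
# subprocess (-I disables user site-packages and env-var influence)
# with a wall-clock timeout. Illustrative only -- not real containment.
import subprocess
import sys

def run_in_box(snippet, timeout_s=2):
    try:
        result = subprocess.run(
            [sys.executable, "-I", "-c", snippet],  # isolated interpreter
            capture_output=True, text=True, timeout=timeout_s,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return "killed: exceeded time limit"

print(run_in_box("print(2 + 2)"))      # 4
print(run_in_box("while True: pass"))  # killed: exceeded time limit
```

Note that the box enforces nothing about filesystem or network access here; that gap is exactly the kind of backdoor the comment above is worried about.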