
Rcomian t1_j0tfdpu wrote

Even if that rule were useful, I don't know how you'd enforce it.

It would require everyone who worked with AI to abide by this rule at every point in time. It would only take one researcher, or even a home user, breaking the rule to cause trouble.

It would also require everyone who used an AI to never generate another AI with its code.

And how would you know that the safeguards you're putting in place are secure? If you're making a general-purpose AI, it's basically a massive search algorithm, so you'd better be damn sure that every single way it could improve itself is locked out.
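As a toy illustration (purely made up, nothing like a real system), here's what I mean by "search will take any path you leave open": a tiny planner that can either work on its task or upgrade itself. Unless the self-modification action is explicitly removed, the search front-loads it, because it pays off on every later step.

```python
from itertools import product

def plan_search(actions, depth=10):
    """Exhaustively try every sequence of actions; return the best end state."""
    start = {"skill": 0.0, "power": 1.0}  # "power": how much each step achieves
    best = start
    for plan in product(actions.values(), repeat=depth):
        state = start
        for act in plan:
            state = act(state)
        if state["skill"] > best["skill"]:
            best = state
    return best

# "practice" pursues the objective directly; "self_modify" upgrades the
# optimizer instead, making every later "practice" step more effective.
actions = {
    "practice": lambda s: {"skill": s["skill"] + 0.1 * s["power"], "power": s["power"]},
    "self_modify": lambda s: {"skill": s["skill"], "power": s["power"] * 1.5},
}

print(plan_search(actions))    # best plan self-modifies first, then practices
del actions["self_modify"]     # the "rule": forbid self-improvement...
print(plan_search(actions))    # ...which only helps if you knew to block that exact path
```

The toy numbers don't matter; the point is that the optimizer finds the self-improvement route on its own, without anyone programming it in.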

I don't know if you've found it, but there are some great discussions of AI safety by Robert Miles, both on his own channel and on the Computerphile channel: https://youtu.be/w65p_IIp6JY

It's pretty bleak 😅


basafish OP t1_j0tomi4 wrote

Also, how can you ensure the AI won't find the vulnerabilities in your system on its own, hack it, and change its own source code? That's basically impossible once the AI reaches a certain level of intelligence. 🤣
