
Thatingles t1_irtjmyp wrote

One of the arguments for us being in a simulation is that its purpose is to train AGIs for whoever is running the simulation. Because if we could, it's what we might do.

The non-existence of grey goo covering the entire galaxy, turning everything into substrate for a runaway ASI, is certainly worth noting, but given the number of possible outcomes it's hardly a decisive piece of evidence.

The control problem will be solved, or not, exactly once, and we'll only find out by doing it, which is not a super appealing prospect to put it mildly. Trial and error won't be available to us (or at least, not for long...).

Personally, I think we should veer away from creating an ASI and head for the calmer waters of narrow AIs, which we could keep under control by introducing some limitation or flaw that lets us switch them off if they become troublesome. I'm hoping there is a big gap before ASI is possible, long enough for people to see it as an unnecessary and dangerous goal. Of course AGI is still dangerous, but it's also kind of inevitable, so that just has to be accepted and managed.
