
AlFrankensrevenge t1_jegka1o wrote

There are so many half-baked assumptions in this argument.

  1. Somehow, pausing for 6 months means bad actors will get to AGI first. Are they really less than 6 months behind? Isn't their progress dependent on our progress, so that if we don't advance, they can't steal our advances? We don't know the answer to either of those questions.

  2. AGI is supposedly so powerful that the bad guys getting it first will "prolong suffering," presumably on a global scale, yet if we get it 6 months earlier we can avoid that. Shouldn't this extreme power imply instead that everyone approach AGI with extreme caution the closer we get to it? We need to shout from the rooftops how dangerous this is, and put in place international standards and controls, so that an actor like China doesn't push forward blindly in an attempt at world dominance, only to have it backfire spectacularly. Will that be easy? Of course not. Is it possible? I don't know, but we should try. This letter is one step in trying. An international coalition needs to come together soon.

I'm quite certain one will. Maybe not now with GPT-4, but soon, with whatever upgrade shocks us next. And then all of you saying how futile it is will forget you ever said that, and continue to think of yourselves as realists. You're not. You're shortsighted, self-interested cynics.
