Sure_Cicada_4459 t1_jea3juc wrote

Good news: it's literally impossible. Even the assumption that it's feasible to track GPU accumulation, and therefore to crack down on training runs above a certain size, is very brittle. The incentive for obfuscation aside, we are just getting more efficient by the day, meaning anyone will soon be able to get GPT-n performance on their own hardware. Even many signatories acknowledge how futile it is, but they just want to signal that something needs to be done, for whatever reasons (fill in your blanks).
Bad news: there is a non-trivial risk of this dynamic blowing up in our faces. I just don't think restrictions are the way to go.

6

ShowerGrapes t1_jeaaln0 wrote

>we are just getting more efficient by the day, meaning anyone will soon be able to get GPT-n performance on their own hardware

yes, anyone will be able to train neural networks, but not the kind that makes simps like Musk tremble with fear. OpenAI spent $7 million on cloud computing costs alone to train GPT. it would be a trivial (and misguided) task to shut down future AI development.

1

Sure_Cicada_4459 t1_jeace5e wrote

Actually, no: https://twitter.com/summerlinARK/status/1599196885675544576?lang=de. And that is still an underestimate, because predicting ten years of algorithmic advances in the field of AI is silly. It doesn't even account for distillation, more publicly available datasets and models, multi-LLM systems,... There are so many dimensions in which this train is running that it makes you dizzy thinking about it, and it makes regulation look like nothing more than pure cope.
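For a rough sense of what that kind of compounding decline implies, here is a minimal back-of-the-envelope sketch. The starting cost and the annual decline rate are assumptions for illustration only, not figures taken from the linked tweet:

```python
# Back-of-the-envelope: compounding decline in training cost.
# ASSUMPTIONS (illustrative, not from the linked tweet):
#   - a GPT-3-class training run costs ~$4.6M in 2020
#   - hardware + algorithmic efficiency gains cut that cost ~70% per year
start_cost = 4_600_000   # USD, assumed 2020 training cost
annual_decline = 0.70    # assumed yearly fractional cost reduction

cost = start_cost
for year in range(2020, 2031):
    print(f"{year}: ${cost:,.0f}")
    cost *= (1 - annual_decline)
```

Under those assumed numbers, a multi-million-dollar run in 2020 falls to tens of dollars by 2030, which is exactly the kind of trajectory that makes hardware-based enforcement look shaky.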

1

ShowerGrapes t1_jeakin8 wrote

it's a suggested pause of 6 months, not 7 years. plus, i agree the pause is a dumb idea.

1