Comments
acutelychronicpanic OP t1_jefnjxc wrote
Share it and talk about it!
Bewaretheicespiders t1_jeg2m6g wrote
Imagine if we tried to halt the development of the tractor because of potential impacts on farm work.
acutelychronicpanic OP t1_jefa2tg wrote
It is critical that AI development not be concentrated in the hands of only a few big players. Large corporations, military research labs, and authoritarian regimes will not pause their research, only hide it. There is too much on the table.
By enabling the distributed development of AI research, particularly with regard to alignment, we can ensure that AI will be more likely to serve everyone.
Concentrated development amplifies the risk of AI catastrophe by creating a fragile system where, once AGI is developed, even a minor misalignment may be unfixable because there are no counterbalancing forces.
Distributed development means that yes, there will be more instances of mistakes and misuse, but these will be more limited in scope and less likely to lead to total human extinction or subjugation by an AGI system that *almost* shares our views.
We may be some years off from real AGI now, which is why this is a critical time to ensure the distribution of the technology and prevent any single faction or actor from acquiring such a lead that it can set the terms of our future.
The above are my thoughts on the matter and do not represent the views of LAION (which I am not affiliated with), although there is overlap.
TemetN t1_jefmruc wrote
This is getting a disappointing lack of attention. It's a good idea on multiple levels, given how much potential this technology has and how directly it addresses the potential issues.