No-Performance-8745 OP t1_je42cpu wrote
Reply to comment by Sigma_Atheist in Commentary of the Future of Life Institute's Open Letter, and Why Emad Mostaque (Stability AI CEO) Likely Signed it by No-Performance-8745
Economic concerns are another issue posed by TAI, and I believe a pause on capabilities research would be valuable here as well, giving us time to plan economically for a post-TAI society. I would, however, urge you to consider the existential threat of AGI as a greater potential negative than economic collapse, and as something that could become very real very soon. I also think that many efforts toward preventing existential catastrophes will help us, both in a regulatory and a technical sense, to combat economic crises too.
It is very likely that similar (if not identical) organizations could be used to address both of these issues, meaning that by setting up measures against existential risks, we are also setting up measures against economic ones (and vice versa).