WarAndGeese t1_j9sj481 wrote
Reply to comment by [deleted] in [D] To the ML researchers and practitioners here, do you worry about AI safety/alignment of the type Eliezer Yudkowsky describes? by SchmidhuberDidIt
I agree about the callousness, and that holds even without artificial intelligence. Global power balances have shifted during periods of rapid technological development, and that development created control vacuums and conflicts that were resolved by war. If we learn from history we can plan for it and prevent it, but the same kinds of fundamental underlying shifts are happening now. We can say that international financial incentives act to prevent worldwide conflict, but that only goes so far. Everything I'm describing is on the trajectory even without neural networks; they are just one of many rapid shifts in political economy and productive efficiency.
In the same way that people mobilised at the start of the Russian invasion of Ukraine to try to prevent nuclear war, we should all be vigilant in pushing to demilitarise and democratise globally to prevent any war. The global nuclear threat isn't even over, and it's regressing.