odigon t1_j5rbs4d wrote
By far the greatest danger of artificial intelligence is that it will be achieved before we know how to do it safely. General AI isn't here yet. Very good narrow AI is: AlphaGo, Stockfish, ChatGPT. General AI that can do at least everything a human can may be decades away, maybe much more, maybe less. We know general AI is possible because humans have general intelligence, and humans are not magic; they are physical systems. It seems logical that if we can build a human-level AI, then we can increase the resources and build something that outperforms humans.

What will that look like? What will it do? Whatever it does, will we be able to stop it if we don't like the result? I honestly don't think we will; it will be able to fool us, or force us, not to stand in its way.
Here is a genuinely frightening series on AI safety by a guy called Robert Miles.
https://www.youtube.com/watch?v=pYXy-A4siMw&ab_channel=RobertMiles