Submitted by Kurohagane in r/singularity
I don't expect many people to see this, as it seems most dissenting voices get heavily downvoted. But for those who are more open-minded, I would urge you to consider this message.
I'm about to graduate with an MSc in Machine Learning. The pace of progress in the field is frankly absurd. I'm convinced that we might develop AGI within the next few years, and that, as things are going now, it would most likely be really, really bad for us. Like, human extinction-level bad. As to why, I would urge you to read at least a bit of the r/controlproblem FAQ page, which explains very succinctly, and much better than I could, why a benevolent AGI is the exception, not the rule. It only takes a few minutes to read.
To me, it is quite apparent that if we are to create something smarter than us, we should approach it with the utmost care. Why so many people here are so convinced that an AGI would automatically be beneficial is puzzling to me; I do not understand that leap of logic. I do want us to create an aligned intelligence; that would be amazing and would probably indeed usher in a utopian society. The crux of the problem is getting it right, because we literally only have one chance. It will be the last problem humanity faces, for better or worse.
I would urge anyone willing to listen to educate themselves on why AI alignment/safety is so important, and why it's so hard. Another good resource I would recommend is Rob Miles' YouTube channel. Some of you may recognize him from his appearances on Computerphile.
I understand that some of you are convinced this would be the best thing to happen in your lifetime. But personally, it fills me with a sense of dread and impending doom. Like climate change, but 100x worse and more imminent. I get that it's nice to be optimistic about it, but being so blindly accelerationist as to call anyone who says "maybe we should be careful with this" a Luddite is absurd.
Given that these might be the last few years of life as we know it, my plan for now is to enjoy the present and the company of my loved ones while I still can. I think this C. S. Lewis quote is quite relevant and helpful in that regard.
Edit: If you're still unconvinced about the dangers of AI but are open to more information, or if you just want some further reading, this is a very comprehensive and in-depth (and long) post explaining all the risks associated with AGI: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
Another good introduction to this topic is the pinned post in r/controlproblem.