Submitted by kvothekevin t3_1271vpb in Futurology
Evipicc t1_jee79u8 wrote
Reply to comment by robertjbrown in In a post-scarcity utopia, is there a real necessity of human labor of any kind? by kvothekevin
This is the take that drives me fucking insane...
We're not just going to roll over and let AI overtake the world... that's not how this is going to work. GPT-4 hired a person to beat a CAPTCHA during testing, that terrified plenty of top-level AI developers, and now there's a proposed moratorium while we discuss as a species how to move forward. Seriously, people think we're just going to attach AI to the nukes and end the world? Fear mongering and problem-focused thinking does nothing but stifle progress. If there's a problem, we fucking SOLVE IT. That's what we do.
robertjbrown t1_jefygr7 wrote
You think we're all just going to cooperate? "Discuss this as a species?" How's that going to work? Democracy? Yeah that's been working beautifully.
I don't think you've been paying attention.
You don't need to "attach AIs to the nukes" for them to do massive harm. All you need is one bad person using an AI to advance their own agenda. Or even an AI itself that was improperly aligned, picked up a power-seeking goal, and used manipulation (pretending to be a romantically interested human is one way) or threats (do what I say or I'll email everyone you know, pretending to be you, and send them all this homemade porn I found on your hard drive).
GPT-4, as we speak, is writing code for people, and those people are running that code without understanding it. I use it to write code and yes, it is incredible. It does it in small chunks, and I at least have the ability to skim over the code and see it isn't doing anything harmful. Soon it will write much larger programs, and the people running that code will be less experienced programmers than me. You don't see the problem there? Especially if the AI itself isn't ChatGPT, but some open-source model with the guardrails taken off? And this is all assuming the humans compiling and running the code aren't TRYING to do harm.
I mean, go look in your spam folder. By your logic, we'd all agree that deceptive spam is bad and stop doing it. Now imagine if every spam message was AI-generated, knew all kinds of things about you, could pretend to be people you know, was smarter than the spam filters, and wasn't restricted to email. What if you came to Reddit and had no clue who was a human and who wasn't?
I don't know where your idealistic optimism comes from. Here in the US, politics has gone off the rails, more because of social media than anything else. 30 years ago, we didn't have the ability for any Joe Blow to broadcast their opinion to the world. We didn't have algorithms that amplified whatever increased engagement (rather than looking at quality) at a massive scale. We now have a government controlled by people who spend the vast bulk of their energy fighting each other rather than solving problems.
Sorry this "drives you fucking insane," but damn. It's really, really naive to think we'll all work together and solve this because "that's what we do." No, we don't.
Formal-Character-640 t1_jefte5z wrote
No one is saying that AI will be attached to nukes. Stop making up irrelevant points to appear credible.
The issue is the mass public deployment and rapid advancement of AI at a pace we're not prepared for and at a level of disruption we don't fully understand. It's not fear mongering to demand that we do everything possible to guarantee the safety and prosperity of this and future generations. And so far there is little to no action from the government. The open letter is just that: a letter. This is a time-sensitive problem that we may not have a chance to fix if we fuck up now.