Rofel_Wodring t1_jblcp2k wrote
Honestly? It's unclear. If you'd asked me five or even two years ago, I'd have voted for dystopia.
But AI is advancing extremely, extremely quickly. More quickly than I'd ever dreamed. The billionaire overlords just straight up might not have the time to deploy their infinitely loyal robot cops in a way that cements control, because AI is advancing so quickly that rank-and-file nobodies can deploy comparable resources against the overclass.
I especially claim this because I don't think the future of AI will look anything like what we've seen in classical sci-fi. It'll be less like Terminator or the movie A.I. or even The Matrix and more like... more like a cheesy isekai anime. This is because distributed intelligence is advancing much more quickly than the unitary intelligence that so many classic AI characters are modeled on.
So I don't know where this is all going to lead. It might lead somewhere really bad. But I can guarantee you that if humanity does meet its end (and it's not in the next decade), it won't be from traditional calamities like disease or nuclear warfare or even climate change.
When Chat-GPT4 comes out next week, I think THAT will be the turning point for other people realizing that our old politics and perspectives won't serve us.
EDIT: GPT-4. Okay, so move my timeline up by a few weeks.
Rofel_Wodring t1_jbld7b0 wrote
I can still think of a lot of ways this can go very, very poorly. For example, we hit a computational bottleneck way earlier than space colonization allows us to expand -- with all of the resource crises that entails, especially since intelligence will now be a resource.
Regardless, the dystopia won't look anything like it did in classic sci-fi movies.