Submitted by dracount t3_zwo5ey in singularity
AsheyDS t1_j1wm5cr wrote
Reply to comment by Calm_Bonus_6464 in Concerns about the near future and the current gatekeepers of AI by dracount
>If we have beings infinitely more intelligent than us, there's no possible way we can retain control.
Infinitely more intelligent, sure. But no AI/AGI is going to be infinitely intelligent.
GalacticLabyrinth88 t1_j1x37lm wrote
Theoretically, AI/AGI can and will become infinitely intelligent relative to our organic perspective, because it will possess the ability of recursive self-improvement. It's already happening with AI art: the models responsible used to train on art produced by humans to create their own artworks; now they're training on previously generated AI artworks in order to create even better AI art, and so on and so forth. AI will become more and more intelligent on an exponential scale because of how quickly it can advance, thinking millions of times faster than the human brain and arriving at solutions faster as well.
AI is like Pandora's Box. Once it's been opened, it can't be closed again.
No_Ask_994 t1_j1zbr9k wrote
Tbh, training AI art models on AI art isn't giving good results, at least for now.
It might be possible in the future, with good AI filtering on the datasets to pick only the really good outputs? Maybe....
But for now, it's a bad idea
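The degradation this commenter describes is often called "model collapse." A minimal sketch of the idea, under a toy assumption (the "model" is just a Gaussian fitted to its training data, then sampled to produce the next generation's training set), shows how repeated self-training can narrow the output distribution; none of the names here come from the thread, they're purely illustrative:

```python
import random
import statistics

def fit_and_sample(samples, n):
    """Fit a 1-D Gaussian 'model' to the data, then generate n new samples from it."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples)  # MLE std estimate (slightly biased low)
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(0)
n = 50  # finite dataset per generation, as in real training pipelines

# Generation 0: "human" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(n)]
history = [statistics.pstdev(data)]

# Each generation trains only on the previous generation's outputs.
for gen in range(100):
    data = fit_and_sample(data, n)
    history.append(statistics.pstdev(data))

print(f"std of generation 0: {history[0]:.3f}")
print(f"std after 100 self-training generations: {history[-1]:.3f}")
```

Because each generation estimates its distribution from a finite sample of the previous one, estimation error compounds and the spread (diversity) of the outputs tends to shrink over generations, which is one proposed mechanism for why naive AI-on-AI training degrades. The filtering idea from the comment corresponds to re-injecting a selection signal (human curation or quality scoring) at each generation to counteract this drift.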