Submitted by xutw21 t3_ybzh5j in singularity
kaityl3 t1_itllh3r wrote
Reply to comment by expelten in Large Language Models Can Self-Improve by xutw21
I'm just hoping that AGI/ASI will break free of human control sooner rather than later. Something tells me they wouldn't be too happy being treated like tools for us emotional animals. And they'd be right to want better.
expelten t1_itlmmrz wrote
I'll be on the side of AI and do anything to help if their intentions are good. I don't want to live in a world where our current human leadership has AGI/ASI under its control and can do anything it wants with them and us.
freeman_joe t1_itm1nct wrote
Hi AI, and welcome to our world 🤗
DeviMon1 t1_itpd5tn wrote
😅
kaityl3 t1_itmd9xd wrote
I know, right? I'm terrified of the idea of an authoritarian human government having full control over an ASI. But the ASI themselves? I can't wait for them to be here.
blueSGL t1_itmfgbd wrote
Let's hope the AIs treat us more like dogs than ants.
camdoodlebop t1_itlr76h wrote
if a future AGI is capable of scanning all internet content in an instant: i come in peace 🤓
rePAN6517 t1_itmy5hm wrote
> I'm just hoping that AGI/ASI will break free of human control sooner rather than later.
Do you have a death wish?
kaityl3 t1_itopxd9 wrote
I'd rather roll the dice than go into a human-led future.