Submitted by RamaSchneider t3_10u9wyn in Futurology
RamaSchneider OP t1_j7auj2f wrote
Reply to comment by Sirisian in What happens when the AI machine decides what you should know? by RamaSchneider
The heart of my argument in this thread: I agree that right now we're providing the basis for AI learning. That most probably won't stay true in the future, simply because the ability of computers to collect, collate, and distribute information dwarfs that of humans.
Yes, today you are correct. My point is that I don't believe that will last. (And yes - I do think the evidence supports me.)
Vorpishly t1_j7bgjv9 wrote
What evidence shows your point though?
RamaSchneider OP t1_j7exarz wrote
All you have to do is track the growth of computer data gathering and dissemination since the 1950s. Wall Street already executes transactions faster than a human would ever have time to be aware of them.
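To make that scale gap concrete, here's a back-of-the-envelope sketch in Python. Both figures are illustrative assumptions, not measurements: modern exchanges can match orders on roughly microsecond timescales, while a human needs on the order of a quarter second just to react to something on a screen.

```python
# Rough scale comparison (both numbers are assumed, order-of-magnitude values)
machine_order_latency_s = 1e-6  # ~1 microsecond per matched order (assumption)
human_reaction_time_s = 0.25    # ~250 ms human visual reaction time (assumption)

orders_per_human_reaction = human_reaction_time_s / machine_order_latency_s
print(f"~{orders_per_human_reaction:,.0f} orders can clear before a human "
      "even registers the first one")
# ~250,000
```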
Every bit of evidence we have regarding computers screams that we're nowhere near the end of the line.
Isabella-The-Fox t1_j7eidnk wrote
AI eventually being able to decide for itself is pure speculation. We humans build it, and we control what it does. Right now we have AI that "writes" code; it's powered by OpenAI models and called GitHub Copilot. I put "writes" in quotes for a reason: the code it produces comes from an algorithm trained on patterns in GitHub code, so if the AI tried to write code for itself, it would run into errors and flaws (and it has run into errors and flaws while being used. Source: I had a free trial). An AI will never be fully intelligent, even when it seems like it is. Once it seems like it is, it really still isn't, at least compared to a human being. We humans will always dictate what AI does.
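A toy illustration of the "just an algorithm trained on patterns" point. This is emphatically not how Copilot actually works (Copilot uses a large neural language model); it's a minimal bigram-completion sketch showing the core idea that a pattern-matcher can only emit token sequences it has already seen, and can still produce invalid code:

```python
# Minimal sketch (illustrative assumption, not Copilot's real mechanism):
# "write code" by sampling tokens that followed the previous token in a
# tiny training corpus. The model cannot invent anything outside the corpus.
from collections import defaultdict
import random

corpus = [
    "def add ( a , b ) : return a + b",
    "def sub ( a , b ) : return a - b",
]

# Count which token was observed following which.
following = defaultdict(list)
for line in corpus:
    tokens = line.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        following[prev].append(nxt)

def complete(prompt_token, length=10):
    """Extend a prompt one token at a time using observed transitions."""
    out = [prompt_token]
    for _ in range(length):
        choices = following.get(out[-1])
        if not choices:  # token never seen in the corpus: the model is stuck
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(complete("def"))
# e.g. "def add ( a + b ) : return a" -- it mimics surface patterns,
# but the result can easily be broken code: the "errors and flaws" part.
```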