Submitted by flowday t3_10gxy2t in singularity
sumane12 t1_j567fqu wrote
Reply to comment by BadassGhost in AGI by 2024, the hard part is now done ? by flowday
I agree, short-term memory and long-term learning will reduce hallucinations. It does look like GPT-3 + WolframAlpha has addressed this problem; it's not a perfect solution, but it will do for now.
I'm very much an immediate-takeoff proponent when it comes to ASI. Not only can it think at light speed (human nerve signals travel at roughly the speed of sound), it has immediate access to the internet, it can duplicate itself over and over as long as there is sufficient hardware, and its knowledge can keep expanding as long as you have more hard drive space.
With these key capabilities, and again I'm assuming an agent that can act and learn like a human, I just don't see how it would not be immediately superhuman in its abilities. Its self-improvement might take a few years, but as I say, I think its ability to outclass humans would be immediate.