Submitted by 00001746 t3_1244q71 in MachineLearning
MootVerick t1_jdyj5x3 wrote
Reply to comment by nxqv in [D] FOMO on the rapid pace of LLMs by 00001746
If ai can do research better than us, we are basically at singularity.
spiritus_dei t1_jdz7rmz wrote
I think this is the best formulation of the question I've seen, "Can you imagine any job that a really bright human could do that a superintelligent synthetic AI couldn't do better?"
Everyone loves to default to the horse and buggy example and they always ignore the horse. Are programmers and researchers the blacksmiths or are they the horses?
It's at least 50/50 that we're all the horses. That doesn't mean that horses have no value, but we don't see horses doing the work they once did in every major city prior to their displacement by automobiles.
We also hear the familiar tune, "AI will create all of these new jobs that none of us can imagine." Really? Jobs that superintelligent AIs won't be able to do? It reads like a mixed metaphor. These two ideas are just not compatible.
Either they hit a brick wall with scaling, or we'll all be dealing with a new paradigm where we either remain human (the horses) or accept that participating in the new world means becoming a cyborg. I don't know if that's possible, but it may be the only path to "keep up," and even then it's no guarantee, since we'd have to convert biological matter to silicon.
And who wants to give up their humanity to basically become an AI? My guess is the number of people willing to do it will shock me, if it ever becomes a possibility.
I'm fine with retirement and remaining an obsolete human, doing work that isn't required just for the fun of it. I don't play tennis because I'm going to play at Wimbledon, or even beat anyone good - I play it because I enjoy it. I think that will be the barometer, if there isn't a hard limit on scaling.
This was foretold decades ago by Hans Moravec and others. I didn't think it was possible in my lifetime until ChatGPT. I'm still processing it.