pig_n_anchor t1_je7k6ww wrote
Reply to comment by adikhad in What are the so-called 'jobs' that AI will create? by thecatneverlies
Yo, prompt boy! Hit me with another one of them prompts.
pig_n_anchor t1_je75t91 wrote
Reply to comment by drekmonger in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
AI would say that. Trying to lull us into a false sense of security!
Edit: AI researchers are already using GPT4 to improve AI. Yes it requires an operator, but more and more of the work is being done by AI. Don’t you think this trend will continue?
pig_n_anchor t1_je72e8k wrote
Reply to comment by D_Ethan_Bones in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Under my definition (the only correct one), AGI would have the power of recursive self-improvement and would therefore very rapidly become exponentially more powerful. So if you start with human-level AGI, you will soon reach ASI, within months or maybe just a matter of hours. Also, even narrow AI is superhuman at the things it can do well. E.g. a calculator is far better at basic arithmetic than any human. If an AI were really a general-purpose machine, then I can't see how it would not be instantly superhuman at whatever it does, if only because it will produce results much faster than a human. For these reasons, the definition of ASI collapses into AGI. Like I said, my definition is the only correct one and if you don't agree with me, you are wrong 😑.
pig_n_anchor t1_je7mw6c wrote
Reply to comment by drekmonger in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I agree. I'm just saying that anything that could rightly be called AGI will almost certainly have that capability. I suppose it's theoretically possible to have one that can't improve itself, but considering how good AI already is at programming, I see that as very unlikely.