2Punx2Furious
2Punx2Furious t1_j9ns2mx wrote
Reply to comment by AnakinRagnarsson66 in Is ASI An Inevitability Or A Potential Impossibility? by AnakinRagnarsson66
Hard to say. My prediction is around 2030.
2Punx2Furious t1_j9mh8fj wrote
AGI is ASI from the start; the distinction is probably meaningless.
Anyway, unless we go extinct, yes, it will happen.
2Punx2Furious t1_j8ew5hq wrote
Reply to comment by [deleted] in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
At best it gives us a small chance.
2Punx2Furious t1_j8dltav wrote
Reply to comment by MrCensoredFace in Anthropic's Jack Clark on AI progress by Impressive-Injury-91
Yes, if. But it looks like we aren't, for now.
2Punx2Furious t1_j6hp9nb wrote
Reply to comment by M00SEHUNT3R in A McDonald’s location has opened in White Settlement, TX, that is almost entirely automated. Since it opened in December 2022, public opinion is mixed. Many are excited but many others are concerned about the impact this could have on millions of low-wage service workers. by Callitaloss
The video says that there is an employee to "answer questions" (probably also to deter those things).
2Punx2Furious t1_j5lgr2b wrote
Reply to comment by Karmastocracy in OpenAI and Microsoft extend their partnership by Impressive-Injury-91
GPT-4. ChatGPT is just ChatGPT, and it's based on GPT-3.5.
2Punx2Furious t1_j58f52a wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
> other people also running 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable) do not see this behaviour
Might be A/B testing for now.
2Punx2Furious t1_j4lbgjj wrote
Reply to comment by TrueBirch in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Yes, I'm saying the existence of edge cases doesn't matter, because it won't be us who have to address them. As we get closer to AGI, it will get better at handling them; we won't have to find them and code solutions for them ourselves. I think handling them will be an emergent quality of AGI.
2Punx2Furious t1_j4kyyhq wrote
Reply to comment by TrueBirch in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
It's true that you don't need AGI to disrupt everything. But I don't think the edge cases matter; it's not as if they will be coded for manually.
2Punx2Furious t1_j3o7hps wrote
Reply to comment by datamakesmydickhard in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Yes, it's been like this for a while now.
2Punx2Furious t1_j3nzumw wrote
Reply to comment by TrueBirch in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
> We're a long way from a robotic farmhand being able to perform those skills, certainly not for a price comparable to a farm laborer.
If we get AGI, we automatically get that as well, by definition. The tasks you listed are all hard problems today, yes, but an AGI would be able to handle them without issue.
The real questions are: will AGI ever be achieved, and if so, when?
I think the first question is easy to answer; the second, much less so.
The short answer to the first: most likely yes, unless we go extinct first. We know general intelligence is possible, so I see no reason it shouldn't be possible to replicate it artificially, and even improve on it. Several very wealthy companies are actively working on it, and the incentive to achieve it is enormous.
As for when, it's impossible to know until it happens, and even then some people will argue about it for a while. I have my predictions, but there is plenty of disagreement.
I don't know how someone even remotely interested in the field could think it will never happen for sure.
As for my own prediction, I give it a decent chance of happening in the next 10-20 years, with the probability increasing every year into the 2040s. I would be very surprised if it hasn't happened by then, but of course there is no way to tell.
2Punx2Furious t1_j3mopda wrote
Reply to comment by TrueBirch in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Yeah, I see a lot of goalpost-moving, but in the end it depends on how you define "AGI"; people have varying definitions. I think even a language model could become AGI eventually.
2Punx2Furious t1_j3l29nx wrote
Reply to comment by satireplusplus in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
And it's not even AGI yet. The singularity is closer than a lot of people think.
2Punx2Furious t1_j3l26ui wrote
Reply to comment by jsonathan in [P] I built Adrenaline, a debugger that fixes errors and explains them with GPT-3 by jsonathan
Even refining the prompt could get much better results. Prompt engineering matters.
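Something like this, as a rough illustration (a hypothetical sketch; `ask_model` is just a stand-in for whatever completion API the tool actually calls, and the prompts are made up):

```python
# Hypothetical stand-in for whatever completion API the tool calls under the hood.
def ask_model(prompt):
    ...

# Bare prompt: the model has to guess what kind of answer you want.
naive_prompt = "Fix this error: IndexError: list index out of range"

# Engineered prompt: role, context, constraints, and output format are explicit,
# so the same model has far more to work with.
engineered_prompt = (
    "You are an experienced Python debugger.\n"
    "Given the traceback and code below, explain the root cause in one sentence,\n"
    "then propose a minimal fix as a unified diff.\n\n"
    "Traceback:\nIndexError: list index out of range\n\n"
    "Code:\nitems = []\nprint(items[0])\n"
)

ask_model(naive_prompt)       # same model, minimal guidance
ask_model(engineered_prompt)  # same model, explicit role, context and output format
```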
2Punx2Furious t1_j3501j9 wrote
Reply to comment by ChronoPsyche in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
Sure, I just wanted to point that out. Sentience has relatively little importance or impact for an AGI: it doesn't need to feel things to understand them, or to value them.
2Punx2Furious t1_j34zjvb wrote
Reply to comment by ChronoPsyche in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It does have some perception. Just because it doesn't have the full sensory capability that (most) humans have doesn't mean it has none. Its only input is text, but that is still an input.
Also, by definition, "sentience" really only requires self-perception, which, yes, it doesn't appear to have. But I don't really care about sentience, "awareness", or "consciousness". I only care about intelligence and sapience, which it seems to have to some degree.
2Punx2Furious t1_j34y066 wrote
Reply to comment by turnip_burrito in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
> you will sometimes find absurd contradictions like this
Quite often, actually. Even so, it's impressive that it manages to sound plausible and eloquent.
2Punx2Furious t1_j34xuif wrote
Reply to comment by SeaBearsFoam in I asked ChatGPT if it is sentient, and I can't really argue with its point by wtfcommittee
It does: the text you give it as a prompt. That is an input, or in other words a sensor. It's quite limited, but sufficient to consider it "aware" of something.
2Punx2Furious t1_j34xlhk wrote
Yes, as I've always said, "aware", "self-aware", and "conscious" are just buzzwords: over-hyped but relatively useless terms. The real measure of intelligence is, tautologically, nothing but intelligence itself.
2Punx2Furious t1_j2x4lj5 wrote
Reply to comment by diamondsinmymouth in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
Ah, just a month ago? I'd like to see the long-term results too. Maybe in 6 months or so. RemindMe! 6 months
2Punx2Furious t1_j2wxgfc wrote
Reply to comment by diamondsinmymouth in Asked ChatGPT to write the best supplement stack for increasing intelligence by micahdjt1221
What if you stop taking it for a while? Have you tried? Did you feel withdrawal?
2Punx2Furious t1_j1rfto7 wrote
Reply to comment by Ortus12 in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
> it will be powerful enough to make us enjoy being nice to each other and not enjoy telling mean jokes.
That sounds like a lobotomy.
2Punx2Furious t1_j1qmwkg wrote
Reply to comment by Ortus12 in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
> Then there will still be a reason to watch human comics, and entertainers, we just won't be overwhelmed by the large scale division that this level of Ai could create.
Until the AI decides we are no longer allowed to do that, because it goes against the values we gave it. That's one of the reasons alignment is so hard: even if you think there are no downsides at first, subtle values can become harmful when taken to extremes.
2Punx2Furious t1_j1phb9b wrote
Reply to comment by DeMystified-Future in One thing ChatGPT desperately needs: An upgrade to its humor by diener1
In a way, that's good: it shows we might have some hope of alignment. On the other hand, if they align AGI like this, the future will be very dull.
2Punx2Furious t1_jecz2b2 wrote
Reply to comment by WonderFactory in GPT characters in games by YearZero
I think you could get around the latency issue by having the generated dialogue arrive in the form of letters you receive in-game, which would feel a lot more natural than a laggy conversation. Or insert a cutscene between the prompt and the answer. As for the price, it should probably be an optional setting, and the cost could be offset by a subscription or ads. As much as I hate those, it would be difficult to do otherwise, unless you plan to foot the bill for your users forever.
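A rough sketch of the letter idea (Python; `generate_npc_letter`, the NPC name, and the prompt are all made up, standing in for whatever LLM API and game code you'd actually use):

```python
import queue
import threading
import time

# Made-up stand-in for the real LLM call (e.g. a network request to GPT);
# the sleep just simulates that latency.
def generate_npc_letter(npc_name, player_prompt):
    time.sleep(3)
    return f"Dear traveller,\n\nYou asked: '{player_prompt}'\n\n-- {npc_name}"

mailbox = queue.Queue()  # letters are delivered here whenever they're ready

def request_letter(npc_name, player_prompt):
    """Fire off the request in the background; the game loop never blocks."""
    def worker():
        mailbox.put(generate_npc_letter(npc_name, player_prompt))
    threading.Thread(target=worker, daemon=True).start()

# Minimal "game loop": the player keeps playing while the letter is being written,
# and it shows up in the in-game mailbox a few ticks later.
request_letter("Elandra the Scribe", "Where can I find the sunken library?")
for tick in range(6):
    try:
        print("A letter has arrived:\n" + mailbox.get_nowait())
    except queue.Empty:
        print(f"tick {tick}: no mail yet, gameplay continues")
    time.sleep(1)
```

The point is just that the request and the delivery are decoupled, so the model can take several seconds without the player ever staring at a frozen dialogue box.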