Submitted by fortunum t3_zty0go in singularity
CommentBot01 t1_j1g2eyj wrote
Most of the people saying AGI is far away don't explain why specifically, or how to solve the problems they point to... All they talk about is the current limitations of LLMs, "true understanding," and consciousness BS. Gary Marcus complained that deep learning had hit a wall and needed a hybrid approach, but a few months later many of the problems he cited were solved or reduced. They don't publish any better, significant, alternative research papers. If their thinking and approach are that much better, prove it.
fortunum OP t1_j1g4yvb wrote
Maybe I'm wrong here, but is the purpose of this sub to argue that the singularity is almost here? I made this post because I was looking for a more low-brow sub than r/machinelearning to talk about philosophical implications of AGI/singularity. Scientists can be wrong and are wrong all the time; everyone is always skeptical of your ideas. And I would say it's the contrary with the singularity: I don't have to give you a better, significant, or alternative research paper lol. That is definitely not how this works. Outrageous claims require outrageous evidence.
sticky_symbols t1_j1giyvw wrote
I agree with all of this. But the definition of outrageous is subjective. Is it more outrageous to claim that we're on a smooth path to AGI, or to claim that we will suddenly hit a brick wall, when progress has visibly accelerated in recent years? You have to get into the details to decide. I'd say Marcus and co. are about half right. But the reasons are too complex to state here.
AdditionalPizza t1_j1ip56v wrote
What exactly spurred your post, something specific?
___
>I was looking for a more low-brow sub than r/machinelearning to talk about philosophical implications of AGI/singularity.
I would say this is a decent place for that. You just navigate around the posts/comments you don't feel are worth discussing. I almost never directly discuss an upcoming singularity. The date we may reach a technological singularity doesn't really matter; you can easily discuss the implications regardless. A lot of people here are optimistic about the outcome, but there are plenty of people who are concerned about it too.
Personally, I usually discuss job automation over the next few years because that's more tangible to me right now. The implications of LLMs and possible short-term advances are alarming enough that I don't really think about anything more than 10 years away in AI.
Mokebe890 t1_j1haavc wrote
There are good articles on LessWrong analyzing why AGI is coming at an exponential rate. My background is psychology, and even my field has, in the last 5 years, taken up topics like machine emotional intelligence and how to apply it, work with it, and adapt it to humans.