
maskedpaki t1_jdom225 wrote

Those "other paths" have amounted to nothing

That is why people focus on machine learning: it produces results and, as far as we know, it hasn't stopped scaling. Why would we bother looking at his logic graphs, which have produced fuck all in the 30 years he has been drawing them?
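For reference, the "hasn't stopped scaling" part is the empirical power-law trend from Kaplan et al. (2020): held-out loss keeps falling smoothly as parameters N, data D, and compute C grow. A rough statement of their fitted laws (the exponents are theirs; whether the trend continues is exactly the open question):

```latex
% Empirical scaling laws fitted in Kaplan et al. (2020):
L(N) \approx (N_c / N)^{\alpha_N}, \qquad \alpha_N \approx 0.076
L(D) \approx (D_c / D)^{\alpha_D}, \qquad \alpha_D \approx 0.095
L(C_{\min}) \approx (C_c / C_{\min})^{\alpha_C}, \qquad \alpha_C \approx 0.050
```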

19

UK2USA_Urbanist t1_jdoo8rm wrote

Well, machine learning might have a ceiling. We just don’t know. Everything gets better, until it doesn’t.

Maybe machine learning can help us find other paths that surpass its limits. Or maybe it too hits roadblocks before finding the real AGI/ASI route.

There is a lot of hype right now. Some deserved, some perhaps a bit carried away.

20

Villad_rock t1_jdpy2g5 wrote

Evolution suggests there aren't really different pathways to higher intelligence. Both vertebrates and invertebrates evolved high intelligence, and devolution is hard or impossible, so evolution would have had to head in the right direction twice just by luck, and both brains seem to work basically the same way. This leads me to believe there is only one way, which can then be built upon.

1

Ro1t t1_jdqc2kl wrote

No it doesn't at all; that's just how it happened for us. It's equivalent to saying the only way to store heritable information is DNA, or the only way to store energy is carbs and fat. We literally just don't know.

5

lehcarfugu t1_jds3i46 wrote

They had a common ancestor, so I don't think it's reasonable to assume this is the only way to reach higher intelligence. Your sample size is one (planet).

1

Neurogence OP t1_jdonkja wrote

At some point, LLMs did not work because we did not have the computing power for them. The alternative approaches will probably lead to AGI; the computing power just might not be here yet.

6

maskedpaki t1_jdout9t wrote

"At some point LLMS did not work"

I'm sorry are you a time traveller ?

How do you know this ? GPT4 scaled above gpt3 and AI compute is still rising rapidly.

−5

FoniksMunkee t1_jdputbl wrote

Even MS are speculating that LLMs alone are not going to solve some of the problems they see with ChatGPT's ability to reason. ChatGPT has no ability to plan, or to solve problems that require a leap of logic. Or, as they put it, it lacks the slow thinking process that oversees the fast thinking process. They acknowledged that other authors who have recognised the same issue with LLMs have proposed solutions suggesting a different architecture may be required. But this seemed to be the least fleshed-out part of the paper.
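Roughly, the missing "slow" system is an outer deliberative loop that plans, checks, and revises what the single forward pass spits out. A minimal Python sketch of the idea (every function name and the toy goal check are hypothetical, not from the paper):

```python
# Toy sketch of a "slow thinking" controller overseeing a "fast thinking" LLM.
# Everything here is a hypothetical stand-in, not an API from any real system.

def llm_generate(prompt: str) -> str:
    """Fast process: one forward pass, no lookahead, no self-revision."""
    return "draft answer for: " + prompt.splitlines()[-1]  # stand-in for a model call

def misses_goal(draft: str, goal: str) -> bool:
    """Slow process: deliberately check the draft against the goal (toy check)."""
    return goal.lower() not in draft.lower()

def plan_and_solve(goal: str, max_revisions: int = 3) -> str:
    """Outer loop: plan, generate, verify, revise -- the oversight a
    single-pass LLM lacks on its own."""
    prompt = f"Plan the steps, then solve: {goal}"
    draft = llm_generate(prompt)
    for _ in range(max_revisions):
        if not misses_goal(draft, goal):
            break  # the slow check passed
        # Feed the failure back in -- something one forward pass cannot do.
        prompt = f"Your last answer missed the goal '{goal}'. Revise:\n{draft}"
        draft = llm_generate(prompt)
    return draft

print(plan_and_solve("sort these numbers"))
```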

5

AsheyDS t1_jdov1ik wrote

Symbolic AI failed because it was difficult to come up with the theory of mind first and then lay down the formats, the functions, and the rules to create the base knowledge and logic. And from what was created (which did have a lot of use, so I wouldn't say it amounted to nothing), they couldn't find a way to make it scale, so it couldn't learn much or learn independently. On top of that, they were probably limited by hardware too. Researchers focus on ML because it's comparatively 'easy' and because it has produced results that so far can scale. What I suspect they'll try with LLMs is learning how they work and building structure into them after the fact, only to find that performance degrades or can't be improved significantly. In my opinion, neurosymbolic will be the ideal way forward to AGI and ASI, especially for safety reasons: it takes the best of both symbolic and ML, with each helping with the drawbacks of the other.
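To make the combination concrete, a toy version of the neurosymbolic pattern is a learned model that proposes and an explicit rule base that verifies. The facts, the rule, and both functions below are invented purely for illustration:

```python
# Toy neurosymbolic pattern: a statistical model proposes an answer,
# a hand-written symbolic rule base checks it. Purely illustrative.

PARENT_FACTS = {("alice", "bob"), ("bob", "carol")}  # parent(x, y) facts

def neural_propose(x: str, z: str) -> tuple[str, float]:
    """Stand-in for a learned model: a fuzzy guess plus a confidence score."""
    return "grandparent", 0.62  # pretend the network guessed this

def symbolic_verify(x: str, z: str) -> bool:
    """Hard logic the ML side lacks: grandparent(x, z) :- parent(x, y), parent(y, z)."""
    middles = {y for (_, y) in PARENT_FACTS}
    return any((x, y) in PARENT_FACTS and (y, z) in PARENT_FACTS for y in middles)

relation, confidence = neural_propose("alice", "carol")
# The symbolic layer grounds the fuzzy proposal in explicit rules.
verified = relation == "grandparent" and symbolic_verify("alice", "carol")
print(f"proposal: {relation} ({confidence:.0%}), rule-verified: {verified}")
```

The division of labour mirrors the comment above: the network handles the fuzziness, the rules handle the precision and the verifiable logic.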

4

maskedpaki t1_jdoyj5e wrote

I've been hearing the neurosymbolic cheerleading for 5 years now. I remember Yoshua Bengio once debating against it, seeming dogmatic about his belief in pure learning and about how neurosymbolic systems won't solve all the limitations that deep learning has. I have yet to see any results and don't expect to see any. My guess is that transformers continue to scale for at least 5 more years, and by then we will stop asking what paradigm shift needs to take place, because it will be obvious that the current paradigm will do just fine.

5

Zer0D0wn83 t1_jdp68ky wrote

Exactly this. 10x the ability of GPT-4 may not be AGI, but to all but the most astute observers there will be no practical difference.

7

footurist t1_jdq23y4 wrote

I'm baffled that neurosymbolic hasn't been attempted with a huge budget like OpenAI's. You've got these two fields: with one, you see it can work really precisely but breaks down at fuzziness, at scaling, and at going beyond the rules; with the other, you get almost exactly the opposite.

It seems like such a no-brainer to make a huge effort to combine these two at scale...

2