Submitted by Neurogence t3_121zdkt in singularity
maskedpaki t1_jdojju1 wrote
Ben Goertzel will be an LLM denier forever, no matter how much progress LLMs make and how little progress his own pathetic OpenCog venture makes. He is best ignored, I think.
Neurogence OP t1_jdolina wrote
I've been reading his writings and books for over a decade. He is extremely passionate about AGI and the singularity. His concern is that by focusing too heavily on LLMs, the AI community might inadvertently limit the exploration of alternative paths to AGI. He wants a more diversified approach, where developers actively explore a range of AI methodologies and frameworks instead of putting all their eggs into the LLM basket, so that we can be sure of creating an AGI that can take humanity to the great above and beyond.
fastinguy11 t1_jdonvci wrote
Do not worry, then. In just a few years we will have very big, sophisticated, improved LLMs with multi-modality (images and audio), and if AGI is not here by then, I am sure other avenues will be explored. But wouldn't it be great if that is all it took?
maskedpaki t1_jdom225 wrote
Those "other paths" have amounted to nothing
That is why people focus on machine learning: because it produces results, and as far as we know it hasn't stopped scaling. Why would we bother looking at his logic graphs, which have produced fuck all for the 30 years he has been drawing them?
UK2USA_Urbanist t1_jdoo8rm wrote
Well, machine learning might have a ceiling. We just don’t know. Everything gets better, until it doesn’t.
Maybe machine learning can help us find other paths that go beyond its limits. Or maybe it too hits roadblocks before finding the real AGI/ASI route.
There is a lot of hype right now. Some deserved, some perhaps a bit carried away.
Villad_rock t1_jdpy2g5 wrote
Evolution showed there aren't really different pathways to higher intelligence. Both vertebrates and invertebrates led to high intelligence, and devolution is hard or impossible, so evolution would have had to be extremely lucky to head in the right direction twice just by chance, and both brains seem to be basically the same. This leads me to believe there is only one way, which can be built upon.
Ro1t t1_jdqc2kl wrote
No it doesn't at all, that's just how it happened for us. It's equivalent to saying the only way to store heritable information is through DNA, or that the only way to store energy is carbs and fat. We literally just don't know.
lehcarfugu t1_jds3i46 wrote
They had a common ancestor, so I don't think it's reasonable to assume this is the only way to reach higher intelligence. Your sample size is one (planet).
Neurogence OP t1_jdonkja wrote
At some point, LLMs did not work because we did not have the computing power for them. The alternative approaches will probably lead to AGI; the computing power just might not be here yet.
maskedpaki t1_jdout9t wrote
"At some point LLMS did not work"
I'm sorry, are you a time traveller?
How do you know this? GPT-4 scaled above GPT-3, and AI compute is still rising rapidly.
Trismegistus27 t1_jdp48t4 wrote
He means at some point in the past
FoniksMunkee t1_jdputbl wrote
Even Microsoft is speculating that LLMs alone are not going to solve some of the problems they see with ChatGPT's ability to reason. ChatGPT has no ability to plan, or to solve problems that require a leap of logic. Or, as they put it, it lacks the slow thinking process that oversees the fast thinking process. They acknowledge that other authors who have recognised the same issue with LLMs have suggested a different architecture may be required, but this seemed to be the least fleshed-out part of the paper.
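To make the "slow process overseeing a fast process" idea concrete, here is a minimal Python sketch of what such a loop might look like. It is purely illustrative and not taken from the Microsoft paper; `fast_generate` and `slow_verify` are hypothetical stand-ins for whatever the fast and slow components would actually be (for example, two differently prompted LLM calls).

```python
# A toy "fast thinker / slow overseer" loop. Everything here is a stand-in:
# in a real system fast_generate and slow_verify might be two differently
# prompted calls to the same LLM, or two separate systems.

def fast_generate(problem: str) -> str:
    """Cheap single-pass answer (the 'fast' process)."""
    return f"draft answer to: {problem}"

def slow_verify(problem: str, draft: str) -> tuple[bool, str]:
    """Deliberate check of the draft (the 'slow' process).
    Returns (accepted, feedback). The acceptance rule is a toy placeholder."""
    if "detail" not in draft:
        return False, "looks unfinished, try again with more detail"
    return True, ""

def solve(problem: str, max_rounds: int = 3) -> str:
    draft = fast_generate(problem)
    for _ in range(max_rounds):
        accepted, feedback = slow_verify(problem, draft)
        if accepted:
            return draft
        # feed the critique back into the fast process and retry
        draft = fast_generate(problem + " | " + feedback)
    return draft  # give up after max_rounds and return the last attempt

print(solve("plan a three-step proof"))
```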
AsheyDS t1_jdov1ik wrote
Symbolic AI failed because it was difficult for people to come up with the theory of mind first and lay down the formats, the functions, and the rules to create the base knowledge and logic. And from what was created (which did have a lot of use, so I wouldn't say it amounted to nothing), they couldn't find a way to make it scale, so it couldn't learn much or learn independently. On top of that, they were probably limited by hardware too. Researchers focus on ML because it's comparatively 'easy' and because it has produced results that so far can scale. What I suspect they'll try doing with LLMs is learning how they work and building structure into them after the fact, and finding that their performance has degraded or can't be improved significantly. In my opinion, neurosymbolic AI will be the ideal way forward to achieve AGI and ASI, especially for safety reasons: it will take the best of both symbolic and ML, with each helping with the drawbacks of the other.
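As a purely illustrative sketch of what "taking the best of both" could mean at the smallest possible scale, here is a toy neurosymbolic pipeline in Python. The perception step stands in for a neural model (it is just a lookup table here), and the hand-written rules stand in for the symbolic layer; all names, labels, and numbers are made up.

```python
# Toy neurosymbolic pipeline: a "neural" perception step produces symbolic
# facts with confidences (faked with a lookup table here), and a small
# hand-written rule base reasons over them. Purely illustrative.

def perceive(image_id: str) -> dict[str, float]:
    """Stand-in for a neural model's soft outputs."""
    fake_model_output = {
        "cat.jpg":  {"has_fur": 0.97, "has_wings": 0.02, "meows": 0.91},
        "bird.jpg": {"has_fur": 0.05, "has_wings": 0.96, "meows": 0.01},
    }
    return fake_model_output.get(image_id, {})

# Symbolic layer: each label requires a set of facts to hold.
RULES = [
    ("cat",  {"has_fur", "meows"}),
    ("bird", {"has_wings"}),
]

def classify(image_id: str, threshold: float = 0.5) -> list[str]:
    facts = {name for name, p in perceive(image_id).items() if p >= threshold}
    return [label for label, required in RULES if required <= facts]

print(classify("cat.jpg"))   # ['cat']
print(classify("bird.jpg"))  # ['bird']
```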
maskedpaki t1_jdoyj5e wrote
I've been hearing the neurosymbolic cheerleading for 5 years now. I remember Yoshua Bengio once debating against it, seeming dogmatic about his belief in pure learning and about how neurosymbolic systems won't solve all the limitations that deep learning has. I have yet to see any results and don't expect to see any. My guess is that transformers continue to scale for 5 more years at least, and by then we will stop asking what paradigm shift needs to take place, because it will be obvious that the current paradigm will do just fine.
Zer0D0wn83 t1_jdp68ky wrote
Exactly this. 10x the ability of GPT-4 may not be AGI, but to anyone but the most astute observer there will be no practical difference.
footurist t1_jdq23y4 wrote
I'm baffled neurosymbolic hasn't been attempted with a huge budget like OpenAI's. You've got these two fields: with one, you see it can work really precisely but breaks down at fuzziness, at scaling, and at going beyond the rules. With the other you get almost exactly the opposite.
It seems like such a no-brainer to make a huge effort trying to combine these in large ways...
DragonForg t1_jdp3eem wrote
LLMs are by their nature tethered to the human experience by the second letter: Language. Without language, an AI can never speak to a human, or to another system for that matter. If you create any interface, you must make it natural so humans can interact with it; the more natural it is, the easier it is to use.
So LLMs are the communicators. They may not do all the tasks themselves, but they are the foundation for communicating with other processes. This can be done by nothing other than something trained entirely to be the best at natural language.
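A minimal sketch of that division of labour, assuming a hypothetical `llm_parse` function in place of a real model call: the language model only turns free-form text into a structured request, and ordinary non-LLM code does the actual work. The tool names and logic are made up for illustration.

```python
# Sketch of "LLM as communicator": the model maps natural language to a
# structured call, and separate processes carry out the task. llm_parse is a
# fake stand-in for a real model; the tools are toy placeholders.

def llm_parse(request: str) -> dict:
    """Pretend LLM: free-form text in, tool name plus arguments out."""
    if "weather" in request.lower():
        return {"tool": "weather", "args": {"city": request.split()[-1]}}
    return {"tool": "echo", "args": {"text": request}}

TOOLS = {
    "weather": lambda city: f"(stub) forecast for {city}: sunny",
    "echo":    lambda text: text,
}

def handle(request: str) -> str:
    call = llm_parse(request)                   # language in, structure out
    return TOOLS[call["tool"]](**call["args"])  # a non-LLM process does the work

print(handle("What is the weather in Paris"))   # (stub) forecast for Paris: sunny
```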
[deleted] t1_jdt2i8n wrote
Because machines don’t speak to humans through 1s and 0s? C’mon.
[deleted] t1_jdt2eug wrote
The AI community isn’t going to get to AGI without the financial backing of the non-AI community. In that context it makes more sense to deploy a commercially successful LLM.
SolidFaiz t1_jdpinrx wrote
What is an “LLM”?
Neurogence OP t1_jdpivst wrote
Large Language Models, like ChatGPT.
GoldenRain t1_jdr38ub wrote
Even OpenAI says LLMs are unlikely to be the path to AGI.
maskedpaki t1_je2lgig wrote
Ilya Sutskever literally believes that next-word prediction is general purpose, so you are just wrong on this.
The only thing he is unsure about is whether something more efficient than next-token prediction gets us there first. It's hard to defend Gary Marcus's view that GPT isn't forming real internal representations, since we can see that GPT-4 so obviously is.
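For readers unsure what "next-word prediction" means mechanically, here is a toy illustration with a bigram count table standing in for the model; the corpus and every detail are made up, and real LLMs are vastly more sophisticated, but generation really is just repeated "predict the next token".

```python
# Toy next-word prediction: count word -> next-word frequencies from a tiny
# corpus, then generate text by repeatedly picking the most likely next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# bigram counts stand in for a learned model
bigrams: dict[str, Counter] = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    bigrams[word][nxt] += 1

def predict_next(word: str) -> str:
    """Greedy next-word prediction from the bigram counts."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else "<end>"

def generate(start: str, length: int = 5) -> list[str]:
    out = [start]
    for _ in range(length):
        out.append(predict_next(out[-1]))
    return out

print(generate("the"))  # ['the', 'cat', 'sat', 'on', 'the', 'cat']
```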
vivehelpme t1_jdqalhs wrote
He shares the trait of several other "futurologist experts": huge ego, long-winded essays and articles saying nothing at all.
They keep milking their fantasy AI cargo cult ecosystem for money and attention by pretending they are involved with the real world.