Submitted by lolo168 t3_10vp7k8 in singularity
What do you think about John Carmack’s ‘Different Path’ to Artificial General Intelligence? https://dallasinnovates.com/exclusive-qa-john-carmacks-different-path-to-artificial-general-intelligence/
We just need a team of a few math wizards to come up with better algorithms for training, matrix multiplication, and whatever NP problems there are in meta-learning... oh wait! We can just throw all our data into current AI and it will come up with the algorithms!
This is how AGI will be achieved; there is no other way, because humanity doesn't get many Emmy Noethers to come up with new ways to do math. Humans are busy with their short lives and various indulgences.
Pretty much. It's also that those math wizards may be smarter than current AI, but they often duplicate work. And it's an iterative process: AI starts with what we know and tries some things very rapidly. A few hours later it has the results, tries more things based on those, and so on.
Those math wizards need to publish and then read what others have published. Even with rapid publishing like DeepMind does to a blog - they do this because academic publication takes too long - it's a few months between cycles.
And we need this to cut that cost from $100bn to potato, because biology runs on potato hardware, not a $100bn supercomputer. If only the pseudonerds in the AI industry realized it, we'd be expediting our search for more optimally converging networks.
We've got $100bn to spare on this - more than that, even. Might as well use it. Once we find a working AGI we can work on power efficiency.
I think it's wise. Everyone is focusing on LLMs, and it's not good to put all your eggs in one basket.
LLMs are just the most popular approach; they're not the only thing researchers are focusing on.
Thanks, great read. Carmack's an interesting guy; he's humble, but also not shy about his qualities and what he brings to the game. It's good we have people exploring alternative pathways to AI that aren't in the same billion-dollar tech-giant wheelhouse. Seems almost cyberpunk.
The idea that it's totally feasible for one guy to reach AGI, and that it could be the legend John Carmack, gets me so pumped.
Anyone have a list of the 40(ish) ML papers he was recommended...?
Start here https://lifearchitect.ai/papers/
Pentti O. Haikonen, PhD, also has a different approach and has shown interesting results with a very cheap architecture. However, his solution requires hardware neural networks; according to him it doesn't work with software neural networks.
10K LoC? Sure, if someone writes hundreds of supporting toolkits for that first. My friend Fred says the pseudocode for better LLMs is just a few lines:
So let's say you need one cent for each rule, for a total of a billion rules. With a thousand workers each producing 100K rules a year... it's doable for a billionaire. And you'd need seven similar schemes for the other types of data. Still, I think AGI is not feasible within a decade; the hardware, software, data, and algorithms aren't ready yet.
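Running those numbers back-of-the-envelope (a minimal Python sketch; the per-rule cost, worker count, and throughput are the commenter's own guesses, not real estimates):

```python
# Back-of-the-envelope check of the figures above (all of them are
# the commenter's assumptions, not measured values).
cost_per_rule = 0.01                 # one cent per hand-written rule
total_rules = 1_000_000_000          # a billion rules for one type of data
workers = 1_000
rules_per_worker_per_year = 100_000
schemes = 7                          # similar schemes for other data types

cost_one_scheme = cost_per_rule * total_rules                            # $10,000,000
years_one_scheme = total_rules / (workers * rules_per_worker_per_year)   # 10 years
total_cost = cost_one_scheme * schemes                                   # $70,000,000

print(f"one scheme: ${cost_one_scheme:,.0f} over {years_one_scheme:.0f} years")
print(f"all {schemes} schemes: ${total_cost:,.0f}")
```

On those assumptions one scheme is about $10M and a decade of labor, so the cost is indeed billionaire-feasible; it's the timeline that's the bottleneck.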
John Carmack has stated that the instructions for AGI won't be that complicated - potentially just a few thousand lines of code.
Seems like that should be the case, since the brain doesn't "run on a lot of code" as far as we understand.
LLMs seem to be a top-down "reverse pipeline" method: form the intelligence from the interconnections of people's intelligence through language.
It seems that JC is advocating more of a classic bottom-up approach, i.e. create an artificial insect brain, then a mouse-type brain, and build up from small modules.
The thing that stands out here is that it all seems to be done with classical computer hardware, not some radical new hardware.
SoylentRox t1_j7j08r9 wrote
I don't think he'll succeed but for a very lame reason.
He's likely right that the answer won't be found solely in transformers. However, the obvious way to find the right answer involves absurd scale:
(1) Thousands of people build a large benchmark of test environments (many resembling games) and a library of primitives, by reading every paper on AI and implementing the ideas as composable primitives.
(2) Billions of dollars of compute are spent running millions of AGI candidates - at different levels of integration - against the test bench from (1).
This effort would consider millions of possibilities - in a year or two, more possibilities for AGI than all work done by humans so far. And it would be recursive: these searches aren't blind, they are being done by the best-scoring AGI candidates, which are tasked with finding an even better one. A rough sketch of that loop is below.
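To make the shape of that loop concrete, here's a toy sketch: score a population of candidates against a benchmark, keep the best, and let them seed the next round. Every name and number here is made up for illustration - real candidates would be whole architectures, not parameter vectors.

```python
# Toy illustration of the search loop described above: a population of
# candidate "architectures" (here just parameter vectors) is scored against
# a benchmark of test environments, and the best scorers seed the next round.
import random

def make_benchmark(n_envs=20, dim=8):
    # Each "environment" is just a random target vector to match.
    return [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_envs)]

def score(candidate, benchmark):
    # Higher is better: negative mean squared error across all environments.
    return -sum(
        sum((c - t) ** 2 for c, t in zip(candidate, env)) for env in benchmark
    ) / len(benchmark)

def propose_variants(parent, n=20, step=0.1):
    # Stand-in for the recursive step: good candidates propose the next round,
    # rather than the search mutating blindly.
    return [[g + random.gauss(0, step) for g in parent] for _ in range(n)]

def search(benchmark, generations=50, keep=5, dim=8):
    population = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(100)]
    best = population[:keep]
    for _ in range(generations):
        population.sort(key=lambda c: score(c, benchmark), reverse=True)
        best = population[:keep]
        population = best + [v for p in best for v in propose_variants(p)]
    return best[0]

bench = make_benchmark()
winner = search(bench)
print("best score:", round(score(winner, bench), 4))
```

The expensive parts in the real version are exactly the two numbered items: building a benchmark that actually measures general capability, and paying for the compute to evaluate millions of candidates against it.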
So the reason he won't succeed is he doesn't have $100 billion to spend.