Submitted by Scarlet_pot2 t3_104svh6 in singularity
Scarlet_pot2 OP t1_j39cjh8 wrote
Reply to comment by Cryptizard in We need more small groups and individuals trying to build AGI by Scarlet_pot2
It was a small group of engineers at Google, not highly funded. They were trying to build something for Google Translate when they figured out they could make a program that guesses the next word.
visarga t1_j39xs2x wrote
No, this concept is older; it predates Google. Hinton was working on it in 1986 and Schmidhuber in the 1990s. By the way, "next token prediction" is not necessarily state of the art. The UL2 paper showed it is better to use a mixture of denoising objectives, including masked spans.
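To make the contrast concrete, here is a toy sketch (my own illustration, not code from UL2) of the two training objectives on the same token sequence:

```python
# Next-token prediction vs. masked-span corruption on a toy sequence.
tokens = ["the", "cat", "sat", "on", "the", "mat"]

# Next-token prediction: every prefix is trained to predict the next token.
next_token_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
# e.g. (["the", "cat"], "sat")

# Masked-span corruption (T5/UL2 style): replace a contiguous span with a
# sentinel token and train the model to reconstruct the span.
span_start, span_len = 2, 2  # arbitrary choices for this example
corrupted = tokens[:span_start] + ["<extra_id_0>"] + tokens[span_start + span_len:]
target = ["<extra_id_0>"] + tokens[span_start:span_start + span_len]
print(corrupted)  # ['the', 'cat', '<extra_id_0>', 'the', 'mat']
print(target)     # ['<extra_id_0>', 'sat', 'on']
```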
If you follow the new papers, there are a thousand ideas floating around: how to make models learn better, how to make them smaller, how to teach a network to compose separate skills, why training on code improves reasoning skills, how to generate problem solutions as training data... We just don't know which are going to matter down the line. It takes a lot of time to try them out.
Here's a weird new idea: StitchNet: Composing Neural Networks from Pre-Trained Fragments. (link) People try anything and everything.
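The rough idea, as a toy sketch of my own (not the paper's actual procedure): take fragments of separately trained networks whose interfaces happen to be compatible and glue them together:

```python
import torch
import torch.nn as nn

# Two independently trained toy networks with the same architecture.
net_a = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
net_b = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Stitch the front of A onto the back of B; the 16-dim interface matches.
fragment_a = net_a[:2]  # Linear(8, 16) + ReLU from network A
fragment_b = net_b[2:]  # Linear(16, 4) from network B
stitched = nn.Sequential(*fragment_a, *fragment_b)

print(stitched(torch.randn(1, 8)).shape)  # torch.Size([1, 4])
```

The actual paper has to select and align fragments that don't match up this neatly; the sketch only shows the composition step.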
Or this one: Massive Language Models Can Be Accurately Pruned in One-Shot. (link) Maybe it means we will be able to run GPT-3-size models on a gaming desktop instead of a $150,000 machine.
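The paper's one-shot method is more sophisticated, but even plain magnitude pruning (the simplified stand-in sketched below, not the paper's algorithm) shows the basic move of zeroing most weights without retraining:

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude entries of a weight matrix in one shot."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

w = torch.randn(4096, 4096)          # one layer's weights
w_sparse = magnitude_prune(w, 0.5)   # ~50% zeros, no retraining
print((w_sparse == 0).float().mean())  # ≈ 0.5
```

Sparse weights compress well and, with the right kernels, skip most of the multiply-adds, which is where the "gaming desktop" hope comes from.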
Cryptizard t1_j39dcvq wrote
I can’t find any evidence of this happening.
Scarlet_pot2 OP t1_j39g574 wrote
https://en.wikipedia.org/wiki/Word2vec
"Word2vec is a technique for natural language processing (NLP) published in 2013 (Google). The word2vec algorithm uses a neural network model to learn word associations from a large corpus of text."
This was the first "guess the next word" model.
This next link is the "Attention Is All You Need" paper (https://arxiv.org/abs/1706.03762), which described how to build a transformer model for the first time.
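For reference, the core operation that paper introduced, scaled dot-product attention, fits in a few lines (a minimal sketch that ignores multi-head projections and masking):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """softmax(QK^T / sqrt(d_k)) V, the heart of the transformer."""
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # query/key similarities
    weights = F.softmax(scores, dim=-1)            # attention distribution
    return weights @ v                             # weighted sum of values

q = k = v = torch.randn(1, 6, 64)  # (batch, sequence length, dimension)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 6, 64])
```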
These two discoveries didn't take millions or billions in funding. They were made by small groups of passionate people, and their work led to the LLMs of today. We need to find new methods that would be similarly disruptive when extrapolated out, and the more people we have working on it, the better the chance we have of finding things like these. IMO these are parts of the future AGI, or at least important steps toward it. It doesn't take ungodly amounts of money to make important innovations like these.
Cryptizard t1_j39gpo3 wrote
They all have PhDs in AI though…
Scarlet_pot2 OP t1_j39hw2h wrote
Let's say there's a group of passionate PhDs, self-funded; over time they have a 20% chance of finding an innovation or discovery in AI.
Now let's say there is another group of intermediates and beginners, self-funded; over time they have a 2% chance of making a discovery in AI.
But for the second example, there are 10 of those teams. All the teams mentioned are trying different things. If the end goal is advancement toward AGI, they should all be encouraged to keep trying and sharing, right?
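If you take those numbers at face value and assume the teams are independent, ten 2% teams are nearly as good a bet as one 20% team:

```python
# P(at least one success) = 1 - P(all fail), assuming independent teams.
p_single = 0.02
n_teams = 10
p_at_least_one = 1 - (1 - p_single) ** n_teams
print(round(p_at_least_one, 3))  # 0.183
```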
Cryptizard t1_j39jqjy wrote
I am claiming, though, that amateurs and enthusiasts are incapable of contributing to state-of-the-art AI. There is too much accumulated knowledge. If there were a low but real chance of building AGI from first principles, it would have already happened sometime in the 50 years people have been working on it. If, instead, it is like every other field of science, you need to build the next thing with at least a deep understanding of the previous thing.
Your examples might not have had a lot of money behind them, but the people involved were all certainly experts in AI and knew what they were doing.
jokokokok t1_j3hkymd wrote
FYI, Word2vec is just a type of word embedding, and word embeddings have a much older history.