Submitted by yazriel0 t3_10nhbfl in MachineLearning
mocny-chlapik t1_j69h5ud wrote
More and more information is coming out about the huge human annotation efforts going on at OpenAI. It seems the missing secret ingredient was money, which can buy you lots of relevant data. This has several implications: (1) it might be impossible to replicate some of these models without millions of dollars invested in similar data collection efforts; (2) the range of applications may actually be broader than previously thought, if we are willing to pay people to generate the data; (3) they were apparently not able to find significant improvements from scaling alone anymore. The scaling era might be nearly over.
visarga t1_j6aeq98 wrote
Scaling model size continues, but obtaining more organic data is over; we are at the limit. So the only way forward is to generate more, and they need humans in the loop to check quality. It's also possible to generate data and verify it with math, code execution, simulation, or other means. And AnthropicAI showed a pure-LLM way to bootstrap more data (RLAIF, or Constitutional AI).
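A minimal sketch of that generate-then-verify loop, with code execution as the verifier. The `samples` here stand in for hypothetical model outputs; only candidates that pass programmatic checks survive as training data:

```python
# Hedged sketch: keep LLM-generated code samples only if they pass known tests.

def run_candidate(code: str, test_input: int):
    """Execute a generated snippet that must define solve(); return its output, or None on failure."""
    namespace: dict = {}
    try:
        exec(code, namespace)
        return namespace["solve"](test_input)
    except Exception:
        return None

def verified(candidates: list[str], tests: list[tuple[int, int]]) -> list[str]:
    """Keep only candidates that pass every (input, expected_output) test."""
    return [c for c in candidates
            if all(run_candidate(c, x) == y for x, y in tests)]

# Two hypothetical model samples for the task "double the input":
samples = ["def solve(x):\n    return x * 2",
           "def solve(x):\n    return x + 2"]
print(verified(samples, [(1, 2), (3, 6)]))  # only the correct sample survives
```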
I bet OpenAI is just taking the quickest route now. For example, we know that instruction-tuning on ~1,800 tasks makes a model generalise to many more unseen tasks (Flan-T5). But OpenAI might have 10,000 tasks to train their model on, hence its superior abilities. They also put more effort into RLHF, so they got a more helpful model.
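For a sense of what "training on N tasks" means in practice, here's a hedged sketch of Flan-style formatting: every task gets rendered into the same text-to-text shape. The templates are invented, not the actual Flan collection:

```python
# Illustrative Flan-style task formatting; templates are made up.
templates = {
    "sentiment": "Is the sentiment of this review positive or negative?\n\n{text}",
    "summarize": "Summarize the following article in one sentence:\n\n{text}",
}

def to_instruction(task: str, text: str, target: str) -> dict:
    """Render one (task, example) pair into a text-to-text training example."""
    return {"input": templates[task].format(text=text), "target": target}

print(to_instruction("sentiment", "Great movie, I loved it.", "positive"))
```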
Besides pure organic text, there are other sources. Transcribed or described video is a big one: they released the Whisper model, and it's possible they are using it to transcribe massive video datasets. Then there are walled gardens; social networks generate tons of text, though not of the best quality. There is also the possibility of dressing data collection up as gameplay and getting people to buy into providing exactly what is needed.
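For reference, a minimal transcription loop with the open-source Whisper package (`pip install openai-whisper`). The file names are placeholders, and whether OpenAI runs anything like this at scale is speculation:

```python
# Minimal Whisper transcription sketch; paths are illustrative.
import whisper

model = whisper.load_model("base")  # larger checkpoints trade speed for accuracy

for path in ["clip_001.mp3", "clip_002.mp3"]:  # hypothetical audio files
    result = model.transcribe(path)
    print(path, "->", result["text"][:80])
```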
VirtualHat t1_j6bi3xk wrote
Video and audio might be the next frontier, though I'm not too sure how useful it would be. YouTube receives over 500 hours of uploads per minute, providing an essentially unlimited pipe of training data.
luaks1337 t1_j6chxhv wrote
Also, spoken words differ a lot from thoughtfully written text. Training on 1:1 transcriptions would yield poor results in terms of grammar and readability. They could solve this by using a GPT model to rewrite the transcriptions, but then you're training AI on AI output, which could introduce bias.
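A hedged sketch of that cleanup pass using the OpenAI Python client; the model name is a placeholder, and the caveat about training AI on AI output still applies:

```python
# Sketch: rewrite a raw transcript into clean prose with a chat model.
# Requires `pip install openai` and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def clean_transcript(raw: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any capable chat model
        messages=[
            {"role": "system",
             "content": "Rewrite this transcript as clean, grammatical prose. Preserve the meaning."},
            {"role": "user", "content": raw},
        ],
    )
    return response.choices[0].message.content

print(clean_transcript("so um basically the uh model it kinda predicts the next word"))
```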
VirtualHat t1_j6ckblf wrote
I was thinking of next-frame prediction, perhaps conditioned on a text description or a transcript. The idea is that you could then use the model to generate video from a text prompt.
I suspect this is far too difficult to achieve with current algorithms. It's just interesting that the training data is all there, and it would be many, many orders of magnitude larger than GPT-3's training set.
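To make the idea concrete, here's a toy PyTorch sketch of text-conditioned next-frame prediction. The dimensions and architecture are purely illustrative, not any real system:

```python
# Toy next-frame predictor: current frame embedding + text embedding in,
# predicted next-frame embedding out. Illustrative only.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self, frame_dim: int = 512, text_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(frame_dim + text_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, frame_dim),  # embedding of the predicted next frame
        )

    def forward(self, frame_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([frame_emb, text_emb], dim=-1))

model = NextFramePredictor()
frame = torch.randn(1, 512)    # e.g. output of a vision encoder
caption = torch.randn(1, 256)  # e.g. output of a text encoder
print(model(frame, caption).shape)  # torch.Size([1, 512])
```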
luaks1337 t1_j6clz9v wrote
Ah, I thought you meant that video and audio would be the next step for text mining.
I believe OpenAI confirmed that they are already working on a text-to-video model. My guess is that current algorithms could do it, but that it would be far too expensive to train on video.
currentscurrents t1_j6btqta wrote
Frankly though, there has to be a way to do it with less data. The typical human brain has heard maybe a million words of English and seen about 8000 hours of video per year of life (and that's assuming dreams somehow count as generative training data; halve that if you only count the waking world).
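The back-of-envelope arithmetic behind those figures:

```python
# Pure arithmetic, no claims about the brain.
hours_per_year = 24 * 365        # 8760, so ~8000 is a round-down
implied_per_day = 8000 / 365     # ~21.9 hours/day
print(hours_per_year, round(implied_per_day, 1))
```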
We need something beyond transformers. They were a great breakthrough in 2017, but we're not going to get to AGI just by scaling them up.
visarga t1_j6c1rmo wrote
Humans are harder to scale, and it took billions of years of evolution, with enormous resource and energy usage, to get here. A brain trained by evolution is already fit for the environmental niche it has to inhabit. An AI model has none of that: no evolution selecting its internal structure to be optimal. So it has to compensate by learning these things from tons of raw data. We are great at some tasks that relate to our survival, but bad at others, even worse than other animals or AIs; we are not generally intelligent either.
Also, most AIs don't have real-time interaction with the world. They only have restricted text interfaces or APIs: no robotic bodies, no way to perform interventions that distinguish causal relations from correlations. When an AI has a feedback loop with the environment, it gets much better at solving tasks.
vivehelpme t1_j6cno58 wrote
22 hours of video content per day?
currentscurrents t1_j6e4get wrote
I rounded. Data collection is like astronomy: it's the order of magnitude that matters.
MysteryInc152 t1_j6jkmus wrote
The human brain has trillions of synapses (the closest biological equivalent to parameters), is multimodal, and was fine-tuned by evolution.
currentscurrents t1_j6m3ik5 wrote
We could make models with trillions of parameters, but we wouldn't have enough data to train them. Multimodality definitely enables some interesting things, but all existing multimodal models still require billions of training examples.
More efficient architectures must be possible - evolution has probably discovered one of them.
londons_explorer t1_j6al3tb wrote
>They were not able to find significant improvements with scaling anymore.
GPT-3 has a context window of 2048 tokens; ChatGPT has a window of 8192 tokens. Compute cost grows superlinearly with window size (the attention term is quadratic in sequence length), so I suspect the compute required for ChatGPT is a minimum of 10x what GPT-3 used. And GPT-3 cost ~$12M to train (at market rates; I assume they got a deep discount).
So I suspect they did scale compute as much as they could afford.
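The rough arithmetic behind that estimate, assuming attention dominates (it scales quadratically with context length, while the per-token terms scale linearly):

```python
# Scaling arithmetic for a 2048 -> 8192 token context window.
old_ctx, new_ctx = 2048, 8192
linear_factor = new_ctx / old_ctx            # 4x for the per-token terms
attention_factor = (new_ctx / old_ctx) ** 2  # 16x for the attention term
print(linear_factor, attention_factor)       # overall cost lands in between
```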
pancomputationalist t1_j69yowb wrote
Couldn't you train on the output of Codex itself? It might be legally dubious, but so is a lot of the training behind these AIs in the first place.
frequenttimetraveler t1_j6aewni wrote
It also means that a crowdsourcing effort will dwarf whatever data-collection effort OpenAI is buying.