Submitted by donnygel t3_11rjm6h in technology
CactusSmackedus t1_jcdmi15 wrote
Reply to comment by chuntus in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
It's not, the commenter doesn't know what they're talking about. There's a paper out in the last few days (I think) showing that a weaker model can be fine-tuned on input/output pairs from a stronger model and approximate the stronger model's results. This implies any model exposed through a paid or unpaid API could be subject to a sort of cloning, and it suggests that competitive moats won't hold.
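To make that concrete, here's a rough sketch of what that kind of cloning could look like, not the paper's actual method: harvest prompt/response pairs from the stronger model's API and fine-tune a small open model on them with HuggingFace transformers. The `query_strong_model` helper, the toy prompt list, and the gpt2 stand-in student are all placeholders I'm assuming for illustration.

```python
# Rough sketch of API-output "cloning": fine-tune a small open model on
# prompt/response pairs collected from a stronger model. Model names and the
# query_strong_model() helper are placeholders, not anything from the paper.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

def query_strong_model(prompt: str) -> str:
    # Placeholder: call the stronger model's paid/unpaid API here and
    # return its completion for `prompt`.
    raise NotImplementedError

prompts = ["Explain what an API is.", "Write a haiku about GPUs."]  # toy set
pairs = [{"text": p + "\n" + query_strong_model(p)} for p in prompts]

tok = AutoTokenizer.from_pretrained("gpt2")           # stand-in "student" model
tok.pad_token = tok.eos_token                         # gpt2 has no pad token
model = AutoModelForCausalLM.from_pretrained("gpt2")

def tokenize(example):
    out = tok(example["text"], truncation=True, max_length=512,
              padding="max_length")
    out["labels"] = out["input_ids"].copy()           # causal LM: predict the text itself
    return out

ds = Dataset.from_list(pairs).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="student", num_train_epochs=3,
                           per_device_train_batch_size=1),
    train_dataset=ds,
)
trainer.train()  # the student now imitates the stronger model on these prompts
```

Scale the prompt set up to thousands of API calls and the same loop is basically the whole attack, which is why API access alone looks like enough to leak a model's behavior.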
Plus (I have yet to reproduce this since I've been away from my machine) APPARENTLY the weights for one of Facebook's models got leaked in the last week, and apparently someone managed to run the full 60B-weight model on a Raspberry Pi (very, very slowly). Two implications:
- "Stealing" weights continues to be a problem, this isn't the first set of model weights to get leaked iirc, and once a solid set of weights is out, experience with Stable Diffusion suggests there could be an explosion of use and fine-tuning.
- Very, very surprisingly (I'm going to reproduce it if I can, because if true this is amazingly cool), consumer-grade GPUs can run these LLMs in some fashion. Previous open-sourced LLMs that fit in under 16 GB of VRAM were super disappointing, because to get the model small enough to fit on the card you had to limit the number of input tokens, which means the model "sees" very few words of input with which to produce output, pretty useless. (See the toy sketch after this list for the rough memory math.)
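For a rough sense of why running these on small hardware is even plausible, here's a toy sketch of 4-bit weight quantization plus the back-of-the-envelope memory math. This is pure NumPy and purely illustrative; real runtimes use fancier blocked schemes and actually pack two weights per byte, I'm just showing the idea: ~60B parameters at fp16 is around 120 GB, at 4 bits it's around 30 GB.

```python
# Toy sketch of 4-bit weight quantization (NumPy only; real inference code
# uses blocked/grouped schemes and packs two 4-bit weights per byte).
import numpy as np

def quantize_4bit(w: np.ndarray):
    """Map float weights to 4-bit integers in [-8, 7] plus one float scale."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)  # int8 holds the 4-bit values here
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)   # one toy weight matrix
q, s = quantize_4bit(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"mean abs reconstruction error: {err:.4f}")

# Back-of-the-envelope memory for a ~60B-parameter model:
params = 60e9
print(f"fp16 : {params * 2 / 1e9:.0f} GB")    # ~120 GB, nowhere near a consumer card
print(f"int4 : {params * 0.5 / 1e9:.0f} GB")  # ~30 GB, two weights per byte
```

The point is that the big lever is shrinking the weights themselves, not chopping the context window, which is why the recent CPU/Pi ports are such a surprise.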
Now, I don't think we'll have competitive LLMs running on GPUs at home this year, but even if OpenAI continues to be super lame and political about their progress, eventually the moat will fall.
Also, all the money to be made (aside from Bing eating Google), or maybe I should say most of the value, is going to be captured by skilled consumers/users of LLMs, not by glorified compute providers.