LettucePrime OP t1_j9ntf72 wrote
Reply to comment by Clairvoidance in Question for any AI enthusiasts about an obvious (?) solution to a difficult LLM problem in society by LettucePrime
I had an enormous 10+ paragraph version of this very simple post discussing exactly some of those smaller LLMs, & while I'm not too familiar with Pygmalion, I know that the computing power necessary for the most successful models far outstrips what your average consumer can muster. Effectively I argued that, because of economic & tech pressures, the AI industry is due for a contraction pretty soon, meaning that AI-generated text would only come from an ever-dwindling pool of sources as the less popular models die out.
I abandoned it before I got there, but I did want to touch on truly small-scale LLMs & how fucked we could be in 3-5 years when any PC with a decent GPU can run a Russian Troll Farm.
Regarding privacy concerns, yeah. That's probably the best path to monetization this technology has at the moment. Training models on the business logic of individual firms & selling them an assistant capable of answering questions & circulating information through the proper channels within a company - but not outside it.
Surur t1_j9ntwmv wrote
> I know that the computing power necessary for the most successful models far outstrips what your average consumer can muster.
The training is resource-intensive. The running is not, as demonstrated by ChatGPT supporting millions of users concurrently.
Even if you need a $3000 GPU to run it, that's a trivial cost for the help it can provide.
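A rough back-of-envelope shows why (the model sizes & per-param byte counts below are assumptions for illustration, not measured figures):

```python
# Sketch: why inference is so much cheaper than training.
# All model sizes and per-param byte costs are illustrative assumptions.

def inference_vram_gb(params_billions: float, bytes_per_param: float = 2.0) -> float:
    """Approximate VRAM to hold the weights for inference (fp16 = 2 bytes/param)."""
    return params_billions * bytes_per_param

def training_vram_gb(params_billions: float) -> float:
    """Training with Adam needs weights + gradients + optimizer state,
    commonly estimated at ~16 bytes/param in mixed precision."""
    return params_billions * 16

for size in (7, 13, 175):  # hypothetical model sizes, in billions of params
    print(f"{size}B params: ~{inference_vram_gb(size):.0f} GB to run, "
          f"~{training_vram_gb(size):.0f} GB to train")
```

On those assumptions, a 7B or 13B model fits on one high-end consumer GPU (especially quantized), while training the same model takes a cluster.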
LettucePrime OP t1_j9nv0b8 wrote
Ehh no actually, that's not true. A ChatGPT inference is several times more expensive than your typical Google search, & it seems to run on the same class of hardware used to train the model, at comparable intensity.
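To put "several times more expensive" in perspective (both per-query costs here are assumed numbers, not measurements):

```python
# Illustrative only: per-query costs and volume are assumptions, not data.
search_cost = 0.0003   # assumed USD per traditional search query
llm_cost = 0.003       # assumed USD per LLM response

daily_queries = 1e9    # assumed search-engine-scale query volume
extra_per_day = daily_queries * (llm_cost - search_cost)
print(f"~{llm_cost / search_cost:.0f}x per query; "
      f"at {daily_queries:.0e} queries/day that's ${extra_per_day:,.0f}/day extra")
```

Even a small per-query gap compounds brutally at scale.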
Surur t1_j9nv53o wrote
That's not what I said lol. I said it's manageable on hardware a consumer can buy.
LettucePrime OP t1_j9nw9fu wrote
I understand you now, my apologies.
Surur t1_j9nwi3k wrote
Sure, NP, and you are partially right also lol. It may cost closer to $80,000 to have your own ChatGPT instance.
https://twitter.com/tomgoldsteincs/status/1600196988703690752
But then that sounds like a business opportunity lol.
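The arithmetic behind a figure in that ballpark (model size, GPU specs, & prices all assumed for illustration, not quoted from the linked thread):

```python
import math

# Back-of-envelope for self-hosting a GPT-3-class model.
# Every number here is an assumption.
params = 175e9            # assumed parameter count
bytes_per_param = 2       # fp16 weights
vram_needed_gb = params * bytes_per_param / 1e9   # ~350 GB of weights

gpu_vram_gb = 80          # e.g. an 80 GB datacenter GPU
gpu_price_usd = 15_000    # assumed street price per GPU

gpus = math.ceil(vram_needed_gb / gpu_vram_gb)    # weights alone, no overhead
print(f"~{vram_needed_gb:.0f} GB of weights -> {gpus} GPUs minimum, "
      f"~${gpus * gpu_price_usd:,} in hardware before activations & overhead")
```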
CubeFlipper t1_j9s5v9g wrote
>I know that the computing power necessary for the most successful models far outstrips what your average consumer can muster.
And once upon a time a useful computer would never fit in an average person's home. Ignoring all the other ways your store-everything idea wouldn't be effective, the cost of compute and the efficiency of these models are changing so fast that by the time your idea was implemented, it would already be obsolete.
LettucePrime OP t1_j9sine9 wrote
Oh no that seems a bit silly to me. The last 15 years were literally about our global "store-everything" infrastructure. If we're betting on a race between web devs encoding tiny text files & computer engineers attempting to rescale a language model of unprecedented size to hardware so efficient it's more cost-effective to run on-site than access remotely, I'm putting money on the web devs lmao
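For what it's worth, the web-dev side of that bet could be as simple as fingerprinting & logging everything a model emits. A hypothetical sketch (the scheme & all names here are my own illustration):

```python
import hashlib

# Hypothetical "store-everything" registry: fingerprint every generated
# text on the way out, then check suspect text against the log.
registry: set[str] = set()

def fingerprint(text: str) -> str:
    """Normalize case & whitespace, then hash, so trivial edits don't evade a match."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def log_generation(text: str) -> None:
    registry.add(fingerprint(text))

def was_generated(text: str) -> bool:
    return fingerprint(text) in registry

log_generation("This essay was written by a language model.")
print(was_generated("this essay was  WRITTEN by a language model."))  # True
```

Exact-match hashing only catches light edits, of course; anything fancier (shingling, embeddings) is a different trade-off, but the storage cost stays tiny either way.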