
Clairvoidance t1_j9ns42x wrote

There's the issue of locally run LLMs. It's already possible at a small scale with models like Pygmalion, but it would be an even bigger issue if there weren't small-scale models, as nothing would stop richer people from running a large language model on the down-low, or, as funny as it sounds, there might even emerge some sort of black market for LLMs

people are also seemingly very careless about what they put into LLMs

4

LettucePrime OP t1_j9ntf72 wrote

I had an enormous 10+ paragraph version of this very simple post discussing exactly some of those smaller LLMs, & while I'm not too familiar with Pygmalion, I know that the computing power necessary for the most successful models far outstrips what your average consumer can bring to bear. Effectively I argued that, because of economic & tech pressures, the AI industry is due for a contraction pretty soon, meaning that AI-generated text would only come from an ever-dwindling pool of sources as the less popular models die out.

I abandoned it before I got there, but I did want to touch on truly small-scale LLMs & how fucked we could be in 3-5 years when any PC with a decent GPU can run a Russian Troll Farm.
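To make the "decent GPU" point concrete, here's a minimal sketch of what running a small open model locally looks like with Hugging Face's `transformers` library. The model name, precision, and sampling settings are illustrative assumptions, not a recommendation of any particular setup.

```python
# Minimal sketch: generating text locally with a small open LLM.
# Assumes a single consumer GPU plus the `torch`, `transformers`,
# and `accelerate` packages; the model name is just an example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "PygmalionAI/pygmalion-6b"  # any ~6B-parameter causal LM works here

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision roughly halves VRAM use
    device_map="auto",          # puts layers on the GPU (spills to CPU if needed)
)

prompt = "Write a short, friendly reply to this forum post:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

At fp16 a ~6B-parameter model wants roughly 12-14 GB of VRAM, which is exactly the "decent GPU" territory I'm talking about.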

Regarding privacy concerns, yeah. That's probably the best path to monetization this technology has at the moment: training models on the business logic of individual firms & selling them an assistant capable of answering questions & routing them through the proper channels in a company - but not outside it.
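As a very rough sketch of what that firm-specific assistant could look like under the hood: parameter-efficient fine-tuning of an open model on a company's internal Q&A data with `peft` and `transformers`. The base model, dataset path, and hyperparameters are all hypothetical placeholders.

```python
# Sketch: LoRA fine-tuning an open model on a firm's internal Q&A data,
# so the assistant (and the data) stays on the company's own hardware.
# Every name and hyperparameter below is an illustrative assumption.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "EleutherAI/gpt-neo-1.3B"                       # placeholder open base model
data = load_dataset("json", data_files="internal_qa.jsonl")  # hypothetical internal data

tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

def tokenize(example):
    # Each record is assumed to carry a "text" field with a question/answer pair.
    return tokenizer(example["text"], truncation=True, max_length=512)

tokenized = data["train"].map(tokenize, remove_columns=data["train"].column_names)

model = AutoModelForCausalLM.from_pretrained(base_model)
# LoRA trains a small set of adapter weights instead of the whole model,
# which keeps the compute bill within reach of a single firm.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="firm-assistant",
                           num_train_epochs=1, per_device_train_batch_size=2),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The whole pitch is that the adapter weights & the training data never leave the company's own machines.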

4

Surur t1_j9ntwmv wrote

> I know that the computing power necessary for the most successful models far outstrips what your average consumer can bring to bear.

The training is resource-intensive. The running is not, which is demonstrated by ChatGPT supporting millions of users concurrently.

Even if you need a $3000 GPU to run it, that's a trivial cost for the help it can provide.
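For a rough sense of what "running it" actually takes, here's a back-of-the-envelope estimate of inference VRAM at different precisions. The model sizes and the 20% overhead factor are assumptions for illustration, not measurements of any specific product.

```python
# Back-of-the-envelope VRAM estimate for running (not training) an LLM.
# Rule of thumb: weights need (parameters x bytes per parameter), plus
# some overhead for activations and the KV cache. The 20% overhead and
# the model sizes below are illustrative assumptions.

def inference_vram_gb(params_billion, bytes_per_param, overhead=1.2):
    # parameters (billions) x bytes each ~= gigabytes of weights
    return params_billion * bytes_per_param * overhead

for name, params in [("6B model", 6), ("13B model", 13), ("175B model", 175)]:
    for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
        print(f"{name} @ {precision}: ~{inference_vram_gb(params, nbytes):.0f} GB")
```

By that arithmetic a 6B-13B model quantized to int8 or int4 fits on a single high-end consumer card, while a 175B-class model needs hundreds of GB and stays in the data center.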

3

LettucePrime OP t1_j9nv0b8 wrote

Ehh no, actually, that's not true. ChatGPT inferences are several times more expensive than your typical Google search, & they run on the same class of hardware used to train the model, seemingly at a similar intensity.
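The search-vs-chatbot comparison usually comes from back-of-the-envelope math like the following; every number below is a made-up placeholder to show the shape of the estimate, not a real figure for either service.

```python
# Sketch of how per-query inference cost estimates are typically built.
# Every input number here is a hypothetical placeholder, not real data.

gpu_cost_per_hour = 2.00       # assumed cloud price for one accelerator ($/hr)
gpus_per_replica = 8           # assumed GPUs needed to serve one copy of the model
queries_per_replica_sec = 1.0  # assumed throughput of that replica (queries/sec)

llm_cost_per_query = (gpu_cost_per_hour / 3600) * gpus_per_replica / queries_per_replica_sec
search_cost_per_query = 0.0003  # hypothetical cost of a conventional search query

print(f"~${llm_cost_per_query:.4f} per LLM query under these assumptions")
print(f"~{llm_cost_per_query / search_cost_per_query:.0f}x the assumed search cost")
```

Whatever the real inputs are, the structure of the comparison is the same: accelerator-seconds of generation per query versus a lookup against a pre-built index.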

1

Surur t1_j9nv53o wrote

That's not what I said lol. I said it's manageable on hardware a consumer can buy.

3

CubeFlipper t1_j9s5v9g wrote

>I know that the computing power necessary for the most successful models far outstrips what your average consumer can bring to bear.

And once upon a time a useful computer would never have fit in an average person's home. Ignoring all the other ways your store-everything idea wouldn't be effective, the cost of compute and the efficiency of these models are changing so fast that by the time your idea was implemented, it would already be obsolete.

1

LettucePrime OP t1_j9sine9 wrote

Oh no, that seems a bit silly to me. The last 15 years were literally about building our global "store-everything" infrastructure. If we're betting on a race between web devs encoding tiny text files & computer engineers trying to rescale a language model of unprecedented size onto hardware so efficient that it's more cost-effective to run on-site than to access remotely, I'm putting money on the web devs lmao

1