
TouchCommercial5022 t1_j1mq4cf wrote

Submission Statement:

"The long-term trend has been that new technologies tend to exacerbate precariousness. Large, profitable industries typically turn away new entrants until they incorporate emerging technologies into their existing workflows."

This article offers a very interesting way to look at the generative AI revolution of 2022. As with previous IT revolutions such as social media, it is the profit interests of the businesses involved that will likely determine how this technology shapes our future.

Blade Runner dystopia confirmed, understood.

I can't imagine how much cost they're racking up. I mean, they're already monetizing GPT-3, so it's pretty clear what they're going to do next. This is the "gain publicity and users" phase; making money will come soon enough.

They'll earn a lot of money from user and company subscriptions to access ChatGPT and their other services. It's free right now, but it won't be for long.

This is also why they're trying to take Stable Diffusion off the table, to incorporate it into the next Adobe release or something.

I really appreciate and admire what Stable Diffusion did. A few weeks after DALL-E and Midjourney made the rounds with their paid private services, they simply went out and released their work openly: open source and free to play with at home. They threw a whole new "industry" that was just beginning to capitalize under the bus. The fucking rich who invested in DALL-E must have been furious.

So now only the rich will benefit from AI; the poor will eat shit, as usual.

And with AI replacing the poor, very soon the rich won't need the poor at all.

But I would pay for ChatGPT.

I can afford GPT-3 right now: 50,000 tokens (about 37,500 words, input and output counted) for $1. GPT-3 is almost as good as ChatGPT in many ways.

$50 will get you 2.5 million tokens, or about 1.9 million words. An average page contains 500 words, so say your average query is half a page, 250 words, input and output combined. That makes those $50 roughly 7,500 individual queries.

So basically you can buy it at that price right now (in the form of GPT-3), except you don't pay monthly, you pay per token, so you could spread those queries out over many months if you wanted.
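
For the curious, here's that back-of-the-envelope in Python. The per-token price and the words-per-token ratio come from the comment above; everything else is plain arithmetic.

```python
# Back-of-the-envelope for the GPT-3 pricing discussed above.
# $1 per 50,000 tokens and ~0.75 words/token are from the comment.
TOKENS_PER_DOLLAR = 50_000
WORDS_PER_TOKEN = 0.75          # 37,500 words per 50,000 tokens

budget = 50                               # dollars
tokens = budget * TOKENS_PER_DOLLAR       # 2,500,000 tokens
words = tokens * WORDS_PER_TOKEN          # 1,875,000 words

words_per_query = 250                     # half a page, input + output
print(f"{tokens:,} tokens ≈ {words:,.0f} words ≈ {words / words_per_query:,.0f} queries")
# 2,500,000 tokens ≈ 1,875,000 words ≈ 7,500 queries
```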

I suspect ChatGPT will have similar prices.

What I really can't wait to see and use is GPT-4.

The genie is out of the bottle. It's all open source, so unless they start banning personal computer ownership and co-op working, there's going to be weird AI for the masses for the foreseeable future.

The analogy with social networks is incorrect, because social networks require everyone to be on the same network. The best analogy is the app store. There will be big players and little players, but getting locked out will only happen in the most extreme cases, and those guys will continue to thrive in their own corners.

15

visarga t1_j1my2tt wrote

If you want ChatGPT to incorporate information from outside sources, you have to paste search results into the context. That can easily run 4,000 tokens. Every interaction afterwards pays that same ~4,000-token price, because the whole history is resent each turn. You'd hit $1 after about 10 replies.
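
To make that concrete, here's a rough cost model in Python. The $1 per 50,000 tokens price comes from upthread; the 500-token question-plus-reply size per turn is my assumption.

```python
# Cost of resending a ~4,000-token pasted context on every turn.
PRICE_PER_TOKEN = 1 / 50_000   # dollars, per the GPT-3 figure upthread
CONTEXT_TOKENS = 4_000         # pasted search results, resent each turn
TURN_TOKENS = 500              # assumed question + reply per turn

total = 0.0
for turn in range(1, 11):
    # every turn pays for the full pasted context plus the new exchange
    total += (CONTEXT_TOKENS + TURN_TOKENS) * PRICE_PER_TOKEN
print(f"~${total:.2f} after 10 replies")   # ~$0.90, close to the $1 quoted
```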

You'd need to do this whenever you want to summarize, ask questions about a reference article, or just use ChatGPT as a layer on top of search, like you.com/chat.

It's not cheap enough to use in bulk, for example to validate Wikipedia references; you'd need to call the model millions of times.
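
Quick arithmetic on that bulk case, using the same per-token price; the tokens-per-call figure is an assumption based on the context size above.

```python
# Ballpark cost of a bulk job such as validating references.
calls = 1_000_000
tokens_per_call = 4_000        # assumed context + reply per call
cost = calls * tokens_per_call * (1 / 50_000)
print(f"${cost:,.0f} per million calls")   # $80,000 per million calls
```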

12

blueSGL t1_j1n8084 wrote

They seem to be getting clever, especially around certain concepts. I doubt they've hard-coded training around [subject] such that the returned text is always [block text from OpenAI]; more likely they've trained it to return a [keyword token] when [subject] gets mentioned, and that token is what pulls in the [block text from OpenAI].

You can bet they're going to work hard, with every trick they can think of, to cut inference cost. Keeping a lookup table for a lot of common things and getting the model to return a [keyword token] that activates these would be one way of going about it.

That's also likely how this sort of system would work in tech support. You don't need the system waxing lyrical over [step (n)]; you just need it to tell the customer to perform [step (n)], with maybe a little fluff at the start or the end to make things flow smoother.
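
A minimal sketch of that lookup-table idea; the token names and canned blocks here are hypothetical, not anything OpenAI has confirmed.

```python
# Instead of generating a long canned passage token by token, the
# model emits a short keyword token and the serving layer swaps in
# stored text, saving inference cost on common responses.
CANNED_BLOCKS = {
    "[POLICY_BLOCK]": "I'm sorry, but I can't help with that request.",
    "[RESET_STEPS]": "1. Hold the power button for 10 seconds. 2. Wait, then restart.",
}

def postprocess(model_output: str) -> str:
    # replace any keyword tokens the model emitted with the stored text
    for token, block in CANNED_BLOCKS.items():
        model_output = model_output.replace(token, block)
    return model_output

print(postprocess("Sure, try this: [RESET_STEPS] Let me know how it goes."))
```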

1

SnipingNinja t1_j1ni87i wrote

Look at Google's CaLM; it's trying to solve this exact issue, AFAICT.

2

breadsniffer00 t1_j1ogu5g wrote

“Only the rich will benefit from AI.” Average ppl were using GPT-3 before ChatGPT. It’s inexpensive, and they even gave $18 in free credit. Not everything has to fit this dystopian anti-capitalist narrative you’re creating.

8

imlaggingsobad t1_j1oqq9c wrote

Also, these are still very early days. Computers started off expensive; so did phones, gaming consoles, and TVs. But now we have a huge market with many affordable options.

2

GuyWithLag t1_j1q43hg wrote

There are good indications that one can trade off training time and corpus size against model size, making the post-training cost per execution smaller.
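
A minimal sketch of that trade-off, using the parametric loss fit from Chinchilla-style scaling laws (Hoffmann et al., 2022); the constants are their approximate published values, and the model and data sizes below are illustrative, not anyone's actual plans.

```python
# L(N, D) = E + A/N^a + B/D^b, with N = parameters, D = training tokens.
E, A, B, a, b = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**a + B / n_tokens**b

# A 70B model on 1.4T tokens and a 10B model on 15T tokens land at
# nearly the same loss (~1.94), but the smaller model is far cheaper
# to run per query once trained.
print(loss(70e9, 1.4e12), loss(10e9, 15e12))
```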

Note that ChatGPT is already useful to a great many people, but training a new version takes time. I'm guessing OpenAI is still in the iterative development phase, and each iteration needs to be short because it's still very early in the AI game.

1

DukkyDrake t1_j1mu3sl wrote

>I can't imagine how much cost they're racking up

I've seen estimates from $3M/day to $3M/month for ChatGPT compute.

>average is probably single-digits cents per chat; trying to figure out more precisely and also how we can optimize it
>
>— Sam Altman (@sama) December 5, 2022
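
Those two figures only reconcile at particular traffic levels; here's a quick sanity check where the chats-per-day numbers are assumptions, not anything reported.

```python
# Per-chat cost in "single-digit cents" vs. the daily estimates above.
cost_per_chat = 0.05   # 5 cents, the upper end of single digits
for chats_per_day in (2_000_000, 60_000_000):
    print(f"{chats_per_day:,} chats/day -> ${cost_per_chat * chats_per_day:,.0f}/day")
# 2,000,000 chats/day  -> $100,000/day (~$3M/month)
# 60,000,000 chats/day -> $3,000,000/day
```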

6