
treedmt t1_j29yope wrote

LUCI is also built on a fine-tuned GPT-3.5 model, so it's pretty close to ChatGPT in terms of capabilities.

They have a very different monetisation model, afaik: they're tokenising the promise of future revenue instead of charging customers up front.

> if the training data is worth less than the inference cost.

The thesis is that training data could be worth much more than inference cost, if it is high quality, unique, and targeted to one format (e.g. problem:solution or question:answer).

In fact, I believe they're rolling out "ask-to-earn" very shortly, which will reward users with Luci credits for asking high-quality questions and rating the answers. The focus appears to be solely on accumulating a massive, high-quality Q&A database, which should be far more valuable in the future.

I'm not aware of any rate limits yet, though some may naturally be applied to prevent spam etc. Still, keeping the base model free is core to their data-collection strategy.


theRIAA t1_j2ejmws wrote

> so pretty close to chatgpt in terms of capabilities

I was impressed that it could give me generic working one-liners, but that's still quite far from writing a working 100+ line program in any major language, as ChatGPT can (effortlessly) do. But thank you for the link; it's still very useful.
