Submitted by Savings-Juice-9517 t3_124z1rt in singularity
Gab1024 t1_je1njhv wrote
I like it, but would have been better if it was powered by GPT-4 and not GPT-3.5
maskedpaki t1_je23g0c wrote
We'll get a GPT-4 Turbo sometime this year, I think, and 3.5 will be dead.
ArthurParkerhouse t1_je2y9t1 wrote
Unless the API cost for 3.5 drops even lower than it currently is.
maskedpaki t1_je2yji8 wrote
I dunno about that. I think at some point quality starts to matter more than cost.
I mean you can run gpt2 for free on your own gpu but no one cares to.
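For reference, running GPT-2 locally really is a few lines nowadays. A minimal sketch using the Hugging Face `transformers` library (my choice of toolchain, not one the thread mentions):

```python
# Hedged sketch: local GPT-2 inference via Hugging Face `transformers`.
# The first run downloads the GPT-2 weights (~500 MB); after that it
# runs offline on CPU or GPU, free of any per-token API cost.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Greedy decoding so the output is deterministic.
result = generator("The singularity is", max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])
```

The catch, as the comment implies, is that output quality is far below GPT-3.5, which is why "free" alone doesn't win.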
ArthurParkerhouse t1_je2z9te wrote
Depends on what's needed, I suppose. For a general chatbot it seems to work fine, plus we'll get the opportunity to fine-tune 3.5-Turbo models soon, which will be enticing at that lower price point.
PolPotLover t1_je265vf wrote
Really, sir? Have you tried? How do you compare GPT-3.5 with GPT-4? I've been using it for a while now and seeing great results.
apinanaivot t1_je24cio wrote
What makes you think it isn't?
iJeff t1_je2n91k wrote
It definitely isn't. Not only is the icon green like with GPT-3.5, it replies exactly like GPT-3.5 to certain narrative-based commands.
apinanaivot t1_je4d58k wrote
Weird, GPT-4's biggest strength was said to be its ability to use external tools proficiently, unlike GPT-3.5.
Beowuwlf t1_je4p2yu wrote
3.5 was using tools (LangChain) for months before plugins were announced
apinanaivot t1_je50pdk wrote
Doesn't change what I said.
Beowuwlf t1_je79bmd wrote
Gotcha, I misunderstood where you were going with that.
iJeff t1_je66e93 wrote
I suspect they just don't have enough capacity yet. As it is, I find the responses a bit slow whenever it needs to access the internet. Combine that with GPT-4's slow response times (largely due to congestion) and the 25-messages-per-3-hours limit, and it wouldn't be a great experience.