modernzen t1_je6xujz wrote
Reply to comment by rshah4 in [D] FOMO on the rapid pace of LLMs by 00001746
Totally agree with this. Something like ChatGPT is overkill for most use cases and comes at a cost in both money (API fees) and latency. Clever prompting and fine-tuning can get you free, fast models tailored to the specific problem at hand.
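
Roughly what I mean, as a minimal sketch (assuming a text classification task and the Hugging Face `transformers`/`datasets` libraries; the DistilBERT model and IMDB dataset are just placeholders for your own task and data):

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder dataset/model: swap in your own task-specific data and a small backbone.
dataset = load_dataset("imdb")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Tokenize raw text into fixed-length input IDs for the model.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

# Small classifier head on top of DistilBERT - cheap to fine-tune and fast at inference.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

args = TrainingArguments(
    output_dir="out",
    per_device_train_batch_size=16,
    num_train_epochs=1,
)

trainer = Trainer(
    model=model,
    args=args,
    # Subsample just to keep this sketch quick; use your full dataset in practice.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
    eval_dataset=tokenized["test"].select(range(500)),
)
trainer.train()
```

Something this size runs on a single consumer GPU (or even CPU for small data) and serves predictions locally with no per-call API cost.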