Submitted by gsvclass t3_11ak97p in MachineLearning
cthorrez t1_j9xstlw wrote
Reply to comment by gsvclass in [P] Minds - A JS library to build LLM powered backends and workflows (OpenAI & Cohere) by gsvclass
People are rushing to deploy LLMs in search, summarization, virtual assistants, question answering and countless other applications where correct answers are expected.
They want the prompt to steer the model's latent space toward the answer precisely because they want the LLM to output the correct answer.
gsvclass OP t1_j9xttgi wrote
While it may seem that way correct answers are always expected but never delivered everything works within a margin of error with humans it's pretty large and not easy to fix. Also "correct" is subjective. LLMs are language models use the knowlede embedded in their wieghts combined with the context provided by the prompt to do their best. The positive thing here is that that the margin of error is actively being reduced withn LLMs and not so with however we did this before.