farmingvillein t1_j7i4iiu wrote
Reply to comment by starstruckmon in [N] Google: An Important Next Step On Our AI Journey by EducationalCicada
> Retrieval-augmented models (whether via architecture or prompt) don't have that issue.
Err. Yes they do.
They are generally better, but this is far from a solved problem.
starstruckmon t1_j7i5qoc wrote
It's not just better; wrong information from these models is pretty rare, unless the source it's retrieving from is itself wrong. The LM basically just acts as a summarization tool.
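For the prompt-based variant, the whole pipeline is roughly the sketch below. (`search` and `generate` are hypothetical stand-ins for a real retriever and a real LM endpoint, not any specific API.)

```python
# Minimal sketch of prompt-based retrieval augmentation.
# search() and generate() are hypothetical stubs: swap in a real
# search API and a real language-model call.

def search(query: str, k: int = 3) -> list[str]:
    """Return the top-k text snippets for the query (stub)."""
    raise NotImplementedError

def generate(prompt: str) -> str:
    """Call some language model on the prompt (stub)."""
    raise NotImplementedError

def answer(query: str) -> str:
    snippets = search(query)
    context = "\n\n".join(snippets)
    prompt = (
        "Answer the question using ONLY the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )
    return generate(prompt)
```

The LM never has to "know" the answer; it only has to compress whatever the retriever hands it.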
I don't think it needs to be 100% resolved for it to be a viable replacement for a search engine.
farmingvillein t1_j7ibgcn wrote
> wrong information from these models is pretty rare
This is not borne out at all by the literature. What are you basing this on?
There are still significant problems, everything from ambiguous source material ("President Obama today said...", "President Trump today said...": who is the U.S. President?) to multi-step questions, where the model happily hallucinates once one link in the logic chain breaks down.
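To make the ambiguity point concrete, imagine the retriever returns snippets like these (a hypothetical example, with publication dates stripped):

```python
# Hypothetical retrieved snippets for "Who is the U.S. President?".
# Each was accurate when its source was written, so a model that
# "just summarizes" has no grounded way to pick between them.
snippets = [
    "President Obama today said the economic recovery is on track ...",
    "President Trump today said the economic recovery is on track ...",
]
```

Both snippets are faithful to their sources, so "the source is false" doesn't explain the failure; the failure is in reconciling them.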
Retrieval models are conceptually very cool, and seem very promising, but statements like "pretty rare" and "don't have that issue" are nonsense--at least on the basis of published SOTA.
Statements like
> I don't think it needs to be 100% resolved for it to be a viable replacement for a search engine.
are fine--but this is a qualitative value judgment, not something grounded in current published SOTA.
Obviously, if you are sitting at Google Brain and privy to next-gen unpublished solutions, my hat is off to you.
starstruckmon t1_j7ie3ad wrote
Fair enough. I was speaking from a practical perspective, considering the types of questions that people typically ask search engines, not benchmarks.
RobbinDeBank t1_j7ky0ju wrote
Nice try. What are you hiding at Google Brain?