TFenrir t1_j9glyon wrote
First, Google has the best language models we know about if we look at benchmark results, with its PaLM model.
Second, Google has a much higher standard for what they have been willing to release (which seems to be changing because of the competition).
Third, DeepMind will be releasing their own LLM (Sparrow) - which will most likely be quite capable, as well as accurate.
Fourth, Google will be releasing LaMDA (which powers Bard) soon, and there's no data showing it's any less proficient than other models out there. There are rumours that the smaller model behind Bard might not be competitive enough to impress, but it would be cheap enough to scale to more users.
Fifth, it's important to remember that both ChatGPT and Sydney make numerous mistakes; they are just in a position where they are much less scrutinized for those mistakes.
GoldenRain t1_j9k8se5 wrote
I think you missed a point, the most important one: each GPT prompt costs a few cents to serve.
It would be way too expensive to have something like that at the scale of google search.
They have to make something that is far, far cheaper.
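A quick back-of-envelope sketch makes the scale problem concrete. The numbers below are illustrative assumptions only: roughly 2 cents per prompt (the "few cents" mentioned above) and roughly 8.5 billion Google searches per day (a commonly cited public estimate).

```python
# Rough cost estimate of serving an LLM response for every search query.
# Both constants are assumptions, not measured figures.

COST_PER_PROMPT_USD = 0.02          # assumed: "a few cents" per prompt
SEARCHES_PER_DAY = 8_500_000_000    # assumed: rough public estimate

daily_cost = COST_PER_PROMPT_USD * SEARCHES_PER_DAY
yearly_cost = daily_cost * 365

print(f"Daily cost:  ${daily_cost:,.0f}")    # $170,000,000
print(f"Yearly cost: ${yearly_cost:,.0f}")   # $62,050,000,000
```

Even with these loose assumptions, the annual bill lands in the tens of billions of dollars, which is why per-query inference cost has to come down by orders of magnitude before an LLM can sit behind every search.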
GPT-5entient t1_j9hhlz5 wrote
Exactly this. Google has several very powerful LLMs in play. Looking at the slow-motion disaster that is Bing chat, it looks to me like it was perhaps prudent for them to wait a bit instead of rushing.
Berke80 OP t1_j9is9c5 wrote
Some very good and satisfactory answers here. I’m thankful.