Comments

bballerkt7 t1_j7hdy06 wrote

No way they will offer it for free like OpenAI, right?

16

mugbrushteeth t1_j7hez3r wrote

Seems like Google is really nervous and desperate about losing against OpenAI.

29

st8ic t1_j7hf8yv wrote

Given the volume of false information that ChatGPT generates, I'm surprised that Google is jumping right in with a Google-branded product. They must be really scared of what ChatGPT might do to search.

191

new_name_who_dis_ t1_j7hh479 wrote

Well, obviously. Search is a tool for information retrieval (mostly). If you have an oracle, it's much more convenient than digging through the source material and doing the research yourself, even when it's presented to you in most-relevant-first order, which is the most convenient ordering and what made Google successful in the first place.

But yes, anyone reading please don't use ChatGPT instead of google search unless you don't care about the responses being made up.

91

here_we_go_beep_boop t1_j7hic2w wrote

Hey ChatGPT, please write me a blog post announcing a bunch of new AI things from Google without mentioning ChatGPT or letting them smell our fear

193

mskogly t1_j7hk0jo wrote

Hm, feels a bit desperate. And interesting that he didn't link to any of their projects, nor to the closed Bard beta. For a company that invented PageRank, that seems just weird.

5

JackandFred t1_j7hkrwq wrote

I like to tell people GPT is more like writing an essay for English class or the SAT than a research paper for a history class. It cares about grammatical correctness (readability is a better way to put it); that's how you're graded in English. It's not graded on accuracy or truth. For the SAT they used to say you can make up quotes for the essay section, because they're grading the writing, not the content. (I realize that's dated; I don't think they do an essay anymore.)

32

bortlip t1_j7hl3ik wrote

I had chatGPT summarize this:

ChatGPT is eating our lunch. We're announcing that we intend to work on something real soon in an attempt to look proactive and not fall behind.

39

yeluapyeroc t1_j7hlb5v wrote

It's a trivial configuration option to prevent OpenAI models from hallucinating answers and have them respond with an "I don't know" equivalent. I'm sure Google sees way beyond the novelty of the current publicly accessible ChatGPT model.
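
For anyone curious, here's a minimal sketch of the kind of thing people mean, assuming the OpenAI Python SDK and a made-up system prompt; in practice this reduces, but doesn't eliminate, hallucinated answers:

```python
# Minimal sketch: nudging a model toward "I don't know" instead of guessing.
# Assumes the official openai Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Answer only if you are confident the answer is supported by well-known facts. "
    "If you are not sure, reply exactly with: I don't know."
)

def ask(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # example model choice
        temperature=0,           # reduce sampling randomness
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Who won the 2030 World Cup?"))  # ideally: "I don't know."
```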

−9

telebierro t1_j7hovot wrote

Funny how often he had to mention that they've been working on AI for years and how they used to be the pioneers. Like a hipster crying for props.

29

Sirisian t1_j7hy8td wrote

Google already has a knowledge graph, which can be used to guard against the common mistakes ChatGPT makes with trivia and basic information. Using such a system, it's possible to catch faults in the model and potentially stop some of the hallucination that can occur.

I've been hoping to see one of these companies construct and reference a complete probabilistic temporal knowledge graph. The bigger topic is being able to go from entity relationships back to training data sources to examine potential faults. I digress, this is a large topic, but it's something I've been very interested in seeing, especially since information can have a complex history with a lot of relationships. (Not just for our real timeline either. Every book has its own timeline of changing information that such a system should be able to unravel).
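
As a toy illustration of that guard-rail idea (the triples and the checker below are made up for the example; this is not how Google's actual Knowledge Graph works):

```python
# Toy knowledge-graph check: verify a model's factual claim against stored triples
# before showing it to the user. Purely illustrative; a real system would query a
# large KG service and handle entity resolution, aliases, time-scoping, etc.

KG = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Mount Everest", "height_m"): "8849",
}

def check_claim(subject: str, relation: str, claimed_object: str) -> str:
    known = KG.get((subject, relation))
    if known is None:
        return "unknown"   # KG can't confirm or deny -> flag for review
    return "supported" if known == claimed_object else "contradicted"

print(check_claim("Eiffel Tower", "located_in", "Berlin"))  # contradicted
print(check_claim("Eiffel Tower", "located_in", "Paris"))   # supported
```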

41

farmingvillein t1_j7i567e wrote

This is an interesting choice--on the one hand, understandable; on the other, if it looks worse than ChatGPT, they are going to get pretty slammed in the press.

Maaaybe they don't immediately care, in that what they are trying to do is head off Microsoft offering something really slick/compelling in Bing. Presumably, then, this is a gamble that Microsoft won't invest in incorporating a "full" chatgpt in their search.

8

memberjan6 t1_j7i5be2 wrote

Google should make available its AlphaFoo family of models. It's the ultimate game player, as in competitive games broadly defined, which would include court trials, purchase bidding, negotiations, and war games, but yes, entertainment games too. It would totally complement the generative talk models. They solve different problems amazingly well, but combined, well... dominance.

1

starstruckmon t1_j7i5qoc wrote

It's not just better; wrong information from these models is pretty rare, unless the source it's retrieving from is also false. The LM basically just acts as a summary tool.

I don't think it needs to be 100% resolved for it to be a viable replacement for a search engine.

2

datasciencepro t1_j7i6msl wrote

They already had this up their sleeve, having basically driven research in LLMs and having the largest dataset in the world. It's not haphazard jumping in, more of an "okay, we're starting to see some activity and commercial application in this space; now it's time to show what we've been working on." As a monopoly in search, it would not have made sense for Google to move first.

27

chogall t1_j7i9i4b wrote

> seriously threatened by a 10-50M(?) investment.

That's an exaggeration and oversimplification of the ads market; large advertisers do not just move and reallocate their ad budgets the way Elon Musk fires employees.

1

CrypticSplicer t1_j7i9kap wrote

I'm quite certain Google and Meta are ahead of OpenAI, but they have significantly more to lose by making models publicly available that may potentially make things up or say something offensive. On top of which, this chat search experience seems like something Google would be pretty careful with, considering how frequently they've been sued for somehow reducing page traffic to random websites.

75

Fit-Meet1359 t1_j7iaw8u wrote

Given that this was announced only minutes before Microsoft announced the event tomorrow where they're expected to unveil the new GPT-powered Bing, they are probably scared of that rather than ChatGPT. I know Bing is a joke right now, but if it suddenly becomes a far better information assistant than Google simply by virtue of its ability to chat about search results and keep the context, that poses a huge threat (if the new Bing goes viral like ChatGPT did).

But it doesn't sound like Bard is going to be linked to the Google search engine just yet. The article mentions separate AI search integrations coming soon, but from the screenshots it just seems to generate a paragraph or two about the search, without citations.

23

farmingvillein t1_j7ibgcn wrote

> wrong information from these models is pretty rare

This is not borne out at all by the literature. What are you basing this on?

There are still significant problems--everything from source material being ambiguous ("President Obama today said", "President Trump today said"--who is the U.S. President?) to problems that require chains of logic happily hallucinating due to one part of the logic chain breaking down.

Retrieval models are conceptually very cool, and seem very promising, but statements like "pretty rare" and "don't have that issue" are nonsense--at least on the basis of published SOTA.

Statements like

> I don't think it needs to be 100% resolved for it to be a viable replacement for a search engine.

are fine--but this is a qualitative value judgment, not something grounded in current published SOTA.

Obviously, if you are sitting at Google Brain and privy to next-gen unpublished solutions, of course my hat is off to you.

12

HoneyChilliPotato7 t1_j7ido0d wrote

Honestly, I don't even believe the websites anymore. Today I was searching for a good sports bar in my city and couldn't find any reddit threads. I decided to give Google Search a try, but I didn't want to believe the information was true. It felt like the local bars were paying the websites to boost their rankings.

9

VelveteenAmbush t1_j7igaj9 wrote

They should be scared of both. OpenAI is capable of scaling ChatGPT and packaging a good consumer app themselves. Bing gets them faster distribution but it isn't like OpenAI is a paper tiger. Google wouldn't be able to compete with either of them in the long term if it continued to refuse to ship its own LLMs.

2

ginger_beer_m t1_j7ignoi wrote

> But yes, anyone reading please don't use ChatGPT instead of google search unless you don't care about the responses being made up.

Most people honestly don't care. They just want to get an answer quick, whether it's made up or not. This is true whether in real life or online.

11

jlaw54 t1_j7iky3o wrote

Yeah, if google wants to be competitive here they have to offer something just as good or better. A half solution won’t convert. Consumers are too smart for that in this space (overall).

1

-Rizhiy- t1_j7ilv1v wrote

I feel that they won't be trying to generate novel responses from the model, but rather take knowledge graph + relevant data from the first few responses and ask the model to summarise that/change into an answer which humans find appealing.

That way you don't have to rely on the model to remember stuff; it can access all the required information through attention.
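
Something like the sketch below, with the LLM call stubbed out since the exact model/API doesn't matter for the idea:

```python
# Sketch of "retrieve then summarize": the model never has to remember facts,
# it only rewrites the evidence it's handed. llm() is a placeholder to swap for
# a real model call.

def llm(prompt: str) -> str:
    return "(model-generated answer grounded in the snippets above)"

def answer(query: str, retrieved_snippets: list[str]) -> str:
    evidence = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(retrieved_snippets))
    prompt = (
        "Using ONLY the numbered snippets below, answer the question. "
        "Cite snippet numbers. If the snippets don't contain the answer, say so.\n\n"
        f"{evidence}\n\nQuestion: {query}\nAnswer:"
    )
    return llm(prompt)

snippets = [
    "Bard is an experimental conversational AI service from Google, powered by LaMDA.",
    "Google announced Bard on February 6, 2023.",
]
print(answer("What is Bard and when was it announced?", snippets))
```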

14

thiseye t1_j7imzm0 wrote

I don't think Google will release something similar publicly for free until it's relatively solid. OpenAI isn't hurt by the dumb things ChatGPT says. Google has a brand to protect and will be held to a higher standard.

Also ChatGPT won't be free for long

10

chiaboy t1_j7ivw24 wrote

Most of these "indications" are poorly sourced commentary, out-of-context internal docs, and absolute (or convenient) ignorance of the space, its history, and Google's work therein.

Go back and look at the articles. There are very few actual indications that Google is "scrambling"; they've been thinking deeply about this space for longer than most folks have known about it.

Among many other related asides, there aren't many comprehensive global (or even US) AI rules. However, Google has issued white papers and has lobbied heavily for thoughtful regulation. Google not recklessly jumping on the current AI-hype train doesn't read to me like they were caught flat-footed. Anything but.

But the headlines are catchy

22

jlaw54 t1_j7j1k33 wrote

I agree with threads of what you are saying here.

That said, I think they were "prepared" for this in a very theoretical and abstract sense. I don't think they were running around aimlessly like fools at Google HQ.

But that doesn’t mean it didn’t inherently create a shock to their system in real terms. Both can have some truth. Humans trend towards black and white absolutes, when the ground truth is most often grey.

1

chiaboy t1_j7j2bwp wrote

I agree.

They weren't shocked per se; however, OAI is clearly on their radar.

Not entirely unlike during COVID, when Zoom taught most Americans about web conferencing. Arguably good for the entire space, but the company in the public imagination probably didn't deserve all the accolades.

So the question for Google and other responsible AI companies is how to capitalize on the consumer awareness/adoption, but do it in a way that acknowledges the real constraints (that OAI is less concerned with). MSFT is already running into some of those constraints vis-à-vis the partnership (interesting to see Satya get out over his skis a little; that's not his usual MO).

4

drooobie t1_j7j5ubo wrote

The voice assistants Google Home / Alexa / Siri are certainly made obsolete by ChatGPT, but I'm not so sure about search. There is definitely a distinction between "find me an answer" and "tell me an answer", so it will be interesting to see the differences between ChatGPT and whatever Google spits out for search.

4

melodyze t1_j7j6h6t wrote

The LaMDA paper has some interesting asides at the end about training the model to dynamically query a knowledge graph for context at inference time and stitch the result back in, to retrieve ground truth, which may also allow its state to change at runtime without requiring constant retraining.

They are better positioned to deal with that problem than ChatGPT, as they already maintain what is almost certainly the world's most complete and well-maintained knowledge graph.

But yeah, while I doubt they have the confidence they would really want there, I would be pretty shocked if their tool wasn't considerably better at not being wrong on factual claims.

1

geeky_username t1_j7jcl06 wrote

Meta is fairly open with what it's doing. But it seems like their teams are disconnected so there's no coordination.

Google seems to only announce when something is approved or sufficiently polished. Or they just never show it to the public.

Apple only releases as part of a product or feature.

5

user4517proton t1_j7jda78 wrote

I'm not surprised. Honestly, Google is caught with their pants down on AI integration. They have focused on backend systems to make their ad revenue more profitable. What Microsoft is doing is adding value to the end user. That is a major shift in people's focus on what AI means to everyone, not just Google.

Microsoft is taking a very visible lead in AI for the masses by integrating ChatGPT with Bing, Microsoft 365, development tools, etc. If ChatGPT provides anything near the level of benefit that Copilot does for developers, Google has a very valid concern.

I think Microsoft's approach, focusing on end-user value, will make this event pivotal for how AI is used. Also keep in mind that Microsoft is also releasing BioGPT, and I suspect there will be a number of targeted releases in the coming weeks or months.

A brave new world...

2

keepthepace t1_j7jgm75 wrote

Google has been the biggest team player when it comes to publishing advances in AI. OpenAI has been the worst AI research publisher of the big players.

Most of the tech that made ChatGPT possible was published by Google. Worse: OpenAI does not publish the 1% of things that make ChatGPT unique (though we know enough to have a pretty good idea of what they did).

I'd be whiny in their place as well. The GPT family is not super innovative: they just ran away with an architecture mostly made by Google (Transformers/BERT), stripped it of everything that prevented huge parallelization (which many suspect included things that would have allowed it to stay "grounded" in reality), and slapped more compute on it.

30

ReasonablyBadass t1_j7jjxzq wrote

The AI wars are heating up rapidly.

The next few years are going to be nuts.

11

Mkbond007 t1_j7jrugj wrote

I just copied your text and put it in. This is the reply:

Google continues to push the boundaries of artificial intelligence and machine learning, introducing a suite of cutting-edge tools and technologies for developers, researchers, and businesses. Here are just a few of the exciting new offerings from the tech giant:

- Google AutoML: This innovative tool allows users to train custom machine learning models without any prior coding experience, making the process accessible and straightforward.
- Google AI Platform: This cloud-based platform provides a comprehensive suite of tools and resources for building, deploying, and managing machine learning models.
- Google AI Research: The company's research division continues to produce ground-breaking work in a wide range of areas, from computer vision and natural language processing to robotics and beyond.
- Google AI for Social Good: Google's philanthropic arm is using AI to tackle some of the world's biggest challenges, including climate change, healthcare, and poverty.
- Google AI Language Models: The company is also making significant investments in language modeling, developing powerful tools that can understand and generate human-like text.

These are just a few of the many exciting new offerings from Google in the field of artificial intelligence. Whether you're a developer, researcher, or business looking to leverage the power of machine learning, there's never been a better time to get started with Google AI. So why wait? Get started today and start building the future!

36

astrange t1_j7jtrfh wrote

ChatGPT's a website and any website can show you ads. Of course, it has the same issue as Gmail where users aren't going to like ads being targeted based on what they say to it.

0

backafterdeleting t1_j7ju972 wrote

The problem with ChatGPT right now is that it has no way of expressing its confidence level with regard to its own output. So if it's unsure about a possible response, it still has to write it as if it's 100% undeniable fact.
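
One often-suggested proxy (not something the ChatGPT UI exposes) is the model's own per-token log-probabilities; a tiny sketch of the arithmetic, with the numbers made up:

```python
import math

# Rough confidence proxy from per-token log-probabilities. The values here are
# made up; some completion APIs can return them, but ChatGPT's web UI does not.
token_logprobs = [-0.05, -0.10, -2.30, -0.20]  # one value per generated token

avg_logprob = sum(token_logprobs) / len(token_logprobs)
confidence = math.exp(avg_logprob)  # geometric mean of the token probabilities
print(f"confidence ~ {confidence:.2f}")  # low values suggest the model was guessing
```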

1

astrange t1_j7juabz wrote

No, they're not. ChatGPT doesn't do anything; it just responds to you. Letting it reliably do things (or even reliably return true responses) might not even be doable with the same technology.

3

karthick892 t1_j7juo51 wrote

Is there any bot that would summarise the link?

1

worriedshuffle t1_j7jvjlu wrote

For the GRE our teacher said one of the easiest ways to get a high score was to have a strong ideology. Just be a Nazi, he said.

I did not end up using that advice but maybe if I did I would’ve done even better.

3

ddavidovic t1_j7jwwc1 wrote

I think there's a lot more work to be done on that front. I tried to use ChatGPT and perplexity.ai instead of Google Search. It works for common knowledge, but once you get into more complex and niche queries it just falls apart. They're both very happy to lie to you and make up stuff, which is a huge time waste when you're trying to get work done.

2

maizeq t1_j7jwzai wrote

I can understand their (the Meta/Google engineers) frustration when perspectives like yours proliferate everywhere.

Transformers were invented at Google. OpenAI is overwhelmingly a net consumer of AI research, and incredibly closed off about the few innovations they have actually made. There is a graph somewhere of the research output of the various labs showing that, despite OpenAI's 300-400 or so employees, their publicly released open-access research is a ridiculously tiny fraction of that of other research labs. Consider the damage this might do if their success convinces management at other tech labs to be more closed off with their AI research, further concentrating the ownership of AI in the hands of a single corporation, or a select few. In this sense OpenAI is actively harming the democratisation of AI, which, given the previously unseen productivity-generating effects AI will have, seems like a dangerous place to be.

10

artsybashev t1_j7k04qr wrote

If Xi Jinping, Putin, and Trump have taught you anything, it's that being correct is absolutely useless. Just having some sort of a plan, coming up with a good story and some fact-sounding arguments, is a lot more valuable than the average person thinks. Nothing more is required to be one of the most influential people alive.

8

red75prime t1_j7k7hh0 wrote

I've run it thru GPT for your reading pleasure: "I like to tell people that GPT-3 is more like writing an essay for English class (or the SAT) than a research paper for a history class. It cares about grammatical correctness -- in other words, readability -- rather than accuracy or truth. For the SAT, they used to say "you can make up quotes", because they're grading your writing, not your content."

1

bartturner t1_j7k88ul wrote

> OpenAI is overwhelmingly a net consumer of AI research

Exactly. Not sure why people don't get this. Google has made many of the major fundamental AI breakthroughs from the last decade+.

So many fundamental things. GANs, for example.

2

Mescallan t1_j7k8aot wrote

tbh I don't think we are going to get much out of Meta until they get close to a holodeck VR experience, or a mainstream-ready AR experience. I'm sure they could drop a chatbot in the next six months, but being able to compete with Google/Microsoft is going to be hard.

Apple is going to update Siri in two years with an LLM and act like they are the saviors of the universe.

Amazon is someone I see get left out of this a lot. They have the resources and funding to make Alexa a search/chat bot as well, and it's right up their alley.

1

bartturner t1_j7k8fnb wrote

Geeze. What a bunch of nonsense. ChatGPT would NOT even be possible without Google.

Google has made most of the major fundamental AI breakthroughs in the last decade+. Google leads in every layer of the AI stack, without exception.

A big one is silicon. They started eight years ago and are now on their fifth generation. Their fourth was setting all kinds of records.

https://blog.bitvore.com/googles-tpu-pods-are-breaking-benchmark-records

3

Mescallan t1_j7k8i30 wrote

ChatGPT isn't actually free right now; everyone just gets $18 of credits, which is far more than anyone would actually use in ChatGPT, but if you are fine-tuning or analyzing bigger datasets you can burn through it pretty quickly.

1

Nhabls t1_j7kaa5a wrote

ChatGPT hasn't really "shipped" either. It's out free because they feel hemorrhaging millions per month is an okay cost for the research and PR they're getting out of it. It's not viable in the slightest.

5

Ill-Poet-3298 t1_j7kap8n wrote

Google is afraid to kill their ad business, so they're letting others pass them by. Classic business mistake. There are apparently a lot of Google stans going around telling everyone how Google invented AI, etc., but it really looks like they got caught flat-footed on this one.

0

emerging-tech-reader t1_j7kh681 wrote

> given the volume of false information that chatGPT generates

It actually generates mostly accurate information. The longer you have the conversation the more it starts to hallucinate, but it is considerably more accurate than most people.

−2

harharveryfunny t1_j7kjohr wrote

I tried perplexity.ai for the first time yesterday and was impressed by it. While it uses GPT-3.5, it's not exactly comparable to ChatGPT, since it's really an integration of Bing search with GPT-3.5, as you can tell by asking it about current events (and also by asking it about itself!). I'm not sure exactly how they've done the integration, but the gist of it seems to be that GPT/chat is being used as an interface to search, rather than ChatGPT, where the content itself is generated by GPT.

Microsoft seems to be following a similar approach, per the Bing/Chat version that popped up and disappeared a couple of days ago. It was able to cite sources, which isn't possible for GPT-generated content, which has no source as such.

2

chief167 t1_j7kkx9g wrote

It's smart of Google to wait until Microsoft burns the $10 billion, then easily surpass it.

The hype is so painful at the moment; non-technical people and sales idiots are way overselling ChatGPT.

9

harharveryfunny t1_j7kmbzr wrote

OpenAI just got a second-round $10B investment from Microsoft, so that goes a ways... They are selling API access to GPT for other companies to use however they like, Microsoft has integrated Copilot (also GPT-based, fine-tuned for code generation) into their dev tools, and Microsoft is also integrating OpenAI's LLM tech into Bing. While OpenAI is also selling access to ChatGPT to end users, I doubt that's really going to be a focus for them or a major source of revenue.

1

harharveryfunny t1_j7knqfa wrote

OpenAI trained GPT on Microsoft Azure - it has zero to do with Google's TPUs. While the "Attention Is All You Need" paper did come out of Google, it just built on models/concepts that came before. OpenAI have proven themselves plenty capable of innovating.

3

emerging-tech-reader t1_j7kptn9 wrote

I got a demo of some of the stuff happening.

The one that is most impressive is that they have GPT watching a meeting, taking minutes, and even crafting action items, emails, etc., all ready for you when you leave the meeting.

It will also offer suggestions to follow up on in the meetings as they are ongoing.

Google has become the AltaVista.

2

marr75 t1_j7ksi6o wrote

They should be. I think LLMs will totally upset how content is indexed and accessed. It's one of the easiest and lowest stakes use cases for them, really.

Unfortunately, Google has such a huge incumbent advantage that they could produce the 5th or 6th best search specialized LLM and still be the #1 search provider.

1

emerging-tech-reader t1_j7ksup6 wrote

> OpenAI is built on google research

To my knowledge that is not remotely true. Can you cite where you got that claim?

OpenAI does take funding and share research with a number of AI related companies. Don't know if Google is in that list.

2

bartturner t1_j7l64gq wrote

> OpenAI trained GPT on Microsoft Azure - it has zero to do with Google's TPU.

Geeze. ChatGPT would NOT exist if not for Google because the underlying tech was invented by Google.

OpenAI uses other people's stuff instead of inventing things themselves like Google.

Many of the big AI breakthroughs from the last decade+ have come from Google. GANs are another perfect example.

https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)

The TPUs are key to being able to bring a large language model to market at scale: not the training, but the inference side.

−1

yaosio t1_j7lnkh9 wrote

If you look at what you.com does, it cites the claims its bot makes by linking to the pages the data come from, but only sometimes. When it doesn't cite something, you can be sure that it's just making it up. In the supposed Bing leak it was doing the same thing, citing its sources.

If they can force it to always provide a source, and to stay silent when it can't, that could fix it. However, there's still the problem that the model doesn't know what's true and what's false. Just because it can cite a source doesn't mean the source is correct. This is not something the model can learn by being told; learning by being told assumes its data is correct, which can't be assumed. A researcher could tell the model "all cats are ugly", which is obviously not true, but the model will say all cats are ugly because it was taught that. Models will need a way to determine on their own what is and isn't true, and to explain their reasoning.
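
A crude sketch of that "no source, no claim" filter, with simple word overlap standing in for real attribution (which is much harder in practice):

```python
# Crude "cite it or drop it" filter: keep a generated sentence only if it overlaps
# strongly enough with some retrieved source passage. Real attribution needs far
# better matching (entailment models, span alignment); this is just the shape of it.

def supported(sentence: str, sources: list[str], threshold: float = 0.6) -> bool:
    words = set(sentence.lower().split())
    for src in sources:
        overlap = len(words & set(src.lower().split())) / max(len(words), 1)
        if overlap >= threshold:
            return True
    return False

sources = ["Bard is powered by LaMDA, Google's conversational language model."]
draft = [
    "Bard is powered by LaMDA, Google's conversational language model.",
    "Bard was trained entirely on YouTube comments.",  # unsupported -> dropped
]
answer = [s for s in draft if supported(s, sources)] or ["I don't know."]
print(" ".join(answer))
```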

1

harharveryfunny t1_j7lu67f wrote

What underlying tech are you talking about? Are you even familiar with the "Attention" paper and its relevance here? Maybe you think OpenAI uses Google's TensorFlow? They don't.

GANs were invented by Ian Goodfellow while he was a student at U. Montreal, before he ever joined Google.

No - TPUs are not key to deploying at scale unless you are targeting Google Cloud. Google is a distant third in cloud market share, behind Microsoft and Amazon. OpenAI of course deploys on Microsoft Azure, not Google.

2

bartturner t1_j7lugv5 wrote

Geeze. Who do you think invented Transformers?

https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)

NO!!! GANs were invented by Ian while he was working at Google. It is a pretty interesting story.

The vast majority of the major AI breakthroughs from the last decade+ came from Google.

OpenAI really does NOT do R&D. They mostly use the R&D from others, mostly Google's.

−3

bartturner t1_j7lza3y wrote

Go listen to the podcast; Ian explains it all. Plus, no, Schmidhuber was NOT the inventor. It was Ian.

Go listen to the podcast and get back to me.

The key AI R&D from the last decade plus has all come from Google. Not from OpenAI and most definitely not from Microsoft.

1

hemphock t1_j7mtsvp wrote

Yeah, it's been like that for years. Idk, reddit is just a well-moderated website with lots of small communities around a lot of topics. I think the lifecycle of its communities is the secret sauce: communities will peak and then get crappy (pretty reliably imo), but you can just leave and join new ones.

I don't think the 70% is a good sample though. It's a poll of user responses to androidauthority.com.

1

TheEdes t1_j7mysgv wrote

The other day I (on mobile) searched for something related to meme stocks, and the pills under the search bar showed News followed by a button that said (+ Reddit). I clicked it and it literally just added reddit to my search term.

1

astrange t1_j7oduw3 wrote

This is wishful thinking. ChatGPT, being a computer program, doesn't have features it's not designed to have, and it's not designed to have this one.

(By designed, I mean it has engineering and regression testing, so you can trust it'll work tomorrow when they redo the model.)

I agree a fine-tuned LLM can be a large part of it, but virtual assistants already have LMs and obviously don't always work that well.

2

crazymonezyy t1_j7ojv39 wrote

> But yes, anyone reading please don't use ChatGPT instead of google search unless you don't care about the responses being made up.

The general public is not reading this sub, and ChatGPT is being sold to them by marketing and sales hacks without this disclaimer. We're way past the point of PSAs.

1

danielbln t1_j7ovvql wrote

What we all want is for Alexa/Siri/Home to have modern LLM conversational features, in addition to reliably turning our lights on and off or giving us the weather. Ever since ChatGPT came out, interacting with a home assistant feels even more like pulling teeth than it used to.

1