Comments
mugbrushteeth t1_j7hez3r wrote
Seems like Google is really nervous and desperate; it's losing against OpenAI.
st8ic t1_j7hf8yv wrote
Given the volume of false information that ChatGPT generates, I'm surprised that Google is jumping right in with a Google-branded product. They must be really scared of what ChatGPT might do to search.
visarga t1_j7hgmc3 wrote
It's not their large model, it's a toy model. Expect lower quality.
> This much smaller model requires significantly less computing power, enabling us to scale to more users
farmingvillein t1_j7hgqs3 wrote
Really more about Bing... which is a statement that seems kinda crazy to write...
trendafili t1_j7hgxbu wrote
They offer everything else for free
[deleted] t1_j7hh25u wrote
[deleted]
JustOneAvailableName t1_j7hh2yv wrote
Their main source of revenue is seriously threatened by a 10-50M(?) investment. It might not be OpenAI, but something will replace Google in the coming years if Google doesn't innovate their search.
new_name_who_dis_ t1_j7hh479 wrote
Well, obviously. Search is a tool for information retrieval (mostly). If you have an oracle, it's much more convenient than digging through the source material and doing the research yourself, even when that material is presented to you in most-relevant-first order, which is the most convenient ordering and what made Google successful in the first place.
But yes, anyone reading please don't use ChatGPT instead of google search unless you don't care about the responses being made up.
bballerkt7 t1_j7hh7ri wrote
Yeah, now that I think about it, they'll probably have free access that is limited and a subscription plan for more features, like Google Colab.
[deleted] t1_j7hh9bl wrote
[removed]
pryoslice t1_j7hhykn wrote
I think they will. Their goal is to drive traffic.
here_we_go_beep_boop t1_j7hic2w wrote
Hey ChatGPT, please write me a blog post announcing a bunch of new AI things from Google without mentioning ChatGPT or letting them smell our fear
Zyansheep t1_j7hjddh wrote
Google search responses may be made up as well; it's just that there's more than one source to go through, which makes it easier to spot potential discrepancies in any one source ;)
HoneyChilliPotato7 t1_j7hjnif wrote
True, I don't remember the last time I used Google search without adding reddit at the end
mskogly t1_j7hk0jo wrote
Hm, feels a bit desperate. And interesting that he didn't link to any of their projects, nor to the closed Bard beta. For a company that invented PageRank, that seems just weird.
JackandFred t1_j7hkrwq wrote
I like to tell people GPT is more like writing an essay for English class or the SAT than a research paper for a history class. It cares about grammatical correctness (readability is a better way to put it), because that's how you're graded in English. It's not graded on accuracy or truth. For the SAT they used to say you could make up quotes for the essay section because they were grading the writing, not the content. (I realize that's dated; I don't think they do an essay anymore.)
bortlip t1_j7hl3ik wrote
I had chatGPT summarize this:
ChatGPT is eating our lunch. We're announcing that we intend to work on something real soon in an attempt to look proactive and not fall behind.
yeluapyeroc t1_j7hlb5v wrote
It's a trivial configuration option to prevent OpenAI models from hallucinating answers and have them respond with an "I don't know" equivalent. I'm sure Google sees way beyond the novelty of the current publicly accessible ChatGPT model.
yeluapyeroc t1_j7hliu2 wrote
They absolutely will include the light version into their search results for free. I doubt the model training tools for developers will be free, though.
stml t1_j7hlueg wrote
It's not like Google vets the websites that show up in Google searches all that well regardless.
VelveteenAmbush t1_j7hn77a wrote
OpenAI is powering Bing's forthcoming AI features
ktpr t1_j7ho1vo wrote
They don't care that much about what ChatGPT will do to search. They care about the advertising that users of ChatGPT won't be seeing.
telebierro t1_j7hovot wrote
Funny how often he had to mention that they've been working on AI for years and how they used to be the pioneers. Like a hipster crying for props.
here_we_go_beep_boop t1_j7hpycy wrote
Yep, the entire result space is utterly polluted by SEO trash
new_name_who_dis_ t1_j7hpz3q wrote
Well, if you see variety in the top results on Google, that might give you pause. But you're not getting that from ChatGPT.
st8ic t1_j7hquaz wrote
> It's a trivial configuration option to prevent OpenAI models from hallucinating answers and have them respond with an "I don't know" equivalent.
How?
mettle t1_j7hrck0 wrote
Is it though? How would you even do that? I think if you have that actually figured out, it's easily a $1B idea.
mirrorcoloured t1_j7hum3j wrote
I think this says more about you than Google.
djbange t1_j7hwpjb wrote
Google is only getting out in front of Microsoft, who apparently has an announcement regarding Bing and ChatGPT scheduled for tomorrow.
Sirisian t1_j7hy8td wrote
Google already has a knowledge graph which can be used to guard against common mistakes ChatGPT makes with trivia and basic information. Using such a system it's possible to prevent faults in the model and potentially stop some hallucination that can occur.
I've been hoping to see one of these companies construct and reference a complete probabilistic temporal knowledge graph. The bigger topic is being able to go from entity relationships back to training data sources to examine potential faults. I digress, this is a large topic, but it's something I've been very interested in seeing, especially since information can have a complex history with a lot of relationships. (Not just for our real timeline either. Every book has its own timeline of changing information that such a system should be able to unravel).
krzme t1_j7i0to8 wrote
Given the volume of false information that Google gives hints to…
[deleted] t1_j7i15sy wrote
[deleted]
[deleted] t1_j7i1gq8 wrote
[deleted]
farmingvillein t1_j7i2r6c wrote
Of course--but it isn't OpenAI, per se, that they are scared of; it is the Bing distribution platform.
starstruckmon t1_j7i34u8 wrote
Retrieval-augmented models (whether via architecture or prompt) don't have that issue.
Even GPT-3 API-based services like perplexity.ai, which retrieval-augment using just the prompt, don't spew wrong information all that much.
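Roughly, that prompt-level retrieval augmentation amounts to something like the sketch below (web_search and llm_complete are made-up stand-in helpers, not any particular API):

```python
# Hypothetical sketch of prompt-level retrieval augmentation.
# web_search() and llm_complete() are placeholder stubs, not a real API.

def web_search(query: str, k: int = 3) -> list[str]:
    """Return the top-k text snippets for a query (plug in a real backend)."""
    raise NotImplementedError

def llm_complete(prompt: str) -> str:
    """Call whatever language model you have access to."""
    raise NotImplementedError

def answer_with_retrieval(question: str) -> str:
    snippets = web_search(question)
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    prompt = (
        "Answer the question using ONLY the sources below. "
        "Cite them like [1]. If they don't contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
    return llm_complete(prompt)
```

The model mostly just has to paraphrase what's in front of it, which is why these services hallucinate less than a model answering purely from memory.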
farmingvillein t1_j7i4ed7 wrote
> how would you even do that?
r/yeluapyeroc just reviews each post, np
farmingvillein t1_j7i4iiu wrote
> Retrieval augmented models ( whether via architecture or prompt ) don't have that issue.
Err. Yes they do.
They are generally better, but this is far from a solved problem.
hemphock t1_j7i4tqx wrote
Yeah, but it's not just them. https://www.androidauthority.com/reddit-web-search-queries-poll-results-3119551/
farmingvillein t1_j7i567e wrote
This is an interesting choice--on the one hand, understandable; on the other, if it looks worse than ChatGPT, they are going to get pretty slammed in the press.
Maaaybe they don't immediately care, in that what they are trying to do is head off Microsoft offering something really slick/compelling in Bing. Presumably, then, this is a gamble that Microsoft won't invest in incorporating a "full" ChatGPT in their search.
memberjan6 t1_j7i5be2 wrote
Google should make available its AlphaFoo family of models. It's the ultimate game player, as in competitive games broadly defined, which would include court trials, purchase bidding, negotiations, and war games, but yes, entertainment games too. It would totally complement the generative talk models. They solve different problems amazingly well, but combined, well... dominance.
roselan t1_j7i5cig wrote
Hide your damsels.
mettle t1_j7i5ign wrote
the true human in the loop.
starstruckmon t1_j7i5qoc wrote
It's not just better; wrong information from these models is pretty rare, unless the source it is retrieving from is also false. The LM basically just acts as a summary tool.
I don't think it needs to be 100% resolved for it to be a viable replacement for a search engine.
datasciencepro t1_j7i6msl wrote
They already had this up their sleeve, having basically driven research in LLMs and having the largest dataset in the world. It's not a haphazard jumping in; it's more of an "okay, we're starting to see some activity and commercial application in this space, now it's time to show what we've been working on". As a monopoly in search, it would not have made sense for Google to move first.
chogall t1_j7i9i4b wrote
> seriously threatened by a 10-50M(?) investment.
That's an exaggeration and oversimplification of the ads market; large advertisers do not just move and reallocate their ad budgets like Elon Musk firing employees.
CrypticSplicer t1_j7i9kap wrote
I'm quite certain Google and Meta are ahead of OpenAI, but they have significantly more to lose by making models publicly available that may potentially make things up or say something offensive. On top of which, this chat search experience seems like something Google would be pretty careful with considering how frequently they've been sued because they somehow reduced page traffic to random websites.
[deleted] t1_j7ia76m wrote
[deleted]
Fit-Meet1359 t1_j7iaw8u wrote
Given that this was announced only minutes before Microsoft announced the event tomorrow where they're expected to unveil the new GPT-powered Bing, they are probably scared of that rather than ChatGPT. I know Bing is a joke right now, but if it suddenly becomes a far better information assistant than Google simply by virtue of its ability to chat about search results and keep the context, that poses a huge threat (if the new Bing goes viral like ChatGPT did).
But it doesn't sound like Bard is going to be linked to the Google search engine just yet. The article mentions separate AI search integrations coming soon, but from the screenshots it just seems to generate a paragraph or two about the search, without citations.
farmingvillein t1_j7ibgcn wrote
> wrong information from these models is pretty rare
This is not borne out at all by the literature. What are you basing this on?
There are still significant problems--everything from source material being ambiguous ("President Obama today said", "President Trump today said"--who is the U.S. President?) to problems that require chains of logic happily hallucinating due to one part of the logic chain breaking down.
Retrieval models are conceptually very cool, and seem very promising, but statements like "pretty rare" and "don't have that issue" are nonsense--at least on the basis of published SOTA.
Statements like
> I don't think it needs to be 100% resolved for it to be a viable replacement for a search engine.
are fine--but this is a qualitative value judgment, not something grounded in current published SOTA.
Obviously, if you are sitting at Google Brain and privy to next-gen unpublished solutions, of course my hat is off to you.
[deleted] t1_j7ic7vi wrote
[deleted]
clueless1245 t1_j7id4sr wrote
Lol what? That's the exact rationale "Open"AI used for not releasing the model weights for DALL-E 2 (and instead selling it to Microsoft).
HoneyChilliPotato7 t1_j7ido0d wrote
Honestly, I don't even believe the websites anymore. Today I was searching for a good sports bar in my city and couldn't find any Reddit threads. I decided to give Google search a try, but I didn't want to believe the information was true. It felt like the local bars were paying the websites to boost their rankings.
[deleted] t1_j7idu3q wrote
[deleted]
starstruckmon t1_j7ie3ad wrote
Fair enough. I was speaking from a practical perspective, considering the types of questions that people typically ask search engines, not benchmarks.
VelveteenAmbush t1_j7igaj9 wrote
They should be scared of both. OpenAI is capable of scaling ChatGPT and packaging a good consumer app themselves. Bing gets them faster distribution but it isn't like OpenAI is a paper tiger. Google wouldn't be able to compete with either of them in the long term if it continued to refuse to ship its own LLMs.
ginger_beer_m t1_j7ignoi wrote
> But yes, anyone reading please don't use ChatGPT instead of google search unless you don't care about the responses being made up.
Most people honestly don't care. They just want to get an answer quick, whether it's made up or not. This is true whether in real life or online.
HurricaneHenry t1_j7ihhyl wrote
It’ll be baked into their search engine, which is free.
jlaw54 t1_j7ihoki wrote
There are indications there has been some scrambling at Google over this. Not that they weren't armed and researched, but they didn't see this coming the way it did.
taleofbenji t1_j7ihwkj wrote
OpenAI uses tech pioneered by Google.
They didn't come out of nowhere.
clueless1245 t1_j7ijokz wrote
Nope, they're exactly the same as far as advancing human knowledge goes.
reditum t1_j7ikqi1 wrote
No, thanks
jlaw54 t1_j7iky3o wrote
Yeah, if Google wants to be competitive here they have to offer something just as good or better. A half solution won't convert. Consumers are too smart for that in this space (overall).
jlaw54 t1_j7il0ih wrote
Toss a coin…..
reditum t1_j7ilt3y wrote
You just pay with unlimited access to your soul data
-Rizhiy- t1_j7ilv1v wrote
I feel that they won't be trying to generate novel responses from the model, but rather take the knowledge graph plus relevant data from the first few results and ask the model to summarise that / turn it into an answer which humans find appealing.
That way you don't have to rely on the model to remember stuff; it can access all the required information through attention.
thiseye t1_j7imzm0 wrote
I don't think Google will release something similar publicly for free until it's relatively solid. OpenAI isn't hurt by the dumb things ChatGPT says. Google has a brand to protect and will be held to a higher standard.
Also ChatGPT won't be free for long
_ModeM t1_j7it414 wrote
Haha this is great, thanks for sharing.
AlbertaLee0116 t1_j7iv3nj wrote
chatgpt -> bard -> bing+chatgpt
chiaboy t1_j7ivw24 wrote
Most of these "indications" are poorly sourced commentary, out-of-context internal docs, and absolute (or convenient) ignorance of the space, its history, and Google's work therein.
Go back and look at the articles. There are very few actual indications Google is "scrambling"; they've been thinking deeply about this space for longer than most folks have heard about it.
Among many other related asides, there aren't many global (or even US) comprehensive AI rules. However, Google has issued white papers and has lobbied heavily for thoughtful regulation. Google not recklessly following the current AI-hype train doesn't read to me like they were caught flat-footed. Anything but.
But the headlines are catchy
joexner t1_j7j17v2 wrote
Like this one
jlaw54 t1_j7j1k33 wrote
I agree with threads of what you are saying here.
That said, I think they were "prepared" for this in a very theoretical and abstract sense. I don't think they were running around aimlessly like fools at Google HQ.
But that doesn’t mean it didn’t inherently create a shock to their system in real terms. Both can have some truth. Humans trend towards black and white absolutes, when the ground truth is most often grey.
chiaboy t1_j7j2bwp wrote
I agree.
They weren't shocked per se; however, OAI is clearly on their radar.
Not entirely unlike during COVID, when Zoom taught most Americans about web conferencing. Arguably good for the entire space, but the company in the public imagination probably didn't deserve all the accolades.
So the question for Google and other responsible AI companies is how to capitalize on the consumer awareness/adoption, but do it in a way that acknowledges the real constraints (that OAI is less concerned with). MSFT is already running into some of those constraints via the partnership (interesting to see Satya get out over his skis a little. That's not his usual MO).
[deleted] t1_j7j37nu wrote
[deleted]
[deleted] t1_j7j3cto wrote
[deleted]
netkcid t1_j7j4bhc wrote
They're oh so bad at connecting tech to the users too...
Google is about to become HotBot or Ask Jeeves or ...
drooobie t1_j7j5ubo wrote
The voice assistants Google Home / Alexa / Siri are certainly made obsolete by ChatGPT, but I'm not so sure about search. There is definitely a distinction between "find me an answer" and "tell me an answer", so it will be interesting to see the differences between ChatGPT and whatever Google spits out for search.
melodyze t1_j7j6h6t wrote
The LaMDA paper has some interesting asides at the end about training the model to dynamically query a knowledge graph for context at inference time and stitch the result back in, to retrieve ground truth, which may also allow the state to change at runtime without requiring constant retraining.
They are better positioned to deal with that problem than ChatGPT, as they already maintain what is almost certainly the world's most complete and well-maintained knowledge graph.
But yeah, while I doubt they have the confidence they would really want there, I would be pretty shocked if their tool wasn't considerably better at not being wrong on factual claims.
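As a rough sketch of what that inference-time loop could look like (the KG_QUERY tag and the generate/kg_lookup helpers are invented for illustration, not LaMDA's actual interface):

```python
# Illustrative sketch of inference-time knowledge-graph lookups, loosely in
# the spirit of the LaMDA toolset loop. KG_QUERY, generate() and kg_lookup()
# are invented names for this example.

def generate(prompt: str) -> str:
    """Stub for the base language model's next response."""
    raise NotImplementedError

def kg_lookup(query: str) -> str:
    """Stub: fetch a ground-truth fact from a knowledge graph."""
    raise NotImplementedError

def answer(user_prompt: str, max_hops: int = 3) -> str:
    context = user_prompt
    for _ in range(max_hops):
        draft = generate(context)
        if "KG_QUERY:" not in draft:
            return draft  # no lookup requested; the draft is the answer
        query = draft.split("KG_QUERY:", 1)[1].strip()
        fact = kg_lookup(query)  # retrieve ground truth at inference time
        # stitch the result back into the context and let the model continue
        context += f"\n{draft}\nKG_RESULT: {fact}"
    return generate(context)
```

Because the facts live in the graph rather than the weights, updating them doesn't require retraining the model.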
WokeAssBaller t1_j7j6u6f wrote
I think Google wins this race in the end, seeing ChatGPT be plugged into crappy Microsoft products tells me where it is heading
[deleted] t1_j7jbejn wrote
[deleted]
geeky_username t1_j7jc9iq wrote
"can we see it?"
"... No"
geeky_username t1_j7jcl06 wrote
Meta is fairly open with what it's doing. But it seems like their teams are disconnected so there's no coordination.
Google seems to only announce when it's approved or sufficiently polished, or just never shows it to the public.
Apple only releases as part of a product or feature.
geeky_username t1_j7jcp21 wrote
Maybe Cortana won't be braindead
geeky_username t1_j7jcy6k wrote
Pichai has crippled Google
user4517proton t1_j7jda78 wrote
I'm not surprised. Honestly, Google is caught with their pants down on AI integration. They have focused on backend systems to make their ad revenue more profitable. What Microsoft is doing is adding value to the end user. That is a major shift in people's focus on what AI means to everyone, not just Google.
Microsoft is taking a very visible lead in AI for the masses by integrating ChatGPT with Bing, Microsoft 365, development tools, etc. If ChatGPT provides anything near the level of benefit that Copilot does for developers, Google has a very valid concern.
I think Microsoft's approach, focusing on end-user value, will make this event pivotal for how AI is used. Also keep in mind Microsoft is releasing BioGPT as well, and I suspect there will be a number of targeted releases in the coming weeks and months.
A brave new world...
keepthepace t1_j7jgm75 wrote
Google has been the biggest team player when it comes to publishing advances in AI. OpenAI has been the worst of the big players at publishing its AI research.
Most of the tech that made ChatGPT possible was published by Google. Worse: OpenAI does not publish the 1% of things that makes ChatGPT unique (though we know enough to have a pretty good idea of what they did).
I'd be whiny in their place as well. The GPT family is not super innovative; they just ran away with an architecture mostly made by Google (Transformers/BERT), stripped it of everything that prevented huge parallelization (which many suspect included things that would have let it stay "grounded" in reality) and slapped more compute on it.
wood_orange443 t1_j7jjnfc wrote
How exactly do you think ChatGPT is going to get funded?
ReasonablyBadass t1_j7jjwj4 wrote
To our shareholders, oh valley of silicon
[deleted] t1_j7jjwin wrote
[removed]
ReasonablyBadass t1_j7jjxzq wrote
The AI wars are heating up rapidly.
The next few years are going to be nuts.
[deleted] t1_j7jncxm wrote
[removed]
opticd t1_j7jnou3 wrote
This is probably the most thoughtful take I've read in this thread. People forget how tilted the mainstream media is against big tech.
Mkbond007 t1_j7jrugj wrote
I just copied your text and put it in. This is the reply:
Google continues to push the boundaries of artificial intelligence and machine learning, introducing a suite of cutting-edge tools and technologies for developers, researchers, and businesses. Here are just a few of the exciting new offerings from the tech giant:
Google AutoML: This innovative tool allows users to train custom machine learning models without any prior coding experience, making the process accessible and straightforward. Google AI Platform: This cloud-based platform provides a comprehensive suite of tools and resources for building, deploying, and managing machine learning models. Google AI Research: The company's research division continues to produce ground-breaking work in a wide range of areas, from computer vision and natural language processing to robotics and beyond. Google AI for Social Good: Google's philanthropic arm is using AI to tackle some of the world's biggest challenges, including climate change, healthcare, and poverty. Google AI Language Models: The company is also making significant investments in language modeling, developing powerful tools that can understand and generate human-like text. These are just a few of the many exciting new offerings from Google in the field of artificial intelligence. Whether you're a developer, researcher, or business looking to leverage the power of machine learning, there's never been a better time to get started with Google AI. So why wait? Get started today and start building the future!
here_we_go_beep_boop t1_j7jsla2 wrote
Ha not far off! Some bullet points on Bard in the prompt and you're done
astrange t1_j7jtrfh wrote
ChatGPT's a website and any website can show you ads. Of course, it has the same issue as Gmail where users aren't going to like ads being targeted based on what they say to it.
astrange t1_j7ju8m8 wrote
* with your attention span to look at ads
backafterdeleting t1_j7ju972 wrote
The problem with ChatGPT right now is that it has no way of expressing its confidence level with regard to its own output. So if it's unsure about a possible response, it still has to write it as if it's 100% undeniable fact.
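To be fair, the per-token probabilities exist under the hood; a very crude confidence proxy could be surfaced like this (made-up numbers, and average log-prob is a rough signal of uncertainty, not a truth detector):

```python
import math

def sequence_confidence(token_logprobs: list[float]) -> float:
    """Geometric-mean probability of the generated tokens: a rough,
    imperfect proxy for how 'sure' the model was about its wording."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

# Made-up example: per-token log-probs returned alongside two completions.
confident = [-0.05, -0.10, -0.02, -0.08]
shaky = [-1.20, -2.50, -0.90, -3.10]

print(sequence_confidence(confident))  # ~0.94
print(sequence_confidence(shaky))      # ~0.15
```

Even then, a model can be highly confident and still wrong, which is the harder part of the problem.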
astrange t1_j7juabz wrote
No they're not. ChatGPT doesn't do anything; it just responds to you. Letting it reliably do things (or even reliably return true responses) isn't even clearly something the same technology can do.
karthick892 t1_j7juo51 wrote
Is there any bot that would summarise the link?
worriedshuffle t1_j7jvjlu wrote
For the GRE our teacher said one of the easiest ways to get a high score was to have a strong ideology. Just be a Nazi, he said.
I did not end up using that advice but maybe if I did I would’ve done even better.
ddavidovic t1_j7jwwc1 wrote
I think there's a lot more work to be done on that front. I tried to use ChatGPT and perplexity.ai instead of Google Search. It works for common knowledge, but once you get into more complex and niche queries it just falls apart. They're both very happy to lie to you and make up stuff, which is a huge time waste when you're trying to get work done.
maizeq t1_j7jwzai wrote
I can understand the frustration of Meta/Google engineers when perspectives like yours proliferate everywhere.
Transformers were invented at Google. OpenAI is overwhelmingly a net consumer of AI research, and incredibly closed off about the few innovations they have actually made. There is a graph somewhere of the research output of the various labs showing that, despite OpenAI's 300-400 or so employees, their publicly released open-access research is a ridiculously tiny fraction of that of other research labs. Consider the damage this might do if their success convinces management at other tech labs to be more closed off with their AI research, further concentrating the ownership of AI in the hands of a single, or a select few, corporations. In this sense OpenAI is actively harming the democratisation of AI, which, given the previously unseen productivity-generating effects AI will have, seems like a dangerous place to be in.
impermissibility t1_j7jywnd wrote
Uh, I'm sorry the English classes wherever you went to school sucked!
artsybashev t1_j7k04qr wrote
If Xi Jinping, Putin and Trump have taught you anything, it's that being correct is absolutely useless. Just having some sort of a plan, coming up with a good story and some fact-sounding arguments is a lot more valuable than what the average person thinks. Nothing more is required to be one of the most influential people alive.
red75prime t1_j7k7hh0 wrote
I've run it thru GPT for your reading pleasure: "I like to tell people that GPT-3 is more like writing an essay for English class (or the SAT) than a research paper for a history class. It cares about grammatical correctness -- in other words, readability -- rather than accuracy or truth. For the SAT, they used to say "you can make up quotes", because they're grading your writing, not your content."
bartturner t1_j7k88ul wrote
> OpenAI is overwhelmingly a net consumer of AI research
Exactly. Not sure why people do not get this? Google has made many of the major fundamental AI breakthroughs from the last decade+.
So many fundamental things. GANs for example.
Mescallan t1_j7k8aot wrote
tbh I don't think we are going to get much out of Meta until they get close to a holodeck VR experience, or a mainstream-ready AR experience. I'm sure they could drop a chatbot in the next six months, but being able to compete with google/microsoft is going to be hard.
Apple is going to update Siri in two years with an LLM and act like they are the saviors of the universe.
Amazon is someone I see get left out of this a lot. They have the resources and funding to make Alexa a search/chat bot as well, and it's right up their alley.
bartturner t1_j7k8fnb wrote
Geeze. What a bunch of nonsense. ChatGPT would NOT even be possible without Google.
Google has made most of the major fundamental AI breakthroughs in the last decade+. Google leads in every layer of the AI stack without exception.
A big one is silicon. They started 8 years ago and are now on their fifth generation. Their fourth was setting all kinds of records.
https://blog.bitvore.com/googles-tpu-pods-are-breaking-benchmark-records
Mescallan t1_j7k8i30 wrote
ChatGPT isn't actually free right now; everyone just gets $18 of credits, which is far more than what anyone would actually use in ChatGPT, but if you are fine-tuning or analyzing bigger datasets you can burn through it pretty quickly.
Nhabls t1_j7ka1k3 wrote
OpenAI will not offer it for free either
Nhabls t1_j7kaa5a wrote
ChatGPT hasn't really "shipped" either. It's out free because they feel hemorrhaging millions per month is an okay cost for the research and PR they're getting out of it. It's not viable in the slightest.
Ill-Poet-3298 t1_j7kap8n wrote
Google is afraid to kill their ad business, so they're letting others pass them by. Classic business mistake. There are apparently a lot of Google stans going around telling everyone how Google invented AI, etc, but it really looks like they got caught flat footed on this one.
emerging-tech-reader t1_j7kh681 wrote
> given the volume of false information that chatGPT generates
It actually generates mostly accurate information. The longer you have the conversation the more it starts to hallucinate, but it is considerably more accurate than most people.
---AI--- t1_j7khq2x wrote
Which tech?
---AI--- t1_j7khxvx wrote
Poorly contained? What do you mean?
---AI--- t1_j7ki2wb wrote
Eh, so like humans
LeftToSketch t1_j7kj2gt wrote
Chat GPT is built on this: https://arxiv.org/abs/1706.03762
harharveryfunny t1_j7kjohr wrote
I tried perplexity.ai for the first time yesterday and was impressed by it. While it uses GPT-3.5, it's not exactly comparable to ChatGPT, since it's really an integration of Bing search with GPT-3.5, as you can tell by asking it about current events (and also by asking it about itself!). I'm not sure exactly how they've done the integration, but the gist of it seems to be that GPT/chat is being used as an interface to search, rather than ChatGPT, where the content itself is being generated by GPT.
Microsoft seems to be following a similar approach, per the Bing/Chat version that popped up and disappeared a couple of days ago. It was able to cite sources, which isn't possible for GPT-generated content, which has no source as such.
chief167 t1_j7kkx9g wrote
It's smart of Google to wait until Microsoft burns the 10 billion, then easily surpass it.
The hype is so painful at the moment; non-technical people and sales idiots are way overselling ChatGPT.
chief167 t1_j7kl03z wrote
GPT is largely built on Google research
[deleted] t1_j7kl3ux wrote
[removed]
chief167 t1_j7kl5jd wrote
Yeah OpenAI was founded to be... Well... open.
It's probably the most closed AI company in existence.
harharveryfunny t1_j7kmbzr wrote
OpenAI just got a second-round $10B investment from Microsoft, so that goes a ways... They are selling API access to GPT for other companies to use however they like, Microsoft has integrated Copilot (also GPT-based, fine-tuned for code generation) into their dev tools, and Microsoft is also integrating OpenAI's LLM tech into Bing. While OpenAI is also selling access to ChatGPT to end users, I doubt that's going to really be a focus for them or a major source of revenue.
harharveryfunny t1_j7knqfa wrote
OpenAI trained GPT on Microsoft Azure - it has zero to do with Google's TPU. While the "Attention Is All You Need" paper did come out of Google, it just built on models/concepts that came before. OpenAI have proven themselves plenty capable of innovating.
marvinv1 t1_j7kpqoj wrote
Yup, OpenAI expects to generate $200 million in revenue for 2023 and $1 billion for next year.
emerging-tech-reader t1_j7kptn9 wrote
I got a demo of some of the stuff happening.
The most impressive one is that they have GPT watching a meeting, taking minutes, and even crafting action items, emails, etc., all ready for you when you leave the meeting.
It will also offer suggestions to follow up on in meetings as they are ongoing.
Google has become the new AltaVista.
WokeAssBaller t1_j7kqhgl wrote
Yeah right, OpenAI is built on Google research, and cool, you worked a half-functioning chatbot into the worst messaging and search app, congrats
[deleted] t1_j7kri0a wrote
[removed]
marr75 t1_j7ksi6o wrote
They should be. I think LLMs will totally upset how content is indexed and accessed. It's one of the easiest and lowest stakes use cases for them, really.
Unfortunately, Google has such a huge incumbent advantage that they could produce the 5th or 6th best search specialized LLM and still be the #1 search provider.
emerging-tech-reader t1_j7ksup6 wrote
> OpenAI is built on google research
To my knowledge that is not remotely true. Can you cite where you got that claim?
OpenAI does take funding from and share research with a number of AI-related companies. I don't know if Google is on that list.
KleinByte t1_j7ktb3x wrote
Competitive gaming would be ruined if this happened.
WokeAssBaller t1_j7ktznh wrote
https://arxiv.org/pdf/1706.03762.pdf the paper that made all this possible.
Google has also been leading in research around transformers and NLP for some time. Not that they don't, in various ways, borrow from each other.
RobbinDeBank t1_j7ky0ju wrote
Nice try. What are you hiding at Google Brain?
RobbinDeBank t1_j7kykin wrote
Reddit refusing to implement any half-decent search engine and forcing us to use Google instead
RobbinDeBank t1_j7kyu86 wrote
Sadly that’s how the world works. It is run by people with no technical knowledge.
gurdijak t1_j7kz6n3 wrote
Whatever happened there
emerging-tech-reader t1_j7kzd3i wrote
> https://arxiv.org/pdf/1706.03762.pdf the paper that made all this possible.
That's reaching, IMHO. The original transformer was only around a few million parameters in size. It's not even in the same realm as ChatGPT.
You may as well say that MIT invented it, since Google's paper is based on methods created by them.
bartturner t1_j7l64gq wrote
> OpenAI trained GPT on Microsoft Azure - it has zero to do with Google's TPU.
Geeze. ChatGPT would NOT exist if not for Google because the underlying tech was invented by Google.
OpenAI uses other people's stuff instead of inventing things themselves like Google does.
Many of the big AI breakthroughs from the last decade+ have come from Google. GANs are another perfect example.
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
The TPUs are key in being able to bring a large language model to market at scale. Not training but the inference aspect.
drooobie t1_j7l7mo0 wrote
If you replaced the assistant in my google home with ChatGPT I would use it a lot more. Maybe I'm an exception, but I don't think so.
mirrorcoloured t1_j7l8e29 wrote
Wow I didn't expect numbers that high! I wonder if there's a large AA/reddit overlap, or if that's representative of search as a whole.
Google is showing a steady increase in reddit interest over time, and the second related query I see is "what is reddit". It's interesting that it's roughly linear and doesn't have the increasing growth that you'd expect from word-of-mouth spread.
WokeAssBaller t1_j7ladne wrote
Please. Without the transformer we would never have been able to scale, not to mention all of this being built on BERT as well. Then a bunch of companies scaled it further, including Google.
PM_ME_YOUR_PROFANITY t1_j7lajpy wrote
Have you seen the work which connects ChatGPT to WolframAlpha?
HoneyChilliPotato7 t1_j7lf6nt wrote
I would prefer it this way. Otherwise reddit would have too much power and eventually become like Google search
MysteryInc152 t1_j7lg6rm wrote
I think he's basically saying AIs like ChatGPT just output text at the base level. But that's really a moot point anyway. You can plug in LLMs as a sort of middle-man interface.
MysteryInc152 t1_j7lghig wrote
>No they're not. ChatGPT doesn't do anything, it just responds to you
Yes they are and you can get it to "do things" easily
RobbinDeBank t1_j7lih1k wrote
All hail our new big tech overlord Reddit (if they didn’t skip that class on search in college)
yaosio t1_j7lnkh9 wrote
If you look at what you.com does, they cite the claims their bot makes by linking to the pages the data comes from, but only sometimes. When it doesn't cite something, you can be sure that it's just making it up. In the supposed Bing leak it was doing the same thing, citing its sources.
If they can force it to always provide a source, and to not make a claim when it can't, that could fix it. However, there's still the problem that the model doesn't know what's true and what's false. Just because it can cite a source doesn't mean the source is correct. This is not something the model can learn by being told; learning by being told assumes its data is correct, which can't be assumed. A researcher could tell the model "all cats are ugly", which is obviously not true, but the model will say all cats are ugly because it was taught that. Models will need a way to determine on their own what is true and what isn't, and explain their reasoning.
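A toy version of that "no source, no claim" rule might look like the sketch below. Note it only checks that a claim can be matched to some retrieved source, not that the source itself is correct, which is exactly the remaining problem:

```python
# Toy "no source, no claim" filter: drop generated sentences that share too
# little vocabulary with any retrieved source snippet. This only checks that
# a claim is attributable, not that the cited source is actually true.

def word_overlap(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa), 1)

def filter_unsupported(answer_sentences: list[str],
                       sources: list[str],
                       threshold: float = 0.5) -> list[str]:
    kept = []
    for sent in answer_sentences:
        if any(word_overlap(sent, src) >= threshold for src in sources):
            kept.append(sent)  # at least weakly supported by some source
        # otherwise the sentence is dropped rather than stated as fact
    return kept
```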
harharveryfunny t1_j7lu67f wrote
What underlying tech are you talking about? Are you even familiar with the "Attention" paper and its relevance here? Maybe you think OpenAI use Google's TensorFlow? They don't.
GANs were invented by Ian Goodfellow while he was a student at U. Montreal, before he ever joined Google.
No - TPUs are not key to deploying at scale unless you are targeting Google Cloud. Google is a distant third in cloud market share, behind Microsoft and Amazon. OpenAI of course deploy on Microsoft Azure, not Google.
bartturner t1_j7lugv5 wrote
Geeze. Who do you think invented Transformers?
https://en.wikipedia.org/wiki/Transformer_(machine_learning_model)
NO!!! GANs were invented by Ian while he was working at Google. It is a pretty interesting story.
The vast majority of the major AI breakthroughs from the last decade+ came from Google.
OpenAI really does NOT do R&D. They mostly use the R&D of others, mostly Google's.
harharveryfunny t1_j7lvevz wrote
https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf
See the page 1 footnote: "Goodfellow did this work as a UdeM student".
bartturner t1_j7lwvdt wrote
Ha! Go listen to Lex's podcast. Ian explains it all and it was ALL while working at Google.
harharveryfunny t1_j7ly2jz wrote
And then he travelled back in time to go write that paper at U. Montreal?
Anyways, Schmidhuber was the real inventor ;-)
bartturner t1_j7lza3y wrote
Go listen to the podcast; Ian explains it all. Plus, no, Schmidhuber was NOT the inventor. It was Ian.
Go listen to the podcast and get back to me.
The key AI R&D from the last decade plus has all come from Google. Not from OpenAI and most definitely not from Microsoft.
hemphock t1_j7mtsvp wrote
Yeah, it's been like that for years. Idk, Reddit is just a well-moderated website with lots of small communities around a lot of topics. I think the lifecycle of its communities is the secret sauce. Communities will peak and then get crappy (pretty reliably imo), but you can just leave and join new ones.
I don't think the 70% is a good sample though; it's a poll of user responses on androidauthority.com.
TheEdes t1_j7mym42 wrote
You're deluded if you think SEO won't exist in an even worse way for LLMs. There are tons of papers about that; you can just mine for phrases that increase likelihoods just by observing outputs.
TheEdes t1_j7mysgv wrote
The other day I searched (on mobile) for something related to meme stocks, and the pills under the search bar showed News followed by a button that said (+ Reddit). I clicked it and it literally just added reddit to my search term.
throwaway957280 t1_j7n2hfh wrote
The transformer.
[deleted] t1_j7n2la5 wrote
[removed]
here_we_go_beep_boop t1_j7nfi60 wrote
Sure, it's just another arms race; doesn't mean that conventional search isn't broken tho
astrange t1_j7oduw3 wrote
This is wishful thinking. ChatGPT, being a computer program, doesn't have features it's not designed to have, and it's not designed to have this one.
(By designed, I mean has engineering and regression testing so you can trust it'll work tomorrow when they redo the model.)
I agree a fine-tuned LLM can be a large part of it, but virtual assistants already have LMs and obviously don't always work that well.
crazymonezyy t1_j7ojv39 wrote
> But yes, anyone reading please don't use ChatGPT instead of google search unless you don't care about the responses being made up.
The general public is not reading this sub, and ChatGPT is being sold to them by marketing and sales hacks without this disclaimer. We're way past the point of PSAs.
danielbln t1_j7ovvql wrote
What we all want is for Alexa/Siri/Home to have modern LLM conversational features, in addition to reliably turning our lights on/off or giving us the weather. Ever since ChatGPT came out, interacting with a home assistant feels even more like pulling teeth than it used to.
emerging-tech-reader t1_j7p3gn4 wrote
> Please without the transformer we would never be able to scale,
Without backpropagation we wouldn't have transformers. 🤷♂️
IamNotMike25 t1_j7p535w wrote
The ChatGPT playground is (currently) free;
the ChatGPT API has $18 of free credits:
theLanguageSprite t1_j7rgvme wrote
Artificial general intelligence at this time of human development at this level of hardware, localized entirely within your warehouse?
[deleted] t1_j7skdi7 wrote
[removed]
Nhabls t1_j8d7uj1 wrote
Closed beta invites aren't shipping, no.
And Bing isn't getting the full-fledged version unless Microsoft feels like bleeding millions per day.
[deleted] t1_ja6zwva wrote
[removed]
bballerkt7 t1_j7hdy06 wrote
No way they will offer it for free like OpenAI, right?