the_mighty_skeetadon

the_mighty_skeetadon t1_jccdzgr wrote

Many of the interesting developments in deep learning have in fact made their way into Google + FB products, but those have not been "model-first" products. For example: ranking, personalization, optimization of all kinds, tech infra, energy optimization, and many more are driving almost every Google product and many FB ones as well.

However, this new trend of what I would call "Research Products" -- light layers over a model -- is a different mode of launching with higher risks, and those risks play out very differently for Google-scale big tech than they do for OpenAI. Example: ChatGPT would tell you how to cook meth when it first came out, and people loved it. Google got a tiny fact about JWST semi-wrong in one tiny sub-bullet of a Bard example, was widely panned, and lost $100B+ in market value.

14

the_mighty_skeetadon t1_iszprhg wrote

That can't be the only method, because if your model for generating fake papers differs significantly from somebody else's model, you will be both unable to detect those fake papers and unable to detect that you're failing.

A better approach is to have fake papers that get rejected from journals labeled as such, and to synthetically generate more fake papers with a wide variety of known approaches.
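
To illustrate the idea (this is my own minimal sketch, not a real pipeline): pool fakes from several known generation approaches plus real abstracts, label them, and train a detector on the mix so it isn't blind to papers that differ from any single generation model. All texts, generator names, and labels below are toy placeholders.

```python
# Minimal sketch: train a fake-paper detector on fakes from MULTIPLE known
# generation approaches. All texts are toy placeholders; in practice you'd
# load real abstracts, journal-rejected papers labeled as fake, and synthetic
# fakes produced by a variety of generators.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

real_abstracts = [
    "We study convergence rates of stochastic gradient descent on convex objectives.",
    "This paper measures the effect of data augmentation on image classification accuracy.",
]
fakes_from_generator_a = [  # e.g. a template-based paper generator (hypothetical)
    "The quantum synergy of blockchain neurons enables holistic cloud cognition.",
    "Our framework leverages synergistic paradigms to disrupt multimodal entropy.",
]
fakes_from_generator_b = [  # e.g. LLM-generated fakes (hypothetical)
    "We propose a novel method that achieves state of the art on every benchmark considered.",
    "Extensive experiments demonstrate our approach outperforms all baselines by wide margins.",
]

texts = real_abstracts + fakes_from_generator_a + fakes_from_generator_b
labels = [0] * len(real_abstracts) + [1] * (len(fakes_from_generator_a) + len(fakes_from_generator_b))

# Simple bag-of-words baseline; a stronger detector could swap in a fine-tuned
# transformer without changing the overall recipe.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
detector.fit(texts, labels)

# Likely flagged as fake (label 1) given its overlap with the fake examples above.
print(detector.predict(["A synergistic paradigm for holistic blockchain cognition."]))
```

The point of mixing generators (and journal-rejected fakes) is that the detector's notion of "fake" isn't tied to one generation model, which is exactly the failure mode described above.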

1