ScientiaEtVeritas t1_jcbiupk wrote

It's not only about the model releases but also the research details. With them, others can replicate and improve on the results, which might also lead to more commercial products and open-source models with less restrictive licenses. In general, AI progress is certainly fastest when everyone shares their findings. Keeping and patenting them, on the other hand, actively hinders progress.

9

ScientiaEtVeritas t1_jcahkze wrote

I think we should value what Meta & Google are doing much more. While they also don't release every model (see Google's PaLM, LaMDA), or release some only under non-commercial licenses upon request (see Meta's OPT, LLaMA), they are at least very transparent when it comes to ideas, architectures, training procedures, and so on.

OpenAI itself changed a lot, from being open to being closed, but what's worse is that OpenAI could cause the whole culture of AI research to change as well, which is sad and pretty ironic considering its name. That's why I'm generally not very supportive of OpenAI. So, as a research community, we should largely ignore OpenAI (in fact, they proactively opted out of it) and instead value and amplify open research from Meta, Google, Hugging Face, Stability AI, real non-profits (e.g., EleutherAI), and universities. We need a counterbalance now.

580