
Akimbo333 t1_j77zzkk wrote

Reply to comment by Nmanga90 in Infinite police by crap_punchline

But I don't understand. How does following directions make it better?

1

Nmanga90 t1_j784rkz wrote

What exactly don’t you understand?

Following instructions makes it better because these models are predictive by nature. They don't understand what you're saying; they're trained to predict the text that follows the input. In effect, a base model carries an implicit prompt of "what follows this input:". That's much less useful than following instructions, because in the real world there's less money/productivity to be gained by predicting the next text sequence and more to be gained by completing the tasks you ask it to.
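To make that concrete, here's a rough sketch using Hugging Face's transformers library. I'm using gpt2 as the base predictive model and google/flan-t5-small as the instruction-tuned one purely for illustration; they're small stand-ins, not the actual GPT-3/InstructGPT models:

```python
from transformers import pipeline

# NOTE: gpt2 and flan-t5-small are small illustrative stand-ins
# for a base predictive model vs. an instruction-tuned model.

# Base model: purely predictive, so it just continues the text.
base = pipeline("text-generation", model="gpt2")
out = base("Write a haiku about the ocean.", max_new_tokens=30)
print(out[0]["generated_text"])  # typically rambles on instead of writing a haiku

# Instruction-tuned model: treats the input as a task to perform.
instruct = pipeline("text2text-generation", model="google/flan-t5-small")
out = instruct("Write a haiku about the ocean.", max_new_tokens=30)
print(out[0]["generated_text"])  # attempts the requested haiku
```

Same prompt both times; the base model just asks "what text comes next?", while the instruct model asks "what does this request want done?".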

1

Akimbo333 t1_j7853be wrote

Oh ok, I see now. Thanks for explaining. Maybe they'll make a solveGPT that can actually solve things someday lol!

1

Nmanga90 t1_j785o7a wrote

Haha, AWS actually just released one of these 2 days ago that's waaaaay smaller but outperforms GPT-3 on reasoning tasks.

Here is the link: https://arxiv.org/abs/2302.00923

1

Akimbo333 t1_j786lmu wrote

Wow, that's so cool! To get proto-AGI, we definitely need an all-in-one multimodal LLM.

1

Nmanga90 t1_j78du1g wrote

Just out of curiosity, what is your education on the subject? I find it kind of strange, or I guess inconsistent, that you're talking about multimodal LLMs and their necessity but don't know about OPT, InstructGPT, or why an instruct model would be better than a purely predictive one.

1

Akimbo333 t1_j78i2ia wrote

I have a limited programming background, and I was out of date on GPT models. For a while I thought it would be better to have a predictive model that can plan ahead; at least, that was my mindset.

1