fundamental_entropy t1_jasohit wrote

Fine-tuning Flan-T5 XL or XXL can give you decent results. In my experience, these are the best open-source models to fine-tune. However, they won't match the results of larger models like GPT-3.5. But if you have millions of such reviews, then ChatGPT or GPT-3.5 may not be financially feasible.
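
For concreteness, here is a minimal fine-tuning sketch using Hugging Face's Seq2SeqTrainer. The review/summary pair, instruction prefix, and hyperparameters are placeholder assumptions, and in practice the XL/XXL checkpoints usually need gradient accumulation or a parameter-efficient method such as LoRA on top of this to fit in GPU memory:

```python
# Minimal sketch: fine-tuning Flan-T5 on review summarization.
# The data, prefix, and hyperparameters below are illustrative only.
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

model_name = "google/flan-t5-xl"  # swap in "google/flan-t5-xxl" if memory allows
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Stand-in data: (review, reference summary) pairs.
train_data = Dataset.from_dict({
    "review": ["The battery lasts two days but the screen scratches easily."],
    "summary": ["Great battery life, fragile screen."],
})

def preprocess(batch):
    # Flan models respond well to an instruction-style prefix.
    inputs = ["Summarize the review: " + r for r in batch["review"]]
    model_inputs = tokenizer(inputs, max_length=512, truncation=True)
    labels = tokenizer(text_target=batch["summary"], max_length=64, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = train_data.map(preprocess, batched=True,
                           remove_columns=train_data.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-reviews",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # XL/XXL rarely fit larger batches
    learning_rate=1e-4,
    num_train_epochs=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```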

1

average-joee OP t1_jaspyf6 wrote

Since you mentioned Hugging Face, what do you think of Pegasus for abstractive summarization?
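
For reference, this is roughly what a Pegasus summarization call looks like with `transformers`; `google/pegasus-xsum` is just one public checkpoint, picked here for illustration:

```python
# Minimal sketch: abstractive summarization with a Pegasus checkpoint.
# "google/pegasus-xsum" is one public option, picked here for illustration.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

review = "The battery lasts two days but the screen scratches easily."
inputs = tokenizer(review, truncation=True, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```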

1

fundamental_entropy t1_jasqy64 wrote

Flan models are trained on almost every open dataset available for generic English tasks. Recent research suggests that models trained to perform multiple tasks are better than models trained on only a single task (in fact, the ratios of the different tasks matter too; see the Flan 2022 paper). Flan-T5 beats T5 on almost every task, and Flan-T5 XXL sometimes matches GPT-3 at prompt-style generation.
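
To illustrate that prompt-style usage, here is a minimal zero-shot sketch with Flan-T5; the checkpoint size and prompt wording are assumptions to adapt to your reviews:

```python
# Minimal sketch: zero-shot, prompt-style summarization with Flan-T5.
# The checkpoint size and prompt wording are assumptions; tune both.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-xxl"  # base/large/xl trade quality for cost
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

review = "The battery lasts two days but the screen scratches easily."
prompt = f"Summarize the following product review in one sentence:\n{review}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```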

3