Competitive-Rub-1958 t1_iwqmaic wrote
Reply to comment by ChuckSeven in [R] RWKV-4 7B release: an attention-free RNN language model matching GPT-J performance (14B training in progress) by bo_peng
It does need more parameters to compensate (for instance, it has nearly a billion more parameters than GPT-J-6B without substantial performance gains), and it loses out on LAMBADA (ignoring the weighted average, since I don't really see the point of the weighting; it distorts the metrics).
It's an extremely interesting direction, but I fear that as you scale this model the scaling curve might start to flatten out, much like other RNN rewrites/variants. Hope further research is able to pinpoint the underlying issue and fix it. Till then, best of luck to OP! 👍
bo_peng OP t1_iwua2xh wrote
RWKV 7B is faster than GPT-J 6B, and RWKV actually scales great :)
If you check the table, RWKV is better than GPT-Neo on everything at 3B (while smaller RWKV models lag behind on LAMBADA).
But GPT-J uses rotary position embeddings and is therefore noticeably better than GPT-Neo, so I expect RWKV to surpass it at 14B.
Moreover, RWKV 3B becomes stronger after being trained on more tokens, and I am doing the same for the 7B model.
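For readers unfamiliar with the rotary embeddings mentioned above: they rotate pairs of query/key channels by a position-dependent angle, so attention dot products encode relative position. A minimal sketch (the function name and the split-half channel pairing here are my own choices for illustration; GPT-J's actual implementation pairs channels differently and only applies rotation to part of each head):

```python
import torch

def rotary_embed(x, base=10000):
    # x: (seq_len, dim) with even dim.
    # Rotates channel pairs by angles that grow with position, so that
    # q . k depends on the *relative* offset between positions.
    seq_len, dim = x.shape
    half = dim // 2
    # per-pair rotation frequencies, from fast to slow
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)      # (half,)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs   # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # standard 2D rotation applied to each channel pair
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)
```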