[P] RWKV 14B Language Model & ChatRWKV: pure RNN (attention-free), scalable and parallelizable like Transformers
Submitted by bo_peng (t3_10eh2f3) on January 17, 2023 at 4:54 PM in MachineLearning · 19 comments · 110
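The title's claim that RWKV is an attention-free RNN yet still trainable in parallel like a Transformer comes from its time-mixing recurrence. Below is a minimal, illustrative sketch of that kind of exponential-decay recurrence evaluated in RNN (inference) mode. The names (w, u, k, v) follow the RWKV paper's conventions, but the exact normalization and numerical-stability handling in the released ChatRWKV code differ, so treat this as a simplified approximation rather than the actual implementation.

```python
# Illustrative sketch only -- NOT the actual ChatRWKV code.
# A simplified exponential-decay "WKV" recurrence of the kind
# RWKV's time-mixing layer uses, run token by token (RNN mode).
import numpy as np

def wkv_recurrent(w, u, k, v):
    """Sequential evaluation: O(T) time, O(1) recurrent state per channel.

    w: per-channel decay (positive), u: per-channel bonus for the
    current token, k/v: (T, C) key and value sequences.
    """
    T, C = k.shape
    out = np.zeros((T, C))
    num = np.zeros(C)   # running decayed sum of exp(k_i) * v_i over past tokens
    den = np.zeros(C)   # running decayed sum of exp(k_i) over past tokens
    for t in range(T):
        # current token receives an extra "bonus" weight exp(u + k_t)
        out[t] = (num + np.exp(u + k[t]) * v[t]) / (den + np.exp(u + k[t]))
        # decay the past state by exp(-w) and fold in the current token
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out
```

Because each output depends only on cumulative, exponentially decayed sums over the past, the same operation can also be computed over a whole training sequence at once rather than step by step, which is what makes the model parallelizable in the way the title describes; note this naive version skips the max-subtraction trick a real implementation would need for numerical stability.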
blimpyway (t1_j4ulemc) wrote on January 18, 2023 at 10:52 AM:
Prior to this, had you experimented with smaller (== more manageable) variants of this model, or were previous variants attempted directly at this scale? Permalink · 3