3z3ki3l t1_je9qt86 wrote
Reply to comment by TheAdvisorZabeth in [R] LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention by floppy_llama
If this isn’t copypasta, you’re having a manic episode. See a doctor, please.