relevantmeemayhere t1_jcrotun wrote
Reply to comment by MysteryInc152 in [R] ChatGLM-6B - an open source 6.2 billion parameter Eng/Chinese bilingual LLM trained on 1T tokens, supplemented by supervised fine-tuning, feedback bootstrap, and RLHF. Runs on consumer grade GPUs by MysteryInc152
Mm, not really.
Bootstrapping is used to estimate the standard error of a statistic by resampling the data with replacement. From there we can derive tools like confidence intervals and other interval estimates.
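A minimal sketch of that idea, using only the standard library (the data and function names here are made up for illustration): resample with replacement, recompute the statistic each time, then read the standard error and a percentile interval off the bootstrap distribution.

```python
import random
import statistics

def bootstrap(data, stat=statistics.mean, n_boot=2000, seed=0):
    """Resample `data` with replacement and recompute `stat` each time."""
    rng = random.Random(seed)
    n = len(data)
    return sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))

# Hypothetical sample.
data = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5, 3.7, 2.9, 3.3, 2.6]

estimates = bootstrap(data)
se = statistics.stdev(estimates)  # bootstrap standard error of the mean
# Simple 95% percentile confidence interval from the bootstrap distribution.
lo = estimates[int(0.025 * len(estimates))]
hi = estimates[int(0.975 * len(estimates))]
print(f"SE ~ {se:.3f}, 95% CI ~ ({lo:.2f}, {hi:.2f})")
```

Note this quantifies the *uncertainty* of an estimate; at no point does it pick model parameters.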
Generally speaking, you do not use the bootstrap to tune your model's parameters or hyperparameters. You use cross-validation for that.