Submitted by pm_me_your_pay_slips t3_10r57pn in MachineLearning
Nhabls t1_j6urk1b wrote
Reply to comment by ItsJustMeJerk in [R] Extracting Training Data from Diffusion Models by pm_me_your_pay_slips
This isn't really relevant. Newer, larger LLMs generalize better than smaller ones, yet they also regurgitate training data more readily. The two aren't mutually exclusive.
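(For concreteness, a minimal sketch of the kind of regurgitation probe used in the extraction literature: prompt the model with a prefix from a training document and test whether greedy decoding reproduces the true continuation verbatim. The model name and the 50/50 token split below are illustrative assumptions, not anything fixed by this thread.)

```python
# Hedged sketch of a verbatim-memorization probe. "gpt2" and the
# prefix/suffix lengths are placeholder choices for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def is_memorized(document: str, prefix_len: int = 50, suffix_len: int = 50) -> bool:
    ids = tokenizer(document, return_tensors="pt").input_ids[0]
    if ids.shape[0] < prefix_len + suffix_len:
        return False  # document too short for this probe
    prefix = ids[:prefix_len].unsqueeze(0)
    true_suffix = ids[prefix_len:prefix_len + suffix_len]
    out = model.generate(
        prefix,
        max_new_tokens=suffix_len,
        do_sample=False,  # greedy decoding, the usual extraction setting
        pad_token_id=tokenizer.eos_token_id,
    )
    gen_suffix = out[0, prefix_len:prefix_len + suffix_len]
    # A token-for-token match over the whole suffix counts as memorized.
    return gen_suffix.shape[0] == suffix_len and bool((gen_suffix == true_suffix).all())
```

The regurgitation rate is then the fraction of sampled training documents for which this returns True; the point above is that this rate has been observed to rise with model scale even as generalization improves.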
ItsJustMeJerk t1_j6uymag wrote
You're right, they aren't mutually exclusive. But I believe that while the absolute amount of data memorized might go up with scale, it occupies a smaller fraction of the output, because verbatim recitation gets used only where it's actually needed rather than as a crutch (I could be wrong, though). Anyway, I don't think crippling the model by removing all copyrighted data from the dataset is a good long-term solution. You don't keep students from plagiarizing by preventing them from looking at any source related to what they're writing.
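(One hedged way to operationalize that "fraction of the output" idea: slide an n-token window over generated text and count windows that occur verbatim in the training corpus. `training_ngrams` is a hypothetical precomputed index, and the window size of 13 just echoes an overlap length used in some contamination studies; neither comes from the paper under discussion.)

```python
# Sketch of a "memorized fraction of output" metric. `training_ngrams` is a
# hypothetical precomputed set of n-grams; at real corpus scale you would
# use a suffix array or Bloom filter instead of an in-memory set.
def memorized_fraction(generated_tokens: list[str],
                       training_ngrams: set[tuple],
                       n: int = 13) -> float:
    windows = [tuple(generated_tokens[i:i + n])
               for i in range(len(generated_tokens) - n + 1)]
    if not windows:
        return 0.0
    hits = sum(w in training_ngrams for w in windows)
    return hits / len(windows)
```

On this view, the absolute count of memorized n-grams could grow with scale while this ratio shrinks, which is the claim being hedged above.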