
Nhabls t1_j6urk1b wrote

This isn't really relevant. Newer, larger LLMs generalize better than smaller ones, yet they also regurgitate training data more readily. It's not exclusive.

10

ItsJustMeJerk t1_j6uymag wrote

You're right, it's not exclusive. But I believe that while the absolute amount of data memorized might grow with scale, it occupies a smaller fraction of the output because it's only used where verbatim recitation is necessary rather than as a crutch (I could be wrong though). Anyway, I don't think that crippling the model by removing all copyrighted data from the dataset is a good long-term solution. You don't keep students from plagiarizing by preventing them from ever looking at a source related to what they're writing.

4
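
For what it's worth, the "fraction of output that is memorized" claim above could be tested empirically. A common rough proxy (not something described in this thread) is to count how many of a model's generated n-grams appear verbatim in the training corpus, and compare that fraction across model sizes. The sketch below is a minimal, hypothetical version of that check; the corpus, the example output, and the n-gram length are all assumptions for illustration.

    # Minimal sketch: estimate what fraction of an output string is verbatim
    # overlap with a training corpus, using n-gram membership as the proxy.
    # All data here is toy/hypothetical.
    from typing import Iterable, Set, Tuple


    def ngrams(tokens: list, n: int) -> Set[Tuple[str, ...]]:
        """Return the set of word n-grams in a token sequence."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


    def memorized_fraction(output: str, corpus: Iterable[str], n: int = 8) -> float:
        """Fraction of the output's n-grams that occur verbatim in the corpus."""
        out_ngrams = ngrams(output.split(), n)
        if not out_ngrams:
            return 0.0
        corpus_ngrams: Set[Tuple[str, ...]] = set()
        for doc in corpus:
            corpus_ngrams |= ngrams(doc.split(), n)
        return len(out_ngrams & corpus_ngrams) / len(out_ngrams)


    if __name__ == "__main__":
        corpus = ["the quick brown fox jumps over the lazy dog near the river bank"]
        output = "the quick brown fox jumps over the lazy dog and then it ran away"
        print(f"memorized fraction: {memorized_fraction(output, corpus, n=5):.2f}")

Running the same measurement on outputs from small and large models (over many prompts, with a much larger corpus index) would show whether the memorized fraction actually shrinks with scale, as the comment suggests.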