comradeswitch t1_j33ntbi wrote
Absolutely. Because it's never just the input from humans- presented with an image and a label for it given by a user, the model is not limited to learning only the relationships that the human used to generate the label- the image is right there, after all. So when all goes well, the model can learn relationships in the data that humans are unable to because the human labels are used to guide learning on the source material.
Additionally, there are many ways to avoid having the model treat the labels as 100% true (i.e. the word of God) and instead allow for some labels being incorrect. In that case, it's entirely possible for the model to do better than the human(s) did, even on the same data.
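One hedged sketch of such an approach is a "soft bootstrapping" loss in the spirit of Reed et al. (2014), which blends the given label with the model's own current prediction so that an occasional wrong label doesn't have to be treated as gospel. The model-free snippet below is illustrative only; the beta value is an assumption, not a recommendation.

```python
import torch
import torch.nn.functional as F

def bootstrapped_ce(logits, labels, beta=0.95):
    """Cross-entropy against a mix of the provided label and the model's belief.

    beta = 1.0 recovers ordinary cross-entropy (labels treated as fully true);
    beta < 1.0 lets the model partially discount labels it finds implausible.
    """
    num_classes = logits.size(-1)
    one_hot = F.one_hot(labels, num_classes).float()
    with torch.no_grad():
        pred = F.softmax(logits, dim=-1)          # model's own belief, not backpropagated
    target = beta * one_hot + (1.0 - beta) * pred  # softened target distribution
    return -(target * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

# Usage with dummy logits and labels (batch of 4, 10 classes):
logits = torch.randn(4, 10, requires_grad=True)
labels = torch.tensor([3, 1, 7, 7])
loss = bootstrapped_ce(logits, labels)
loss.backward()
```

Simpler variants exist too, e.g. plain label smoothing via `nn.CrossEntropyLoss(label_smoothing=0.1)` in recent PyTorch versions; the common idea is just not assuming every annotation is perfect.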