Submitted by iknowjerome t3_yrfzcf in MachineLearning
Gaudy_ t1_ivuhtyx wrote
Thanks a lot. It feels like every six months some company writes an article about how they fixed a large number of annotation errors in various public datasets, yet they almost never release the result. Not so this time; looking forward to testing it out.
iknowjerome OP t1_ivuua0z wrote
Looking forward to hearing what you think. Just to be clear, I'm always reluctant to call one dataset better than another, because it always depends on what you're trying to achieve with it. With Sama-Coco, we tried to fix misclassification errors where possible, but we also put a significant amount of effort into drawing precise polygons around the objects of interest because of experiments we are currently running. And, of course, we wanted to capture as many instances of the COCO classes as possible. This resulted in a dataset with close to 25% more object instances than the original COCO 2017 dataset. But that's not to say we solved all "errors" in COCO. :)
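If anyone wants to sanity-check the instance counts themselves, here's a minimal sketch using the pycocotools COCO API, assuming Sama-Coco is distributed as standard COCO-format JSON and reuses the original category ids; the file paths are placeholders, not the actual release layout.

```python
# Minimal sketch: compare object-instance counts between the original
# COCO 2017 annotations and a relabelled set such as Sama-Coco.
# Assumes both files follow the standard COCO JSON schema; paths are placeholders.
from pycocotools.coco import COCO

original = COCO("annotations/instances_train2017.json")   # original COCO 2017
relabelled = COCO("sama_coco/instances_train2017.json")   # hypothetical Sama-Coco path

print("original instances:  ", len(original.getAnnIds()))
print("relabelled instances:", len(relabelled.getAnnIds()))

# Per-class breakdown across the 80 COCO categories,
# assuming the relabelled set keeps the same category ids.
for cat in original.loadCats(original.getCatIds()):
    orig_n = len(original.getAnnIds(catIds=[cat["id"]]))
    new_n = len(relabelled.getAnnIds(catIds=[cat["id"]]))
    print(f"{cat['name']:>15}: {orig_n:6d} -> {new_n:6d}")
```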