Submitted by netw0rkf10w t3_10rtis6 in MachineLearning
puppet_pals t1_j6ygho0 wrote
ImageNet normalization is an artifact of the era of feature engineering. In the modern era you shouldn’t use it. It’s unintuitive and overfits the research dataset.
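For reference, "ImageNet normalization" here means standardizing each RGB channel with the per-channel mean and standard deviation computed on ImageNet. A minimal sketch in PyTorch/torchvision; the constants are the commonly used ImageNet statistics, and the pipeline itself is illustrative:

```python
from torchvision import transforms

# Commonly used ImageNet per-channel statistics (RGB, for inputs scaled to [0, 1]).
IMAGENET_MEAN = [0.485, 0.456, 0.406]
IMAGENET_STD = [0.229, 0.224, 0.225]

preprocess = transforms.Compose([
    transforms.ToTensor(),                               # uint8 [0, 255] -> float [0, 1]
    transforms.Normalize(IMAGENET_MEAN, IMAGENET_STD),   # subtract mean, divide by std
])
```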
nicholsz t1_j6yniui wrote
With data augmentation techniques (especially contrast or luminance randomization), wouldn't normalization end up being a no-op anyway?
netw0rkf10w OP t1_j6z15t0 wrote
I think normalization will be here to stay (maybe not the ImageNet one though), as it usually speeds up training.
nicholsz t1_j6z1jgm wrote
Oh, I meant fitting to the statistics of ImageNet / the training dataset. There's always got to be some kind of normalization.
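A sketch of what fitting normalization statistics to your own training set could look like, assuming the images are already a float tensor in [0, 1] with shape (N, C, H, W); the function names are illustrative:

```python
import torch

def per_channel_stats(images: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Per-channel mean and std over a batch shaped (N, C, H, W)."""
    mean = images.mean(dim=(0, 2, 3))
    std = images.std(dim=(0, 2, 3))
    return mean, std

def normalize(images: torch.Tensor, mean: torch.Tensor, std: torch.Tensor) -> torch.Tensor:
    """Standardize with the training set's own statistics instead of ImageNet's."""
    return (images - mean[None, :, None, None]) / std[None, :, None, None]
```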
netw0rkf10w OP t1_j6zbbkb wrote
Agreed!
puppet_pals t1_j701uqt wrote
>I think normalization will be here to stay (maybe not the ImageNet one though), as it usually speeds up training.
The reality is that you're tied to the normalization scheme of whatever you're transfer learning from (assuming you are transfer learning). Framework authors and people publishing weights should make normalization as easy as possible, typically via a 1/255.0 rescaling operation (or x/127.5 - 1; I'm indifferent, though I opt for 1/255 personally).
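A rough sketch of the two rescaling conventions mentioned above, in plain NumPy (the function names are just for illustration):

```python
import numpy as np

def rescale_unit(images: np.ndarray) -> np.ndarray:
    """Map uint8 pixels in [0, 255] to [0, 1] via x / 255.0."""
    return images.astype(np.float32) / 255.0

def rescale_symmetric(images: np.ndarray) -> np.ndarray:
    """Map uint8 pixels in [0, 255] to [-1, 1] via x / 127.5 - 1."""
    return images.astype(np.float32) / 127.5 - 1.0
```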
netw0rkf10w OP t1_j6zb957 wrote
If I remember correctly, it was first used in AlexNet, which started the deep learning era though. I agree that it doesn't make much sense nowadays, but it's still used everywhere :\