UnnAmmEdd t1_j0677tu wrote
>So how about normalizing each sample between a fixed range, say 0 to 1 and then sending them in.
How would you normalize each sample?
mr_birrd t1_j06rjju wrote
Min-max, I suppose.
UnnAmmEdd t1_j06rsxu wrote
But you need to get the min and max values from somewhere (the training dataset, for example?). That's the point: that's why you have to normalize over the entire dataset :P
mr_birrd t1_j06s0mg wrote
Well, uint8 goes up to 255, so you use those values. Images often come in that format, but ReLUs and other activations don't handle it well, so it's better to scale to a 0-1 range. Btw, min-max just subtracts the sample's min and then divides by (max - min). I don't see the problem.
Edit: Also think about why we do BatchNormalization
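A minimal sketch of the two options mentioned above (function names are illustrative, not from any library):

```python
import numpy as np

def scale_uint8(img):
    """Scale a uint8 image to [0, 1] using the known dtype range (divide by 255)."""
    return img.astype(np.float32) / 255.0

def min_max(x):
    """Per-sample min-max: subtract the sample's min, divide by (max - min)."""
    x = x.astype(np.float32)
    return (x - x.min()) / (x.max() - x.min())

img = np.array([[0, 128, 255]], dtype=np.uint8)
print(scale_uint8(img))  # [[0.         0.5019608  1.        ]]
print(min_max(img))      # same here, since the sample spans the full 0-255 range
```

For uint8 images the two coincide only when the sample actually contains both 0 and 255; otherwise per-sample min-max stretches each image to fill [0, 1] independently.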
UnnAmmEdd t1_j06vfhf wrote
Okay, it's nowhere stated that we are working on images. If we are, then of course dividing by 255 isn't wrong; it's the usual thing to do when casting uint8 to float.
But if we don't assume the input is an image (it may be a token embedding in NLP, or a row of tabular data), then the input values may come from (-inf, +inf), so we need a min/max to put boundaries on that interval.
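A sketch of that concern for unbounded tabular data (toy data, illustrative names): the min/max must be estimated from the training set, and test values can then land outside [0, 1].

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=(100, 3))  # toy tabular features, unbounded values
test = rng.normal(size=(10, 3))

# Per-feature min/max estimated from the training set only
lo, hi = train.min(axis=0), train.max(axis=0)

train_scaled = (train - lo) / (hi - lo)  # in [0, 1] by construction
test_scaled = (test - lo) / (hi - lo)    # not guaranteed to stay in [0, 1]

print(train_scaled.min(), train_scaled.max())  # 0.0 1.0
```

This is the standard train/test convention (the same idea behind scikit-learn's `MinMaxScaler.fit` on train, `transform` on test): statistics come from training data so the same mapping is applied everywhere.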
mr_birrd t1_j06vp16 wrote
Yeah, you take the min of the sample and the max of the sample. Whether that makes sense is another question.