Submitted by xylont t3_zlm587 in MachineLearning
The standard method is to normalize the entire training set and then feed it to the model. However, I've noticed that with this approach the model doesn't work well on values outside the range it was trained on.
So how about normalizing each sample independently to a fixed range, say 0 to 1, and then feeding those in?
Of course, the test data and the values to predict on would be normalized in the same way.
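A minimal sketch of what I mean (assuming NumPy, one sample per row; the function name is just illustrative):

```python
import numpy as np

def normalize_per_sample(x, eps=1e-8):
    """Min-max scale each sample (row) independently to [0, 1]."""
    lo = x.min(axis=1, keepdims=True)
    hi = x.max(axis=1, keepdims=True)
    return (x - lo) / (hi - lo + eps)  # eps guards against constant rows

# Train and test rows would both go through the same function.
train = np.array([[1.0, 5.0, 9.0], [10.0, 50.0, 90.0]])
print(normalize_per_sample(train))  # both rows become [0.0, 0.5, 1.0]
```

Note that in this example both rows map to the same values, since each sample is scaled against its own min and max.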
Would it change the neural network for the better or worse?
killver t1_j069atv wrote
This is already done in computer vision most of the time by just dividing the pixel values by 255. You can also do true per-sample normalization by, say, dividing each sample by its own maximum value.
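A rough sketch of both, assuming NumPy arrays (shapes and names are just for illustration):

```python
import numpy as np

# Computer-vision style: fixed global scale, since uint8 pixels live in [0, 255].
images = np.random.randint(0, 256, size=(4, 32, 32), dtype=np.uint8)
images_scaled = images.astype(np.float32) / 255.0

# Per-sample alternative: divide each sample (row) by its own max.
x = np.random.rand(4, 10).astype(np.float32) * 100
x_scaled = x / (x.max(axis=1, keepdims=True) + 1e-8)
```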
But as always, there is no free lunch. Just try all the options and see what works better for your problem.