Submitted by xylont t3_zlm587 in MachineLearning
The standard method is to normalize the entire dataset using statistics computed from the training split, then feed it to the model. However, I’ve noticed that with this approach the model doesn’t handle values outside the range it was trained on very well.
So what about normalizing each sample independently to a fixed range, say 0 to 1, before feeding it in? Of course, the test data and the values to predict on would be normalized in the same way.
Would this change the neural network for better or worse?
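For concreteness, here’s a minimal NumPy sketch contrasting the two approaches (the helper names are just illustrative, not from any library): dataset-wise scaling uses training-set statistics, so out-of-range test values escape [0, 1], while per-sample scaling always lands in [0, 1] but throws away each sample’s absolute magnitude.

    import numpy as np

    # Dataset-wise min-max scaling: statistics come from the training set,
    # so test values outside the training range fall outside [0, 1].
    def fit_minmax(train):
        lo, hi = train.min(axis=0), train.max(axis=0)
        return lambda x: (x - lo) / (hi - lo)

    # Per-sample min-max scaling: each sample is rescaled to [0, 1] using
    # its own min and max, so absolute scale information is discarded.
    # (A constant sample would need an epsilon to avoid division by zero.)
    def per_sample_minmax(x):
        lo = x.min(axis=1, keepdims=True)
        hi = x.max(axis=1, keepdims=True)
        return (x - lo) / (hi - lo)

    train = np.array([[1.0, 2.0, 3.0],
                      [2.0, 4.0, 6.0]])
    test = np.array([[10.0, 20.0, 30.0]])  # far outside the training range

    scale = fit_minmax(train)
    print(scale(test))              # [[9. 9. 9.]] — blows up past 1
    print(per_sample_minmax(test))  # [[0.  0.5 1. ]] — bounded, scale lost

The trade-off is visible in the output: per-sample normalization makes every input fall in the trained range, but two samples that differ only in overall magnitude become identical after scaling.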
robot_lives_matter t1_j062iq1 wrote
I almost always normalise each sample now. Seems to work great for me.