Submitted by thomasahle t3_118gie9 in MachineLearning
thomasahle OP t1_j9iq4rz wrote
Reply to comment by cthorrez in Unit Normalization instead of Cross-Entropy Loss [Discussion] by thomasahle
Should have said Accuracy.
Only MNIST though. Error on a simple linear model went from 3.8% to 1.2% on average, with an 80%-20% train/test split. So in no way amazing, just interesting.
Just wondered if other people had experimented more with it, since it also trains a bit faster.
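For anyone curious, here is a minimal NumPy sketch of one plausible interpretation of the idea: L2-normalize the model's output vector and maximize the component at the true class (i.e. cosine similarity with the one-hot target), instead of using softmax cross-entropy. The function name and exact formulation are my own assumptions, not a spec of what was actually trained.

```python
import numpy as np

def unit_norm_loss(logits, targets):
    # Hypothetical loss: L2-normalize each row of logits so it lies on
    # the unit sphere, then take the negative mean of the component at
    # the true class. Maximizing that component is the same as maximizing
    # cosine similarity with the one-hot target vector.
    unit = logits / np.linalg.norm(logits, axis=1, keepdims=True)
    return -unit[np.arange(len(targets)), targets].mean()

# Toy usage on made-up logits for a 2-class problem.
logits = np.array([[3.0, 4.0],
                   [0.0, 5.0]])
targets = np.array([1, 1])
print(unit_norm_loss(logits, targets))  # -0.9
```

Unlike cross-entropy, this doesn't push the wrong-class components toward negative infinity; it only rewards the angle of the output vector, which might explain the slightly faster training.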
cthorrez t1_j9ir0lx wrote
Have you tried it with say an MLP or small convnet on cifar10? I think that would be the next logical step.