cthorrez t1_j9iq35y wrote

> test loss decreased

What function are you using to evaluate test loss? Cross entropy, or this norm function?

1

thomasahle OP t1_j9iq4rz wrote

I should have said accuracy.

Only MNIST though. It went from 3.8% error with a simple linear model to 1.2%, on average, with an 80%-20% train/test split. So in no way amazing, just interesting.

I just wondered if other people have experimented more with it, since it also trains a bit faster.
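For reference, the setup was roughly along these lines. This is just a minimal sketch, not the exact code, and `norm_loss` here is a generic L2-style placeholder for the loss from the post:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

# Simple linear model: 784 -> 10, no hidden layers.
model = torch.nn.Linear(28 * 28, 10)

# 80%-20% train/test split of MNIST (exact split and seed may differ).
data = datasets.MNIST(".", download=True, transform=transforms.ToTensor())
train_set, test_set = random_split(data, [48000, 12000])
train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = DataLoader(test_set, batch_size=1000)

def norm_loss(logits, target):
    # Placeholder for the norm-based loss from the post:
    # squared L2 distance between raw outputs and the one-hot target.
    one_hot = F.one_hot(target, num_classes=10).float()
    return ((logits - one_hot) ** 2).sum(dim=1).mean()

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(10):
    for x, y in train_loader:
        opt.zero_grad()
        logits = model(x.view(x.size(0), -1))
        loss = norm_loss(logits, y)  # swap in F.cross_entropy(logits, y) to compare
        loss.backward()
        opt.step()

# Evaluation is by accuracy (argmax), not by the training loss.
correct = total = 0
with torch.no_grad():
    for x, y in test_loader:
        pred = model(x.view(x.size(0), -1)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
print(f"test error: {100 * (1 - correct / total):.1f}%")
```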

2

cthorrez t1_j9ir0lx wrote

Have you tried it with, say, an MLP or a small convnet on CIFAR-10? I think that would be the next logical step.
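Something like this would do as a starting point. Just a sketch with a placeholder architecture and hyperparameters, with the loss swapped in one place:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Small convnet for CIFAR-10 (placeholder architecture, not from the post).
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 10),
)

train_set = datasets.CIFAR10(".", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for x, y in loader:
    opt.zero_grad()
    logits = model(x)
    # Plug in either cross entropy or the norm-based loss here
    # to compare the two on a non-linear model.
    loss = nn.functional.cross_entropy(logits, y)
    loss.backward()
    opt.step()
```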

1