Submitted by hardmaru t3_ys36do in MachineLearning
zimonitrome t1_iwbmzoq wrote
Reply to comment by maybelator in [R] ZerO Initialization: Initializing Neural Networks with only Zeros and Ones by hardmaru
Huber loss let's go.
maybelator t1_iwbpkjo wrote
Not if you want true sparsity!
zimonitrome t1_iwbst8p wrote
Can you elaborate?
maybelator t1_iwbxutj wrote
The Huber loss encourages the regularized variable to be close to 0. However, this loss is also smooth: the amplitude of the gradient decreases as the variable nears its stationary point. As a consequence, many coordinates end up close to 0 but not exactly 0. Achieving true sparsity then requires thresholding, which adds a lot of other complications.
In contrast, the amplitude of the gradient of the L1 norm (the absolute value in dimension 1) remains the same no matter how close the variable gets to 0. The functional has a kink at 0 (its subdifferential there contains a whole neighborhood of 0). As a consequence, if you use a well-suited optimization algorithm, the variable will be truly sparse, i.e. have a lot of exact 0s.
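For a concrete illustration of the difference, here is a minimal NumPy sketch (not from the thread; the problem size, regularization weight `lam`, Huber transition point `delta`, and step size are illustrative assumptions). It runs plain gradient descent on a Huber-regularized least-squares problem and proximal gradient descent (ISTA, i.e. gradient step plus soft thresholding) on the L1-regularized version: the Huber iterate typically ends up with many tiny but nonzero coordinates, while the proximal step produces exact zeros.

```python
import numpy as np

# Toy sparse regression problem (all sizes and constants are illustrative).
rng = np.random.default_rng(0)
n, d = 50, 20
A = rng.normal(size=(n, d))
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.5, 1.0]                 # only 3 truly nonzero coordinates
b = A @ x_true + 0.01 * rng.normal(size=n)

lam = 0.5                                     # regularization weight (assumed)
delta = 0.1                                   # Huber transition point (assumed)
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam / delta)  # safe step size

def huber_grad(x):
    # Gradient of the Huber penalty: linear near 0, so it vanishes as x -> 0.
    return np.where(np.abs(x) <= delta, x / delta, np.sign(x))

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1: sets small coordinates exactly to 0.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x_huber = np.zeros(d)
x_l1 = np.zeros(d)
for _ in range(5000):
    # Plain gradient descent on 0.5*||Ax - b||^2 + lam * Huber(x).
    x_huber -= step * (A.T @ (A @ x_huber - b) + lam * huber_grad(x_huber))
    # ISTA on 0.5*||Ax - b||^2 + lam * ||x||_1.
    x_l1 = soft_threshold(x_l1 - step * (A.T @ (A @ x_l1 - b)), step * lam)

print("Huber, exact zeros:", np.sum(x_huber == 0.0))   # typically 0
print("L1 (ISTA), exact zeros:", np.sum(x_l1 == 0.0))  # typically around d - 3
```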
zimonitrome t1_iwc14i5 wrote
Wow thanks for the explanation, it does make sense.
I had a preconception that all optimizers dealing with piecewise-linear functions (kinda like the L1 norm) still produce values that are merely close to 0.
I can see someone disregarding tiny values when exploiting said sparsity (pruning, quantization), but I didn't think the values would be exactly 0.
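As a toy illustration of that "disregard tiny values" workflow (the weights and threshold below are made up), magnitude pruning has to pick a cutoff and zero out everything beneath it, whereas an L1-trained vector already contains exact zeros and needs no such cutoff:

```python
import numpy as np

w = np.array([0.003, -1.2, 0.0005, 0.8, -0.002])   # values near, but not at, 0
threshold = 0.01                                    # arbitrary pruning cutoff
w_pruned = np.where(np.abs(w) < threshold, 0.0, w)
print(w_pruned)                                     # [ 0.  -1.2  0.   0.8 -0. ]
```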