Submitted by mrx-ai t3_zjud5l in MachineLearning
mgostIH t1_j02vbuy wrote
Reply to comment by gwern in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
All the layers are trained independently at the same time. You can use gradients, but you don't need backprop, because you can use explicit descriptions: each layer's problem is to maximize ||Wx||^2 for good samples and minimize it for bad samples (each layer gets a normalized version of the previous layer's output).
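For concreteness, here is a minimal NumPy sketch of that per-layer rule as I understand it. The layer sizes, learning rate, and raw sign-based push on the goodness are my own illustrative choices (the actual paper, as I recall, passes the summed squared activities through a logistic loss with a threshold rather than pushing them up/down directly):

```python
# Sketch of the per-layer objective described above: each layer maximizes
# ||W x||^2 for positive ("good") samples and minimizes it for negative
# ("bad") samples, using the closed-form gradient d/dW ||W x||^2 = 2 (W x) x^T,
# so no cross-layer backprop is needed.
import numpy as np

rng = np.random.default_rng(0)

def normalize(h, eps=1e-8):
    """Normalize a hidden vector to unit length before it feeds the next layer."""
    return h / (np.linalg.norm(h) + eps)

class FFLayer:
    def __init__(self, d_in, d_out, lr=1e-3):
        self.W = rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_out, d_in))
        self.lr = lr

    def forward(self, x):
        return self.W @ x

    def local_update(self, x, positive: bool):
        """One local step: push goodness ||W x||^2 up for positive samples, down otherwise."""
        h = self.forward(x)
        grad = 2.0 * np.outer(h, x)            # gradient of ||W x||^2 w.r.t. W
        self.W += self.lr * grad if positive else -self.lr * grad
        return normalize(h)                     # normalized output becomes the next layer's input

# Every layer is updated during the same forward pass, each from its own local objective.
layers = [FFLayer(16, 32), FFLayer(32, 32)]
for step in range(100):
    x, positive = rng.normal(size=16), bool(step % 2)   # stand-in for real pos/neg data
    for layer in layers:
        x = layer.local_update(x, positive)
```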
The issue I see with this (besides generating good contrastive examples) is that I don't understand how it would lead a big network to discover interesting structure: circuits require multiple layers to do something interesting, but here each layer greedily optimizes its own evaluation. In some sense, we are hoping that the output of the earlier layers will orient things in a way that doesn't make it too hard for the later layers, which have only linear dynamics.