aleph__one t1_izxu46b wrote
Reply to comment by arhetorical in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
No neuromorphic chip. The main reason is interpretability.
aleph__one t1_izwyrcf wrote
Reply to comment by arhetorical in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
Yea the surrogate gradient stuff works ok. Others that are decent:
1) STDP variants, especially dopamine-modulated STDP (emulates RL-like reinforcement); a rough sketch is below
2) for networks < 10M params, evolution strategies and similar zero-order solvers can work well operating directly on the weights
3) variational solvers can work if you structure the net + activations appropriately
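To make 1) concrete, here is a toy NumPy sketch of reward-modulated STDP on a single LIF layer. Everything in it (layer sizes, time constants, the Poisson-ish input, and the random reward signal) is an illustrative placeholder, not anything from a real pipeline: pre/post spike correlations accumulate in an eligibility trace, and the weights only change when a dopamine-like reward arrives.

    import numpy as np

    rng = np.random.default_rng(0)

    n_in, n_out = 100, 10
    w = rng.uniform(0.0, 0.5, size=(n_in, n_out))      # synaptic weights

    tau_pre, tau_post, tau_elig = 20.0, 20.0, 200.0    # trace time constants (ms)
    a_plus, a_minus = 0.01, 0.012                      # STDP amplitudes
    lr, dt, v_thresh = 0.1, 1.0, 1.0

    x_pre = np.zeros(n_in)            # presynaptic spike traces
    x_post = np.zeros(n_out)          # postsynaptic spike traces
    elig = np.zeros((n_in, n_out))    # eligibility trace, gated later by reward
    v = np.zeros(n_out)               # LIF membrane potentials

    for t in range(1000):
        pre_spikes = (rng.random(n_in) < 0.02).astype(float)   # Poisson-ish input

        # Leaky integrate-and-fire output layer
        v = 0.9 * v + pre_spikes @ w
        post_spikes = (v > v_thresh).astype(float)
        v[post_spikes > 0] = 0.0                               # reset after a spike

        # Exponentially decaying spike traces
        x_pre += -dt / tau_pre * x_pre + pre_spikes
        x_post += -dt / tau_post * x_post + post_spikes

        # STDP: pre-before-post potentiates, post-before-pre depresses.
        # Accumulate into an eligibility trace instead of changing weights directly.
        stdp = a_plus * np.outer(x_pre, post_spikes) - a_minus * np.outer(pre_spikes, x_post)
        elig += -dt / tau_elig * elig + stdp

        # Dopamine-like reward; a random placeholder here, would come from the task
        reward = 1.0 if rng.random() < 0.05 else 0.0

        # Weights only move when reward arrives, proportional to the eligibility trace
        w += lr * reward * elig
        np.clip(w, 0.0, 1.0, out=w)

The eligibility trace is what makes this RL-like: recent spike-timing correlations are remembered for roughly tau_elig milliseconds and only get consolidated into the weights when the reward signal shows up.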
aleph__one t1_izwxy4q wrote
Reply to comment by captain_arroganto in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
Unfortunately, beginner literature on this stuff is virtually nonexistent. Your best bet is to read papers and experiment.
aleph__one t1_izwos7v wrote
Reply to comment by sea-shunned in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
I use a custom SNN variant in production on real use cases, and the way we train them is very similar to the FF proposal. Most people just assume SNNs are impossible to train because SGD isn't immediately available, when in reality there are dozens of ways to train SNNs to solid performance.
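For context on the FF comparison, here is a minimal PyTorch sketch of the layer-local objective from Hinton's Forward-Forward paper, not the commenter's production method: each layer maximizes a "goodness" score (sum of squared activations) on positive data and minimizes it on negative data, so no gradients propagate between layers. The layer sizes, threshold, and random batches are placeholders.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FFLayer(nn.Module):
        def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
            super().__init__()
            self.linear = nn.Linear(d_in, d_out)
            self.threshold = threshold
            self.opt = torch.optim.Adam(self.parameters(), lr=lr)

        def forward(self, x):
            # Normalize so goodness from the previous layer can't leak through
            x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
            return F.relu(self.linear(x))

        def train_step(self, x_pos, x_neg):
            g_pos = self.forward(x_pos).pow(2).sum(dim=1)   # goodness on positive data
            g_neg = self.forward(x_neg).pow(2).sum(dim=1)   # goodness on negative data
            # Push positive goodness above the threshold, negative below it
            loss = F.softplus(torch.cat([self.threshold - g_pos,
                                         g_neg - self.threshold])).mean()
            self.opt.zero_grad()
            loss.backward()      # gradients stay local to this layer
            self.opt.step()
            # Detach so the next layer trains on activations, not gradients
            return self.forward(x_pos).detach(), self.forward(x_neg).detach()

    # Usage: stack layers and train each one greedily on (positive, negative) batches
    layers = [FFLayer(784, 500), FFLayer(500, 500)]
    x_pos = torch.rand(64, 784)     # placeholder positive batch
    x_neg = torch.rand(64, 784)     # placeholder negative batch
    for layer in layers:
        x_pos, x_neg = layer.train_step(x_pos, x_neg)

In the actual FF setup the positive batch is real data with the correct label embedded in the input and the negative batch uses a wrong label; random tensors are used here only so the snippet runs end to end.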
aleph__one t1_j01c1kf wrote
Reply to comment by ChuckSeven in [D] G. Hinton proposes FF – an alternative to Backprop by mrx-ai
Yea, I was thinking the same thing. I teach some of this stuff at the graduate level, but it's tough for newcomers to get used to even in that setting.