amassivek
amassivek t1_j6opolw wrote
Reply to comment by blackkettle in [R] The Predictive Forward-Forward Algorithm by radi-cho
As the depth and width of the network grow, the computational advantage grows. Forward-only learning algorithms, such as FF and PFF, have this advantage.
There is also a compatibility advantage: forward-only learning algorithms work on resource-limited devices (edge devices) and on neuromorphic chips.
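To make the advantage concrete, here is a minimal sketch of an FF-style layer-local update, assuming PyTorch. The layer sizes, learning rate, threshold, and softplus loss are illustrative choices, not the papers' code. The point is structural: each layer trains on its own local "goodness" loss and activations are detached between layers, so no global backward pass or stored activation stack is needed, and memory stays flat as the network deepens.

```python
# Sketch: forward-only, layer-local training with an FF-style "goodness"
# objective. Assumes PyTorch; all sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

layers = [nn.Linear(784, 500), nn.Linear(500, 500)]
opts = [torch.optim.SGD(layer.parameters(), lr=0.03) for layer in layers]

def train_step(x_pos, x_neg, threshold=2.0):
    h_pos, h_neg = x_pos, x_neg
    for layer, opt in zip(layers, opts):
        a_pos = torch.relu(layer(h_pos))
        a_neg = torch.relu(layer(h_neg))
        # Local objective: positive data should have high "goodness"
        # (sum of squared activations), negative data low goodness.
        g_pos = a_pos.pow(2).sum(dim=1)
        g_neg = a_neg.pow(2).sum(dim=1)
        loss = torch.nn.functional.softplus(
            torch.cat([threshold - g_pos, g_neg - threshold])).mean()
        opt.zero_grad()
        loss.backward()  # the gradient stays inside this one layer
        opt.step()
        # Detach (and normalize) before feeding the next layer, so every
        # update is local and no activation stack is kept for a global
        # backward pass.
        h_pos = a_pos.detach()
        h_pos = h_pos / (h_pos.norm(dim=1, keepdim=True) + 1e-4)
        h_neg = a_neg.detach()
        h_neg = h_neg / (h_neg.norm(dim=1, keepdim=True) + 1e-4)
```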
For an analysis of efficiency, refer to Section IV: https://arxiv.org/abs/2204.01723
We demonstrate reasonable performance on CIFAR-10 and CIFAR-100 in that same paper (Section IV), so the performance gap may narrow over time.
For a review of forward-only learning, with an explanation of why it has these efficiency and compatibility advantages: https://amassivek.github.io/sigprop
amassivek t1_izoh41k wrote
Reply to comment by master3243 in [R] The Forward-Forward Algorithm: Some Preliminary Investigations [Geoffrey Hinton] by shitboots
There is a framework for learning with forward passes; a friendly and thorough tutorial is here: https://amassivek.github.io/sigprop
The most interesting insights from the framework:
- This algorithm provides an explanation for how neurons in the brain that lack error connections still receive learning signals.
- It works for continuous networks with Hebbian learning, which provides evidence for this algorithm as a model of learning in the brain.
- It works for spiking neural networks using only the membrane potential (aka voltage in hardware), which supports applying this algorithm for learning on neuromorphic chips.
The Signal Propagation framework paper: https://arxiv.org/abs/2204.01723. The Forward-Forward algorithm is an implementation of this framework; a rough sketch of the core idea follows below.
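As a rough illustration of the framework, here is a minimal sketch, assuming PyTorch. The learning signal is fed forward through the same layers as the input, and each layer updates from a local comparison of the two, so no error connections are needed. The label projection, the contrastive hinge loss, and all dimensions are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: Signal Propagation style learning, assuming PyTorch.
# The learning signal (a label projection here) travels the same forward
# path as the input; each layer learns locally by pulling an input's
# activations toward those of its matching signal and away from a
# mismatched one. All specifics below are illustrative assumptions.
import torch
import torch.nn as nn

label_proj = nn.Embedding(10, 784)  # fixed projection of labels into input space
layers = [nn.Linear(784, 256), nn.Linear(256, 256)]
opts = [torch.optim.SGD(layer.parameters(), lr=0.01) for layer in layers]

def train_step(x, y, margin=1.0):
    h_x = x
    h_t = label_proj(y).detach()  # learning signal enters alongside the input
    for layer, opt in zip(layers, opts):
        a_x, a_t = torch.relu(layer(h_x)), torch.relu(layer(h_t))
        # Local contrastive loss: match the correct (input, signal) pair,
        # mismatch a shuffled pair. No error signal crosses layer boundaries.
        pos = (a_x - a_t).pow(2).sum(dim=1)
        neg = (a_x - a_t[torch.randperm(a_t.shape[0])]).pow(2).sum(dim=1)
        loss = torch.relu(pos - neg + margin).mean()
        opt.zero_grad()
        loss.backward()  # the gradient stays inside this one layer
        opt.step()
        h_x, h_t = a_x.detach(), a_t.detach()
    # For a spiking variant, the same local comparison would be made on
    # membrane potentials (voltages) instead of rate activations.
```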
I am an author of this work. While I was presenting it at a reading group, one of the members pointed out the connection between Signal Propagation and Forward-Forward.
amassivek t1_j6owo9f wrote
Reply to comment by blimpyway in [R] The Predictive Forward-Forward Algorithm by radi-cho
Here is the reversed view, where ANNs provide inspiration for neuroscience to investigate the brain. Forward learning models offer a new perspective on how neurons without "feedback" or "learning" connections are still able to learn, a common scenario. We note this and present the conceptual framework for forward learning: https://arxiv.org/abs/2204.01723. This conceptual framework is applicable to neuroscience models, providing an investigative path forward.