Submitted by abhitopia t3_ytbky9 in MachineLearning
liukidar t1_iwc3rkb wrote
Reply to comment by abhitopia in [Project] Erlang based framework to replace backprop using predictive coding by abhitopia
> The interesting bit for me is not the exact correspondence with PC (as described in neuroscience) but rather the property that makes it suitable for asynchronous parallelisation, namely Local Synaptic Plasticity, which I believe still holds.
Indeed this still holds with all the definitions of PC out there (I guess that's why very different implementations such as FPA are still called PC). In theory, therefore, it is possible to parallelise all the computations across different layers.
However, it seems that deep learning frameworks such as PyTorch and JAX are not able to do this kind of across-layer parallelisation on a single GPU (I would be very glad if someone who knows more about this would like to have a chat on the topic; maybe I'm lucky and some JAX/PyTorch/CUDA developers stumble upon this comment :P).
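To make the locality point concrete, here is a minimal sketch (not the project's Erlang code, and not any particular PC formulation from the paper) of why the per-layer updates are independent: each layer's weight change needs only its own weights and the activities of the two adjacent layers. The layer sizes, the tanh nonlinearity, the learning rate and the function names are arbitrary choices for illustration.

```python
# Sketch only: a Hebbian-like local PC weight update, assuming a simple
# generative model where layer l's weights predict the activity of layer l+1.
import jax
import jax.numpy as jnp

def local_update(W, x_pre, x_post, lr=0.01):
    """Update W using only quantities local to this layer."""
    eps = x_post - W @ jnp.tanh(x_pre)      # local prediction error
    dW = jnp.outer(eps, jnp.tanh(x_pre))    # local weight gradient
    return W + lr * dW

# Toy network: activities xs[0..3] and weights Ws[l] predicting xs[l+1] from xs[l].
key = jax.random.PRNGKey(0)
sizes = [8, 16, 16, 4]
xs = [jax.random.normal(jax.random.fold_in(key, i), (n,)) for i, n in enumerate(sizes)]
Ws = [jax.random.normal(jax.random.fold_in(key, 10 + l), (sizes[l + 1], sizes[l]))
      for l in range(len(sizes) - 1)]

# With the activities held fixed, no update below depends on any other layer,
# so in principle these could all be dispatched concurrently.
new_Ws = [local_update(W, xs[l], xs[l + 1]) for l, W in enumerate(Ws)]
```

In practice, the loop in the last line is exactly the part that single-GPU frameworks tend to serialise (each small per-layer kernel runs one after another), which is the limitation discussed above.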
miguelstar98 t1_iwoeafv wrote
🖒 Noted. I'll take a look at it when I get some free time. Although someone should probably make a Discord for this...