liukidar t1_iwc3rkb wrote

> The interesting bit for me is not the exact correspondence with PC (as described in neuroscience), but rather the property that makes it suitable for asynchronous parallelisation, namely Local Synaptic Plasticity, which I believe still holds.

Indeed, this still holds for all the definitions of PC out there (I guess that's why very different implementations such as FPA are still called PC). In theory, therefore, it is possible to parallelise the computations across different layers.
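
To make the locality point concrete, here's a minimal sketch (plain NumPy, linear layers, made-up names and shapes; value-node inference is omitted): each weight update needs only that layer's own prediction error and the presynaptic activity, so the per-layer updates in the loop are mutually independent.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [8, 16, 16, 4]  # arbitrary layer sizes
W = [rng.normal(size=(dims[l + 1], dims[l])) * 0.1 for l in range(len(dims) - 1)]
x = [rng.normal(size=d) for d in dims]  # value nodes; x[0]=input, x[-1]=target

def pc_weight_step(W, x, lr=0.01):
    # Prediction error of each layer: depends only on the two adjacent nodes.
    e = [x[l + 1] - W[l] @ x[l] for l in range(len(W))]
    # Each update touches only e[l] and x[l] (no global backward pass),
    # so the iterations of this loop are independent and could in
    # principle run concurrently, one per layer.
    for l in range(len(W)):
        W[l] += lr * np.outer(e[l], x[l])
    return e

errors = pc_weight_step(W, x)
```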

However, it seems that deep learning frameworks such as PyTorch and JAX cannot exploit this kind of parallelism on a single GPU (I would be very glad to chat with someone who knows more about this; maybe I'm lucky and some JAX/PyTorch/CUDA developers stumble upon this comment :P)
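
For concreteness, here's a hedged sketch of the kind of single-GPU parallelism I mean, using one CUDA stream per layer in PyTorch (all tensor names and sizes are made up). Even though the launches are data-independent, my understanding is that whether the kernels actually overlap depends on their size and on GPU occupancy, which might be why no speedup shows up in practice.

```python
import torch

n_layers, dim, lr = 4, 512, 0.01
W = [torch.randn(dim, dim, device="cuda") for _ in range(n_layers)]
x = [torch.randn(dim, device="cuda") for _ in range(n_layers + 1)]
e = [x[l + 1] - W[l] @ x[l] for l in range(n_layers)]

# One CUDA stream per layer: the per-layer updates share no tensors,
# so the kernels *may* overlap on the device.
streams = [torch.cuda.Stream() for _ in range(n_layers)]
for l, s in enumerate(streams):
    with torch.cuda.stream(s):
        W[l].add_(lr * torch.outer(e[l], x[l]))
torch.cuda.synchronize()  # wait for all streams before reading W
```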

3

miguelstar98 t1_iwoeafv wrote

🖒 Noted. I'll take a look at it when I get some free time. Although someone should probably make a Discord for this...

2

abhitopia OP t1_iwrtr60 wrote

It's a good idea. I am still reading the papers on the subject, but I can create a Discord if it's helpful.

1

liukidar t1_iwu5qgv wrote

Sounds good. In that case I can provide more details about the issue.

1