Submitted by theanswerisnt42 t3_10wtumf in MachineLearning
currentscurrents t1_j7sri62 wrote
Reply to comment by katadh in [Discussion] Cognitive science inspired AI research by theanswerisnt42
SNN-ANN conversion is a kludge: not only do you have to train an ANN first, but the converted SNN is incapable of learning anything new.
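To make the conversion idea concrete, here's a minimal sketch of the rate-coding trick behind most ANN-to-SNN conversion: each ReLU activation in the trained ANN is approximated by the firing rate of an integrate-and-fire neuron driven by a matching constant input. The function names and constants below are illustrative assumptions, not from any particular conversion toolkit.

```python
def relu(x):
    return max(0.0, x)

def if_neuron_rate(input_current, threshold=1.0, steps=1000):
    """Simulate a non-leaky integrate-and-fire neuron driven by a
    constant input and return its firing rate (spikes per step)."""
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += input_current          # integrate input into membrane potential
        if v >= threshold:          # fire, then reset by subtraction
            spikes += 1
            v -= threshold
    return spikes / steps

# The spike rate tracks the ReLU output (saturating at 1 spike/step):
for x in (-0.5, 0.0, 0.3, 0.7):
    print(relu(x), if_neuron_rate(x))
```

The catch this illustrates: the mapping is fixed at conversion time, so all the learning happened in the ANN — the SNN just replays it.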
Surrogate gradients are better! But they're still non-local and require backward passes, which means you're missing out on the massive parallelization you could achieve with local learning rules on the right hardware.
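For anyone unfamiliar with the surrogate gradient idea: the forward pass keeps the hard, non-differentiable spike threshold, while the backward pass swaps in a smooth pseudo-derivative so gradients can flow. The fast-sigmoid surrogate below is one common choice; the exact shape and slope used by any given SNN library are assumptions here.

```python
def spike_forward(v, threshold=1.0):
    """Forward pass: non-differentiable Heaviside step on membrane potential."""
    return 1.0 if v >= threshold else 0.0

def spike_surrogate_grad(v, threshold=1.0, slope=10.0):
    """Backward pass: replace the step's zero/undefined derivative with
    the derivative of a fast sigmoid, slope / (1 + slope*|v - th|)^2."""
    x = slope * abs(v - threshold)
    return slope / (1.0 + x) ** 2

# Forward gives a hard spike; backward gives a nonzero learning signal
# even when the neuron is just below threshold:
v = 0.8
print(spike_forward(v), spike_surrogate_grad(v))
```

Note the non-locality the comment is pointing at: that pseudo-derivative still has to be chained backwards through the whole network, layer by layer, which is what blocks fully parallel per-synapse updates.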
Local learning is the dream, and would have benefits for ANNs too: you could train a single giant model distributed across an entire datacenter or even multiple datacenters over the internet. Quadrillion-parameter models would be technically feasible - I don't know what happens at that scale, but I'd sure love to find out.
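A toy sketch of what "local" means here: in a Hebbian-style rule, each weight update depends only on the activity of its own pre- and post-synaptic neurons, so every synapse could update in parallel with no backward pass and no cross-datacenter gradient synchronization. Plain Hebbian learning with weight decay is an illustrative choice, not a claim about what distributed training would actually use.

```python
def hebbian_update(w, pre, post, lr=0.1, decay=0.01):
    """delta_w[i][j] = lr * pre[i] * post[j] - decay * w[i][j];
    every term is locally available at the synapse."""
    return [
        [w[i][j] + lr * pre[i] * post[j] - decay * w[i][j]
         for j in range(len(post))]
        for i in range(len(pre))
    ]

pre = [1.0, 0.0]              # presynaptic activity
post = [0.5, 1.0]             # postsynaptic activity
w = [[0.0, 0.0], [0.0, 0.0]]  # 2x2 weight matrix
w = hebbian_update(w, pre, post)
print(w)  # only synapses with an active pre-neuron strengthen
```

Since no update needs information from any other layer or machine, the rule shards trivially — which is the property that would make the datacenter-scale (or internet-scale) training in the comment above conceivable.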