Submitted by Feeling_Card_4162 t3_10sw0q1 in MachineLearning
ID4gotten t1_j74esuq wrote
I think you might be a little too in love with words like "neuromodulatory", while overlooking whether a simple deep FF network might already achieve what you're proposing. Add a layer, with its nodes and weights, and you get this "modulatory" effect through linear combinations feeding the subsequent layers. Maybe I'm not grasping your intent, but if you can reduce the idea to math, you can then try to prove it's something that isn't already achieved through FF and backprop.
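To make the point above concrete, here is a minimal numpy sketch comparing an explicit multiplicative "modulatory" signal against a plain extra feed-forward layer acting on the concatenated inputs. All shapes and weights are hypothetical placeholders; the point is only that a nonlinear layer over linear combinations of both signals can, in principle, learn such interactions.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=4)   # feature vector
m = rng.normal(size=4)   # "modulatory" signal

# Explicit multiplicative modulation: m gates x elementwise.
modulated = x * m

# A standard FF layer over the concatenation [x, m]: each hidden unit is a
# nonlinear function of a linear combination of both x and m, which (given
# enough width) can approximate multiplicative interactions.
W = rng.normal(size=(8, 8))            # hypothetical hidden weights
h = np.tanh(W @ np.concatenate([x, m]))
out = rng.normal(size=(4, 8)) @ h      # hypothetical readout layer

print(modulated.shape, out.shape)      # both paths yield 4-dim outputs
```

This doesn't settle whether the FF route is as *efficient* (that's the OP's concern below), only that the expressivity argument needs to be made in math rather than terminology.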
Feeling_Card_4162 OP t1_j74gohh wrote
The point is to be more efficient and dynamic than a normal FF network with backpropagation.
dancingnightly t1_j76uuee wrote
In this goal, you may find Mixture of Experts architectures interesting.
I like your idea. I have always thought too that in ML we are trying to replicate one human on one task with the world's data for that task, or, more recently, one human on many tasks.
But older ideas about replicating societies and communication, for one or many tasks, could be equally or more effective, and your proposal heads in that direction. There is a library called GeNN which is pretty useful for these experiments, although it's a little slow due to its deliberately true-to-biology design.
Feeling_Card_4162 OP t1_j77oir0 wrote
Is that the mixture-of-experts sparsity method? I've looked into that a little before. It's an interesting and useful design for improving representational capacity, but it still imposes very specific constraints on the kinds of sparsity mechanisms available, and thus limits the potential improvements to the design. I hadn't heard of the GeNN library; it sounds useful, especially for theoretical understanding. I'll check it out. Thanks for the suggestion 😊
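For readers following the exchange, the sparsity method being referred to is top-k gated Mixture of Experts (in the spirit of the sparsely-gated MoE literature). Below is a minimal numpy sketch of the routing mechanics; the experts, gate weights, and dimensions are random placeholders, not anyone's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, d, k = 4, 8, 2
# One linear map per expert (stand-in for a real expert sub-network).
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
W_gate = rng.normal(size=(n_experts, d))  # hypothetical gating weights

def moe(x):
    logits = W_gate @ x
    top = np.argsort(logits)[-k:]            # route only to the k best experts
    w = np.exp(logits[top])
    w /= w.sum()                             # softmax over the selected experts
    # Only k of n_experts run per input -- that is the "specific constraint"
    # on sparsity mentioned above: sparsity is fixed to expert granularity.
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

y = moe(rng.normal(size=d))
print(y.shape)  # (8,)
```

The hard-coded top-k routing illustrates the commenter's point: the mechanism is effective, but the *form* of sparsity (per-expert, fixed k) is baked into the design.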