Submitted by Feeling_Card_4162 t3_10sw0q1 in MachineLearning
Feeling_Card_4162 OP t1_j74gohh wrote
Reply to comment by ID4gotten in [R] Topologically evolving new self-modifying multi-task learning algorithms by Feeling_Card_4162
The point is to be more efficient and dynamic than a standard feed-forward network trained with backpropagation
dancingnightly t1_j76uuee wrote
In this goal, you may find Mixture of Experts architectures interesting.
I like your idea. I have always thought that in ML we try to replicate one human on one task using the world's data for that task, or, more recently, one human on many tasks.
But older ideas, replicating societies and communication for one or many tasks, could be equally or more effective, and your approach heads in that direction. There is a library called GeNN which is pretty useful for these experiments, although it's a little slow due to its deliberately true-to-biology design.
Feeling_Card_4162 OP t1_j77oir0 wrote
Is that the mixture-of-experts sparsity method? I've looked into that a little before. It's an interesting and useful design for improving representational capacity, but it still imposes very specific constraints on the kinds of sparsity mechanisms available, and that limits the potential improvements to the design. I hadn't heard of the GeNN library; it sounds useful, especially for theoretical understanding. I'll check it out. Thanks for the suggestion 😊
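For readers unfamiliar with the sparsity method being discussed: the core idea of a sparsely gated mixture of experts is that a small gating network scores all experts per input, but only the top-k experts actually run. A minimal NumPy sketch of that routing step (all names, sizes, and the linear experts are illustrative, not from this thread):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only.
N_EXPERTS, D_IN, D_OUT, TOP_K = 4, 8, 3, 2

# Each "expert" here is just a linear map; the gate scores experts per input.
experts = [rng.normal(size=(D_IN, D_OUT)) for _ in range(N_EXPERTS)]
gate_w = rng.normal(size=(D_IN, N_EXPERTS))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x):
    """Route input x through only the top-k scoring experts (sparse activation)."""
    scores = x @ gate_w                        # one score per expert
    top = np.argsort(scores)[-TOP_K:]          # indices of the k best experts
    weights = softmax(scores[top])             # renormalise over the chosen k
    # Only the selected experts compute; the others stay inactive this step.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.normal(size=D_IN)
y = moe_forward(x)
print(y.shape)  # (3,)
```

The constraint the comment refers to is visible here: the sparsity pattern is fixed by the top-k gating rule, so the network can't evolve other forms of sparse connectivity.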