[R] Is there any work being done on reducing the size of the trained weight vector without also reducing computational overhead (e.g. pruning)? Submitted by Moose_a_Lini (t3_yjwvav) on November 2, 2022 at 5:48 AM in MachineLearning · 23 comments · 22 points
garridoq (t1_iuqkmbo) wrote on November 2, 2022 at 8:50 AM: Recurrent Parameter Generators (https://arxiv.org/abs/2107.07110) could be interesting for you. The idea is not to prune the architecture, but instead to draw the network's parameters from a limited shared bank of parameters. · 2 points
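The core idea can be sketched in a few lines: each layer's weight tensor is generated from one small shared bank, so storage shrinks while the forward pass still runs at full size. This is a minimal NumPy illustration, not the paper's exact construction — the per-layer permutation and sign-flip scheme here is a simplified stand-in for how RPG decorrelates layers that share parameters.

```python
import numpy as np

def make_layer_weights(bank, shape, seed):
    """Generate one layer's weight tensor from a shared parameter bank.

    Each layer reads the same bank through its own fixed random permutation
    and sign flips, so layers share storage but behave quasi-independently.
    (Illustrative scheme; the RPG paper's exact construction differs.)
    """
    rng = np.random.default_rng(seed)
    n = int(np.prod(shape))
    # Cycle through the bank if the layer needs more weights than the bank
    # holds -- this reuse is where the parameter saving comes from.
    idx = rng.permutation(np.arange(n) % bank.size)
    signs = rng.choice([-1.0, 1.0], size=n)
    return (signs * bank[idx]).reshape(shape)

# A bank of 1,000 stored scalars generates two layers totalling
# 256*128 + 128*10 = 34,048 weights: ~34x fewer stored parameters,
# yet the forward pass still performs the full matrix multiplies,
# so compute is unchanged -- matching what the question asks for.
bank = np.random.default_rng(0).standard_normal(1000)
w1 = make_layer_weights(bank, (256, 128), seed=1)
w2 = make_layer_weights(bank, (128, 10), seed=2)
print(w1.shape, w2.shape, bank.size)
```

In training, gradients from every generated weight would flow back to the same bank entries, which is why only the bank (plus the fixed permutations) needs to be stored.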