quasiproductive t1_iqyl8rl wrote

> Is this going to make the function approximation strength of NNs any better than what it already is?

Probably not. It is already quite good. I don't think the absence of polynomial building blocks is the bottleneck.

I have seen very complicated high-dimensional manifolds in different settings (physics, finance, etc.) being learnt by simple but sometimes huge MLPs. Unless there is a strong inductive bias by which a polynomial layer helps with a particular ML problem, there isn't any strong reason to use one. Indeed, overfitting isn't function approximation gone bad but rather the opposite: the approximation is so strong that the network fits noise in the training data, as anyone who has trained a NN will know.
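
To make this concrete, here is a minimal sketch of the kind of fit described above, assuming PyTorch; the wiggly target function and layer sizes are arbitrary illustrations, not anything from a specific paper:

```python
# Minimal sketch: a plain ReLU MLP fitting a nontrivial 1-D target,
# the kind of task where ordinary building blocks already approximate
# well without any polynomial layers.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "manifold": y = sin(3x) + 0.5*cos(7x) on [-1, 1]
x = torch.linspace(-1, 1, 512).unsqueeze(1)
y = torch.sin(3 * x) + 0.5 * torch.cos(7 * x)

# A simple but reasonably wide MLP with standard ReLU blocks
mlp = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(mlp(x), y)
    loss.backward()
    opt.step()

print(f"final MSE: {loss.item():.6f}")  # typically very small
```

With enough capacity and training, a network like this drives the training error close to zero; the practical worry is the opposite failure, fitting noise rather than failing to fit the signal.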
