Submitted by MLNoober t3_xuogm3 in MachineLearning
bushrod t1_iqxklya wrote
Reply to comment by jms4607 in [D] Why restrict to using a linear function to represent neurons? by MLNoober
What's the benefit of neural nets being able to approximate analytic functions perfectly on (-inf, inf)? By the universal approximation theorem, standard neural nets can already approximate any continuous function to arbitrary accuracy on a bounded range, and training data will always be bounded. If you want to deal with unbounded ranges, there are symbolic regression methods designed for exactly that.
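To make that concrete, here's a minimal sketch (assuming PyTorch; the target function, architecture, and hyperparameters are just illustrative) of a standard MLP fitting y = x^2 tightly inside its bounded training range and falling apart outside it:

```python
# Minimal sketch: fit y = x^2 on the bounded range [-2, 2], then query
# points outside that range to see extrapolation fail.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Training data bounded to [-2, 2]
x_train = torch.linspace(-2, 2, 256).unsqueeze(1)
y_train = x_train ** 2

# Standard MLP: linear layers + pointwise tanh nonlinearities
model = nn.Sequential(
    nn.Linear(1, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(5000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x_train), y_train)
    loss.backward()
    opt.step()

# Inside the training range the fit is tight; outside it degrades fast,
# because the tanh units saturate and the prediction goes flat while
# the true x^2 keeps growing.
with torch.no_grad():
    for x in [0.5, 1.5, 3.0, 5.0]:
        xt = torch.tensor([[x]])
        print(f"x={x:4.1f}  true={x**2:6.2f}  pred={model(xt).item():6.2f}")
```

Inside [-2, 2] the predictions track x^2 closely; by x = 5 the net's output has flattened out, which is exactly the bounded-range behavior described above.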
jms4607 t1_iqxuph2 wrote
Generalization out of distribution might be the biggest thing holding back ML right now. It's worth thinking about whether the priors we encode in NNs now are to blame. A large MLP is required just to approximate a single biological neuron. Maybe the unit we're using now, a weighted sum followed by a pointwise nonlinearity, is too simple. I'm sure there is a sweet spot between complex interactions/few neurons and simple interactions/many neurons.
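As a hedged illustration of what a richer unit might look like (the QuadraticNeuron below is just an example design, not an established layer or anything from the thread), here's a single unit that adds pairwise multiplicative interactions x^T W x on top of the usual weighted sum:

```python
# Sketch of a "quadratic neuron": the usual w.x + b plus a pairwise
# interaction term sum_ij W_ij * x_i * x_j, before the nonlinearity.
import torch
import torch.nn as nn

class QuadraticNeuron(nn.Module):
    def __init__(self, in_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, 1)  # standard weighted sum + bias
        # Pairwise interaction weights, initialized to zero so the unit
        # starts out identical to a plain linear neuron.
        self.W = nn.Parameter(torch.zeros(in_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Quadratic term per sample: x^T W x
        quad = torch.einsum('bi,ij,bj->b', x, self.W, x).unsqueeze(1)
        return torch.tanh(self.linear(x) + quad)

x = torch.randn(8, 4)
print(QuadraticNeuron(4)(x).shape)  # torch.Size([8, 1])
```

The trade-off shows up directly in the parameter count: the quadratic term costs O(d^2) weights per unit versus O(d) for a plain linear neuron, so you'd use fewer of these units, i.e. the complex-interactions/few-neurons end of that sweet spot.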