
profmori4rty t1_iu3xnb7 wrote

I can try the ELI5 (maybe more ELI18) explanation: curve fitting is nothing but an optimization problem. You try to minimize the sum of squared residuals, i.e. the squared vertical distances between the fitted curve and the training data points.
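In symbols: given data points (x_i, y_i) and a model f(x; θ) with parameters θ, least squares solves

```latex
\min_{\theta} \; \sum_{i=1}^{n} \bigl( y_i - f(x_i;\, \theta) \bigr)^2
```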

In the simplest case, your model is linear in the parameters (polynomials, for example). Most people know the regression line, that is, a line that fits the data points as well as possible. The parameters of such a straight line (and of higher-order polynomials) can be estimated in a single closed-form step using the ordinary least squares (OLS) estimator.
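As a minimal sketch of that one-step fit (made-up data, quadratic model), the whole estimation is a single linear-algebra call:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 0.5 + 2.0 * x - 1.5 * x**2 + 0.1 * rng.normal(size=x.size)

# Design matrix for a quadratic: columns 1, x, x^2.
# The model is linear in the parameters even though it's a curve in x.
X = np.column_stack([np.ones_like(x), x, x**2])

# OLS closed-form solution beta = (X^T X)^(-1) X^T y,
# computed stably via a least-squares solve
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # should be close to (0.5, 2.0, -1.5)
```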

But in this case the model is non-linear in the parameters (look at the model function in the first plot): it is composed of transcendental functions (sine and exp). Here the optimum cannot be found analytically in a single step, so we need an iterative, gradient-based approach, and there are several methods that do exactly that.
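To make "iterative" concrete, here's a toy sketch using plain gradient descent on a made-up model f(x; a, b) = a·sin(b·x); the step size and iteration count are hand-tuned for this example, and real solvers pick both adaptively:

```python
import numpy as np

# Hypothetical non-linear model: f(x; a, b) = a * sin(b * x)
def model(x, a, b):
    return a * np.sin(b * x)

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50)
y = model(x, 2.0, 1.5) + 0.1 * rng.normal(size=x.size)

a, b = 1.5, 1.3        # starting guess (needs to be reasonable)
lr = 0.05              # hand-picked step size for this toy problem
for _ in range(2000):
    r = y - model(x, a, b)                          # residuals
    # Gradients of the mean squared residual w.r.t. a and b
    grad_a = -2.0 * np.mean(r * np.sin(b * x))
    grad_b = -2.0 * np.mean(r * a * x * np.cos(b * x))
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # should end up near (2.0, 1.5)
```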

A classic and simple algorithm for this is the Gauss-Newton algorithm, but here a more robust variant, the Levenberg-Marquardt algorithm, was used. The difference is that Levenberg-Marquardt adds a damping term to the Gauss-Newton update, effectively interpolating between Gauss-Newton and gradient descent when determining the search direction, which makes it far less sensitive to bad starting values.
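In practice you rarely implement this yourself; for example, SciPy's curve_fit can run Levenberg-Marquardt. A minimal sketch with a made-up sine/exp model (the exact function from the plot isn't reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical stand-in for the plotted model: a * sin(b*x) * exp(c*x)
def model(x, a, b, c):
    return a * np.sin(b * x) * np.exp(c * x)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 100)
y = model(x, 1.5, 2.0, -0.3) + 0.05 * rng.normal(size=x.size)

# method='lm' selects Levenberg-Marquardt (also SciPy's default for
# unbounded problems); p0 is the starting guess for the iteration
popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.8, -0.2], method='lm')
print(popt)  # estimated (a, b, c), close to the true (1.5, 2.0, -0.3)
```

Note that curve_fit still needs a reasonable starting guess p0: like all of these iterative methods, Levenberg-Marquardt only finds a local optimum.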
