Submitted by TobusFire t3_11fil25 in MachineLearning
currentscurrents t1_jajpjj7 wrote
It's not dead, but gradient-based optimization is more popular right now because it works so well for neural networks.
But you can't always use gradient descent. Backprop needs access to the inner workings of the function and requires it to be smoothly differentiable. Even when you can use it, it may not find a good solution if your loss landscape has many bad local minima.
Evolutionary algorithms are widely used in combinatorial optimization, where you're searching over discrete structures, for example finding the best ordering of a fixed set of elements, as in the traveling salesman problem.
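To make that concrete, here's a minimal sketch of a genetic algorithm on a toy ordering problem (a small random TSP instance). The instance, the order crossover, and the swap mutation are illustrative choices on my part, not anything specified in the thread:

```python
import random

# Toy combinatorial problem: order a fixed set of cities to minimize tour length.
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(order):
    return sum(
        ((cities[a][0] - cities[b][0]) ** 2 + (cities[a][1] - cities[b][1]) ** 2) ** 0.5
        for a, b in zip(order, order[1:] + order[:1])
    )

def crossover(p1, p2):
    # Order crossover (OX): copy a slice from parent 1, fill the rest in parent 2's order
    i, j = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[i:j] = p1[i:j]
    rest = [c for c in p2 if c not in child]
    return [rest.pop(0) if c is None else c for c in child]

def mutate(order, rate=0.2):
    # Swap mutation keeps the individual a valid permutation
    if random.random() < rate:
        a, b = random.sample(range(len(order)), 2)
        order[a], order[b] = order[b], order[a]
    return order

pop = [random.sample(range(len(cities)), len(cities)) for _ in range(50)]
for _ in range(200):
    pop.sort(key=tour_length)
    parents = pop[:20]  # truncation selection: keep the 20 shortest tours
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(30)]
print(tour_length(min(pop, key=tour_length)))
```

No gradients anywhere: the loop only needs to evaluate tour lengths, which is exactly why evolution fits problems where the objective isn't differentiable.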
Hostilis_ t1_jak681p wrote
>But you can't always use gradient descent. Backprop requires access to the inner workings of the function
Backprop and gradient descent are not the same thing. When you don't have access to the inner workings of the function, you can still use stochastic approximation methods to get gradient estimates, e.g. SPSA (simultaneous perturbation stochastic approximation). In fact, there are close ties between genetic algorithms and stochastic gradient estimation.
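For anyone curious what that looks like in practice, here's a rough SPSA sketch: it estimates a gradient from two evaluations of a black-box function by perturbing all parameters at once. The quadratic objective, step sizes, and iteration count are just placeholder choices for the example:

```python
import numpy as np

def spsa_gradient(f, theta, c=1e-2, rng=None):
    """Estimate the gradient of a black-box scalar function f at theta
    from a single simultaneous random perturbation (SPSA)."""
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher (+/-1) perturbation applied to every coordinate at once
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    # Two function evaluations, regardless of the number of parameters
    diff = f(theta + c * delta) - f(theta - c * delta)
    return diff / (2 * c * delta)

# Descend a simple quadratic using only function evaluations, no autodiff
f = lambda x: np.sum((x - 3.0) ** 2)
theta = np.zeros(5)
for _ in range(500):
    theta -= 0.1 * spsa_gradient(f, theta)
print(theta)  # approaches [3, 3, 3, 3, 3]
```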
SpookyTardigrade t1_jaml6d9 wrote
Can you give a few examples of how genetic algorithms and stochastic gradient estimation are related?
Hostilis_ t1_jap97r5 wrote
Try this article: https://www.nature.com/articles/s41467-021-26568-2