
PassionatePossum t1_jalnb1q wrote

I think my professor summarized it very well: "Genetic algorithms are what you do when everything else fails."

What he meant by that is that they are very inefficient optimizers. You need to evaluate lots and lots of configurations because you are stepping around more or less blindly in the parameter space, relying only on luck and a few heuristics to improve your fitness. But their advantage is that they will always work as long as you can define some sort of fitness function.
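A toy sketch of what that loop looks like (the fitness function, population size, and mutation scheme here are all made up purely for illustration):

```python
import random

def fitness(x):
    # Toy fitness function: the only thing the GA ever asks of the problem.
    return -(x - 3.0) ** 2

def evolve(pop_size=50, generations=100, mutation_scale=0.5):
    # Random initial population; no gradient information is used anywhere.
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every candidate configuration -- this is where the cost piles up.
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]
        # Children are mutated copies of survivors: luck plus a simple heuristic.
        children = [s + random.gauss(0.0, mutation_scale) for s in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges towards 3.0
```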

If you can get a gradient, you are immediately more efficient because you already know in which direction you need to step to get a better solution.
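On the same toy problem, a gradient-based sketch (again, nothing here is canonical) needs far fewer evaluations because every step already points towards a better solution:

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    # Each step moves against the gradient -- the direction of improvement is known.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimizing f(x) = (x - 3)^2, so f'(x) = 2 * (x - 3).
print(gradient_descent(lambda x: 2.0 * (x - 3.0), x0=-10.0))  # converges to ~3.0
```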

But of course there is room for all algorithms. Even when you can do gradient descent, there are problems where it quickly gets stuck in a local optimum. There are approaches for "restarting" the algorithm to find a better local optimum. I'm not that familiar with that kind of optimization, but it is not inconceivable that genetic algorithms have a role to play in such a scenario.
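One common flavor of "restart" is simply rerunning gradient descent from several random starting points and keeping the best result; a sketch under that assumption:

```python
import math
import random

def restarted_descent(f, grad, n_restarts=10, lr=0.1, steps=200):
    # Run gradient descent from several random starting points and keep the
    # best result -- a crude but common way to escape poor local optima.
    best = None
    for _ in range(n_restarts):
        x = random.uniform(-10.0, 10.0)
        for _ in range(steps):
            x -= lr * grad(x)
        if best is None or f(x) < f(best):
            best = x
    return best

# Multi-modal toy objective: many local minima, global minimum near x ~ -0.5.
f = lambda x: math.sin(3.0 * x) + 0.1 * x * x
grad = lambda x: 3.0 * math.cos(3.0 * x) + 0.2 * x
# Keeps the best of the restarts; more restarts raise the chance of hitting the global basin.
print(restarted_descent(f, grad, n_restarts=25))
```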

24

sea-shunned t1_jaqosa2 wrote

In my experience, if you are "stepping around more or less blindly", then the problem and the EA have not been properly formulated. In general, of course, if a gradient is available, gradient descent will do a better job >99% of the time.

Though with a bit of domain knowledge and/or careful design of the objective function(s), variation operators, etc., an EA can be a pretty efficient explorer of the space. Its applicability is niche, but when done properly it's far from blind.
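As a (hypothetical) illustration of that point: for a tour/permutation problem, a mutation that reverses a random segment keeps most of the parent's structure intact, which is already far less blind than shuffling the ordering at random:

```python
import random

def segment_reversal_mutation(tour):
    # Domain-aware variation operator for permutation problems (e.g. routing):
    # reversing a random segment preserves most adjacencies of the parent tour,
    # so offspring stay close to good solutions instead of being random shuffles.
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

print(segment_reversal_mutation(list(range(10))))
```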

1