Submitted by TobusFire t3_11fil25 in MachineLearning
BigBayesian t1_jam6z7u wrote
Genetic algorithms are good, as you said, when you really understand the space and can come up with a really good candidate-generation scheme. They're okay-ish (that is, about as good as everything else) when you have no understanding of the space at all and you're just guessing. Since they only see fitness values, they can't latch onto a curve in design space the way methods that follow a gradient can. So maybe they're best used for really complex spaces where gradient-based methods don't do well: the kind of places you'd reach for Gibbs sampling or general-purpose optimization algorithms.
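For concreteness, here's a minimal sketch of that regime: a toy GA on a deliberately non-smooth, discrete objective where a gradient gives you nothing. The objective, operators, and all the parameters are illustrative assumptions, not anything from the thread:

```python
import random

# Illustrative, deliberately non-smooth objective: a score over bit-strings,
# so there is no gradient signal for a gradient-based optimizer to follow.
def fitness(bits):
    # Reward runs of matching neighbours in the bit-string (a toy "design space").
    return sum(1 for a, b in zip(bits, bits[1:]) if a == b)

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(p1, p2):
    # Single-point crossover between two parent bit-strings.
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def evolve(n_bits=50, pop_size=100, generations=200):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness and keep the top half as parents (truncation selection).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated offspring of random parent pairs.
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), "".join(map(str, best)))
```

Note where all the leverage sits: in `fitness`, `mutate`, and `crossover`. If you can't encode real domain knowledge into those candidate-generation operators, this degenerates into expensive random search, which is exactly the point above.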
So, basically, like many methods that have fallen out of vogue in the age of letting algorithms and data do your feature engineering for you, they're useful when you already have good feature engineering in hand. And they're as good a shot in the dark as any when standard methods fail and you've got no clue how to proceed.
So, yeah, the number of times genetic algorithms are the “right” choice is pretty limited these days.