Submitted by TobusFire t3_11fil25 in MachineLearning
FinancialElephant t1_jaliqsh wrote
Genetic optimization might be dead in most cases, but I think a lot of the ideas from GP, aside from the optimization algorithm itself, are still relevant.
I've found GP techniques can yield parsimonious models. A lot of the big research these days is on big models, but GP seems well suited to small, elegant ones: low-data regimes, specialized problems, and problems where you have expert knowledge you can encode. Generally speaking, I like working with GP because you end up with a parsimonious, interpretable model (the opposite of a lot of NN research).
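To make "parsimonious" concrete, here is roughly the kind of model GP hands you, sketched with a made-up nested-tuple representation (none of this is from a real library). The entire model is one small expression tree you can read off directly:

```python
import math

# A GP "model" is just a small expression tree: leaves are inputs or
# constants, internal nodes are primitive functions.
# This tree encodes f(x) = x * (x + 1) and nothing else.
PRIMITIVES = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "sin": lambda a: math.sin(a),
}

tree = ("mul", ("x",), ("add", ("x",), ("const", 1.0)))

def evaluate(node, x):
    """Recursively evaluate an expression tree at input x."""
    tag = node[0]
    if tag == "x":
        return x
    if tag == "const":
        return node[1]
    return PRIMITIVES[tag](*(evaluate(child, x) for child in node[1:]))

print(evaluate(tree, 3.0))  # 12.0, i.e. 3 * (3 + 1)
```

Interpreting the model is just reading the tree, which is what I mean by interpretable.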
In practice, importance sampling methods worked about as well as genetic optimization for optimizing GP trees/grammars in the small amount of work I did with them. Neither method edged out the other by much, but that could depend on the problem.
I don't know if this is still considered GP (or GA) without a genetic optimization method. Still, I think it's fair to say that the notion of optimizing a symbolic tree or grammar was heavily developed within GP, even if in practice today you might use some Monte Carlo optimization method instead.
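To sketch that comparison concretely: below is the same toy tree space searched two ways, a mutation-only genetic loop and a cross-entropy-method-style loop (one Monte Carlo, importance-sampling-flavored alternative) that samples trees from a parameterized distribution and refits it to the elite samples. The target function, grammar, and every parameter are invented for illustration; neither loop is a serious implementation.

```python
import random

# Shared toy setup: recover f(x) = x*x + x from sampled points.
# The grammar, target, and every parameter here are made up.
XS = [i / 4 for i in range(-8, 9)]
TARGET = [x * x + x for x in XS]
OPS = ["add", "mul"]                      # binary primitives
LEAVES = [("x",), ("const", 1.0)]         # terminal nodes

def sample_tree(p_op, depth=0, max_depth=4):
    """Sample a tree; p_op is the chance a node expands into an operator."""
    if depth < max_depth and random.random() < p_op:
        return (random.choice(OPS),
                sample_tree(p_op, depth + 1, max_depth),
                sample_tree(p_op, depth + 1, max_depth))
    return random.choice(LEAVES)

def evaluate(node, x):
    tag = node[0]
    if tag == "x":
        return x
    if tag == "const":
        return node[1]
    if tag == "add":
        return evaluate(node[1], x) + evaluate(node[2], x)
    return evaluate(node[1], x) * evaluate(node[2], x)   # "mul"

def loss(tree):
    return sum((evaluate(tree, x) - t) ** 2 for x, t in zip(XS, TARGET))

# --- Option A: genetic search (mutation-only, truncation selection) ---
def mutate(tree, p=0.2):
    """Replace a node with a fresh random subtree with probability p."""
    if random.random() < p:
        return sample_tree(0.5, depth=2)  # fresh, depth-limited subtree
    if tree[0] in ("x", "const"):
        return tree
    return (tree[0],) + tuple(mutate(child, p) for child in tree[1:])

def genetic_search(generations=50, pop_size=200, keep=50):
    pop = [sample_tree(0.5) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[:keep]
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - keep)]
    return min(pop, key=loss)

# --- Option B: cross-entropy-style Monte Carlo search over trees ---
def count(tree, leaf):
    """Count leaf nodes (leaf=True) or operator nodes (leaf=False)."""
    if tree[0] in ("x", "const"):
        return 1 if leaf else 0
    return (0 if leaf else 1) + sum(count(c, leaf) for c in tree[1:])

def cem_search(generations=50, batch=200, elite=20):
    p_op, best = 0.5, sample_tree(0.5)
    for _ in range(generations):
        trees = sorted((sample_tree(p_op) for _ in range(batch)), key=loss)
        elites = trees[:elite]
        best = min([best] + elites, key=loss)
        # Refit the expansion probability to the elites' node statistics.
        ops = sum(count(t, leaf=False) for t in elites)
        lvs = sum(count(t, leaf=True) for t in elites)
        p_op = min(0.9, max(0.1, 0.9 * p_op + 0.1 * ops / (ops + lvs)))
    return best

for name, search in [("genetic", genetic_search), ("cem", cem_search)]:
    best = search()
    print(name, best, round(loss(best), 4))
```

On a toy like this both tend to recover x*x + x; whether one edges out the other on real problems seems, as I said, to depend on the problem.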