dluther93 t1_itbglnd wrote
Reply to comment by Bonsanto in [D][R] Staking XGBOOST and CNN/Transformer by MichelMED10
Nothing I’m able to share publicly, unfortunately. Just build a CNN, then concatenate its outputs into your original dataset :)
dluther93 t1_it982ce wrote
I've done this before for multi-modal classification tasks.
Train the CNN end-to-end, then take the output of the penultimate layer as a dense embedding vector.
Use that dense vector as a feature set alongside my tabular data in an XGBoost or CatBoost model. Boom.
Easy to do on a local machine; cumbersome to deploy reliably, though.
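A minimal sketch of what that pipeline can look like, assuming PyTorch for the CNN and the xgboost Python package for the booster. The network, data shapes, and hyperparameters here are placeholders for illustration, not the commenter's actual setup: train a small CNN end-to-end, pull the penultimate-layer activations as embeddings, concatenate them with the tabular columns, and fit XGBoost on the combined matrix.

```python
import numpy as np
import torch
import torch.nn as nn
import xgboost as xgb

# Dummy stand-ins for the real data: 32x32 single-channel images plus 10
# tabular columns per sample, binary labels. Shapes are illustrative only.
n_samples = 256
images = torch.randn(n_samples, 1, 32, 32)
tabular = np.random.rand(n_samples, 10)
labels = np.random.randint(0, 2, size=n_samples)

class SmallCNN(nn.Module):
    """Toy CNN whose penultimate layer provides the dense embedding."""
    def __init__(self, embed_dim=64, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, embed_dim), nn.ReLU(),  # penultimate layer
        )
        self.classifier = nn.Linear(embed_dim, n_classes)  # head used only for training

    def forward(self, x):
        return self.classifier(self.features(x))

    def embed(self, x):
        # Activations from the layer before the final classification head.
        return self.features(x)

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
y = torch.as_tensor(labels, dtype=torch.long)

# 1) Train the CNN end-to-end on the image modality (just a couple of epochs here).
model.train()
for _ in range(2):
    optimizer.zero_grad()
    loss = criterion(model(images), y)
    loss.backward()
    optimizer.step()

# 2) Extract the dense embeddings and concatenate them with the tabular features.
model.eval()
with torch.no_grad():
    embeddings = model.embed(images).numpy()
combined = np.hstack([tabular, embeddings])

# 3) Fit XGBoost on the combined feature set (tabular-only is the baseline to beat).
booster = xgb.XGBClassifier(n_estimators=100, max_depth=4)
booster.fit(combined, labels)
print(booster.predict(combined[:5]))
```

In practice you would hold out a validation split, and the deployment headache the comment mentions comes from having to ship and version two models (the CNN embedder and the booster) whose feature interface has to stay in sync.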
dluther93 t1_itc1ldm wrote
Reply to comment by abstract000 in [D][R] Staking XGBOOST and CNN/Transformer by MichelMED10
It was significant for us. Our baseline is the XGBoost model with tabular data only. We were looking for ways to augment our tabular performance, not to improve imaging performance; it was a method of feature engineering for the problem.