
Mmm36sa t1_is20xv6 wrote

I have a dataset of ~13k entries, 1,025 features, 28 classes, already cleaned. I did feature selection, then scaling, then fitted an MLPClassifier, and with some hyperparameter tuning got a 75% score.

I’m looking for ideas to improve my results. MLPClassifier got the highest score compared to random forest, HistGradientBoosting, or SVM on a stratified sample. Oh, and I can’t use TensorFlow on my hardware.
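For reference, the setup described above (scaling, MLPClassifier, hyperparameter tuning) can be sketched with scikit-learn roughly like this. This is a minimal illustration on synthetic data standing in for the 13k×1025 dataset; the grid values and layer sizes are placeholder assumptions, not the poster's actual settings.

```python
# Hedged sketch of the described pipeline: scale -> MLPClassifier -> grid search.
# Synthetic data is a stand-in for the real ~13k x 1025, 28-class dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=50, n_informative=30,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

pipe = make_pipeline(StandardScaler(),
                     MLPClassifier(max_iter=300, random_state=0))
# Example grid only -- the actual hyperparameters tuned were not given.
grid = {"mlpclassifier__hidden_layer_sizes": [(64,), (128,)],
        "mlpclassifier__alpha": [1e-4, 1e-3]}
search = GridSearchCV(pipe, grid, cv=3, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
print(search.score(X_test, y_test))
```

Scaling inside the pipeline (rather than before the split) keeps the cross-validation folds free of leakage from the held-out data.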


itsyourboiirow t1_iskrzq9 wrote

You could try PCA followed by a random forest or k-nearest neighbors.
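That suggestion can be sketched as a scikit-learn pipeline: scale, reduce with PCA, then fit either classifier. This is a minimal sketch on synthetic data; the 95% explained-variance cutoff and `n_neighbors=5` are illustrative assumptions, not tuned values.

```python
# Sketch of the suggested approach: PCA, then random forest or k-NN.
# Synthetic data stands in for the real high-dimensional dataset.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=100, n_informative=40,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

for clf in (RandomForestClassifier(random_state=0),
            KNeighborsClassifier(n_neighbors=5)):
    # Scale first so PCA isn't dominated by large-variance features;
    # n_components=0.95 keeps components explaining 95% of the variance.
    model = make_pipeline(StandardScaler(), PCA(n_components=0.95), clf)
    model.fit(X_train, y_train)
    print(type(clf).__name__, model.score(X_test, y_test))
```

PCA tends to help k-NN in particular, since distance metrics degrade in very high-dimensional spaces.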
