Submitted by Steve_Sizzou t3_zasxg5 in MachineLearning
dancingnightly t1_iyolsht wrote
For a neural network, no: you want to train it on the raw data (or something near-raw, like an FFT of it), as other answers mention.
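As a rough illustration of what "near-raw like FFT" could mean for EEG data, here's a minimal sketch; the array shapes and trial counts are made up for the example:

```python
import numpy as np

# Hypothetical EEG batch: (n_trials, n_electrodes, n_timesteps)
eeg = np.random.randn(32, 128, 512)

# Per-electrode magnitude spectrum via the real FFT. This is still a
# dense, near-raw representation the network can learn from, rather
# than a hand-picked summary statistic like the mean.
spectra = np.abs(np.fft.rfft(eeg, axis=-1))  # shape (32, 128, 257)
```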
You could create a simple baseline logistic regression model to check this. When you think about it, that model already learns a weighted sum over every feature (128 electrodes * t timesteps) for binary classification, so it can effectively recover the mean on its own. Even in this simple case, providing the mean as an extra feature isn't useful.
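A quick sketch of that baseline check (data is random here just to show the shape of the experiment, so don't expect a meaningful score):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical flattened EEG features: 128 electrodes * t timesteps.
X = np.random.randn(200, 128 * 4)
y = np.random.randint(0, 2, size=200)

# Baseline on the raw features alone.
base = cross_val_score(LogisticRegression(max_iter=1000), X, y).mean()

# Appending each trial's mean adds a column that is just a linear
# combination of the existing columns, so the model gains nothing new.
X_aug = np.hstack([X, X.mean(axis=1, keepdims=True)])
aug = cross_val_score(LogisticRegression(max_iter=1000), X_aug, y).mean()

print(base, aug)  # expect no real improvement from the extra column
```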
What would benefit from feature engineering?
If you have Excel-table-style (tabular) data, or only a small amount of data.
A classification decision tree is more likely to benefit, but this still usually only works if you can do some preprocessing with distances in embedding or other mathematical spaces, or augment the data format (e.g. appending the presence of POS tags for text data, which is useful for logistic regression too).
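For the POS-tag idea, a minimal sketch using NLTK (the tag set chosen here is arbitrary, and it assumes the punkt and averaged_perceptron_tagger resources are downloaded):

```python
import nltk  # assumes nltk.download("punkt") and the POS tagger are set up
from collections import Counter

def pos_counts(text, tags=("NN", "VB", "JJ", "RB")):
    """Count coarse POS tags in a text, to append as numeric features."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    counts = Counter(tag[:2] for _, tag in tagged)  # NNS -> NN, VBZ -> VB
    return [counts.get(t, 0) for t in tags]

print(pos_counts("The quick brown fox jumps over the lazy dog"))
```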
A decision tree (usually) can't easily implement things like totalling individual features, so precomputed totals can occasionally be useful when you have little data (although in theory an ensemble of trees, which is the default nowadays, can approximate this, and would if it were useful). Another example would be precalculating the profit margin from the costs, or net/gross profit, for a company-prediction dataset.
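For the company example, the engineered features are just arithmetic on existing columns; a sketch with made-up column names:

```python
import pandas as pd

# Hypothetical company table; columns are invented for illustration.
df = pd.DataFrame({
    "revenue": [120.0, 80.0, 200.0],
    "costs":   [90.0, 85.0, 150.0],
})

# Ratios like profit margin involve division, which a tree can only
# approximate with many axis-aligned splits; precomputing them hands
# the model the signal in a single column.
df["gross_profit"] = df["revenue"] - df["costs"]
df["profit_margin"] = df["gross_profit"] / df["revenue"]
```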