Submitted by cautioushedonist t3_yto34q in MachineLearning
YamEnvironmental4720 t1_iwge3d2 wrote
A couple of years ago, I was interested in the classification problem for stock price movements. The goal was to predict whether a stock would yield positive returns over the following 25-30 days, using daily data of the kind provided by Yahoo Finance. I did some feature engineering to derive classical indicators, their moving averages over different time periods, and certain normalizations of them so that all features ranged between 0 and 1. I experimented with various thresholds x and found that I got better predictive power by labelling vectors 1 if the stock return was at least x %, for some x close to 1, than by simply choosing x = 0, i.e. looking only at the direction of the price movement.

One drawback, however, was that there was no clear correlation between the profits and the accuracy of the model: a false positive of, say, x/2 % obviously hurt the accuracy while at the same time contributing positively to the profit. Moreover, defining a recommendation not as a predicted probability of at least 0.5, but rather as something between 0.6 and 0.7 (depending on, for instance, the stock index), significantly reduced the number of false positives with negative price movements.
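A minimal sketch of the labelling and recommendation-cutoff idea described above; the column names, the 25-day horizon, x = 1.0 % and the 0.65 cutoff are placeholders I picked for illustration, not values from the original comment:

```python
import numpy as np
import pandas as pd

def label_forward_return(close: pd.Series, horizon: int = 25, x_pct: float = 1.0) -> pd.Series:
    """Label 1 if the return over the next `horizon` trading days is at least x_pct %, else 0."""
    fwd_return = close.shift(-horizon) / close - 1.0
    # Trailing rows have no forward return yet; they come out as label 0 here
    # and should be dropped before training.
    return (fwd_return >= x_pct / 100.0).astype(int)

def recommend(prob_positive: np.ndarray, cutoff: float = 0.65) -> np.ndarray:
    """Issue a buy recommendation only when the predicted probability exceeds a
    cutoff above 0.5, which reduces false positives with negative price movements."""
    return (prob_positive >= cutoff).astype(int)
```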
I would still be interested in finding suitable metrics, other than accuracy, for measuring the performance of such a classification algorithm.
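One possibility, suggested by the mismatch between accuracy and profit noted above (my own sketch, not something from the comment): score only the trades the model actually recommends, reporting precision alongside the mean realized forward return, which can stay positive even when small false positives drag accuracy down.

```python
import numpy as np

def recommendation_metrics(recommended: np.ndarray,
                           true_label: np.ndarray,
                           realized_return: np.ndarray) -> dict:
    """Evaluate only the recommended trades instead of all predictions."""
    mask = recommended == 1
    if mask.sum() == 0:
        return {"n_trades": 0, "precision": float("nan"), "mean_return": float("nan")}
    return {
        "n_trades": int(mask.sum()),
        # fraction of recommendations whose forward return met the x % target
        "precision": float(true_label[mask].mean()),
        # average realized return of the recommended trades
        "mean_return": float(realized_return[mask].mean()),
    }
```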