Submitted by augusts99 t3_11clfxx in deeplearning
Hey all! I'm new to deep learning and am asking this to see if anyone has experience with or suggestions for this.
I'm working with two types of models: model 1 makes fairly good predictions but its output fluctuates a lot over time, while model 2 (an LSTM, a type of RNN) is better at capturing trends.
Through some experimenting I found that combining the two models can produce decent results, as it sort of combines both perks: model 1 gets the order of magnitude of the values about right, and model 2 gets the trend about right. A standalone LSTM does not perform that well, by the way.
One of the inputs to model 2 (the LSTM) is thus the predicted sequence made by model 1. This means that during training I feed in a sequence that is already essentially the output... My reasoning was that if I do this, the model will just learn not to adjust this sequence at all.
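To make it concrete, this is roughly the setup I mean (a minimal PyTorch sketch, not my actual code; the class, shapes, and names are made up for illustration):

```python
import torch
import torch.nn as nn

class HybridLSTM(nn.Module):
    def __init__(self, n_features, hidden=64):
        super().__init__()
        # n_features counts model 1's predicted sequence as one input channel
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one target value per time step

    def forward(self, x):
        # x: (batch, timesteps, n_features)
        out, _ = self.lstm(x)              # out: (batch, timesteps, hidden)
        return self.head(out).squeeze(-1)  # (batch, timesteps)

batch, timesteps, k = 8, 24, 3
model1_preds = torch.randn(batch, timesteps)   # stand-in for model 1's output
other_vars = torch.randn(batch, timesteps, k)  # the other input sequences
x = torch.cat([model1_preds.unsqueeze(-1), other_vars], dim=-1)
y_hat = HybridLSTM(n_features=1 + k)(x)        # shape (8, 24)
```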
To avoid that, I continued with only using the mean of this sequence, plus some synthetic data, reasoning that the model should then also learn to adjust the value of the input when necessary.
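Roughly what I mean by the mean + synthetic data part looks like this (again just a sketch with made-up names; the idea is to hide the exact shape of model 1's sequence and jitter its level so the LSTM can't simply copy it through):

```python
import torch

def mean_plus_jitter(x, noise_std=0.1, shift_std=0.2):
    # x: (batch, timesteps, n_features); channel 0 holds model 1's prediction.
    x = x.clone()
    # 1) collapse model 1's sequence to its per-sample mean: keep the level,
    #    drop the exact shape, so the LSTM can't just pass it through
    x[:, :, 0] = x[:, :, 0].mean(dim=1, keepdim=True)
    # 2) synthetic variation: a random level shift per sample plus per-step
    #    noise, so the LSTM has to learn when and how to correct the input
    x[:, :, 0] += torch.randn(x.size(0), 1) * shift_std
    x[:, :, 0] += torch.randn(x.size(0), x.size(1)) * noise_std
    return x
```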
I kinda arrived at this just by experimenting, and I feel like it lacks some proper theory.
About training a deep learning model on an input sequence or value that is already (close to) the correct output: does anyone know what the theory behind this is? What is common practice?
Thanks in advance! I of course can elaborate if needed.
augusts99 OP t1_ja3n6aa wrote
Perhaps I should clarify that the predicted sequence made by model 1 is not the only input sequence to the LSTM model. I also feed in other variable sequences, which I hope the LSTM uses to learn the correct trends.
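For context, assembling the inputs looks something like this (numpy sketch; the variable names are placeholders, not my real features):

```python
import numpy as np

n_samples, timesteps = 32, 24
rng = np.random.default_rng(0)
model1_preds = rng.normal(size=(n_samples, timesteps))  # model 1's predicted sequences
var_a = rng.normal(size=(n_samples, timesteps))         # e.g. another measured variable
var_b = rng.normal(size=(n_samples, timesteps))         # e.g. yet another one

# channel 0: mean of model 1's prediction, repeated over time;
# channels 1..n: the other variable sequences the LSTM should use for trends
model1_channel = np.repeat(model1_preds.mean(axis=1, keepdims=True), timesteps, axis=1)
x = np.stack([model1_channel, var_a, var_b], axis=-1)   # (32, 24, 3) -> LSTM input
```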