
Ensemble of models with different training periods

Noel Member Posts: 82 Maven
edited October 2019 in Help
Hi All- ( @IngoRM, @yyhuang, @varunm1, @hughesfleming68, @tftemme, @mschmitz, @lionelderkrikor )

Say you had a model whose testing performance on time series data changed as you changed the length of the training period. A model trained with one particular training-period length, however, was not consistently the top performer... On the same test sets, sometimes the model with 2 years of training performed best, sometimes the one with 1 year, or 18 months, etc.

I want to train a meta model that will choose which training-period-length model to apply to the test set at any given time.

I looked at using the Stacking operator, but since it sits inside the validation process, training periods of varying lengths won't work there.

Can someone please suggest an alternate methodology (including meta model type)?

Thank you!

-Noel

Answers

  • Telcontar120 RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
    I have perhaps a more philosophical question for you here: do you typically consider the length of the training period as a kind of model hyperparameter, or as a given for your use case?

    When dealing with time series data, I tend to treat it more as a given, based on both the availability of data in my sample as well as my a priori assumptions about the cyclical/periodic nature of the data itself.  So if I think there is annual seasonality, I am going to try to make sure that my training data includes only complete annual cycles, for example.

    I am not entirely sure that it is sensible to treat the length of the training period as a free parameter subject to optimization like other model hyperparameters. By analogy, at one extreme, if there really were no difference between period lengths other than random variation, then this feels a bit like optimizing the random seed used to draw training vs. testing samples (if using split validation) by picking the one with the best performance. That is not likely to produce better results in the long term.

    Instead, it seems preferable to start with an analysis of the series data itself. Why would the length of the modeling period have a significant impact on the resulting model performance? If it does, there is probably some underlying dynamic relating to cyclical patterns that you want to understand and perhaps incorporate into feature engineering as well. If it does not, then you probably don't want to vary it and pick the best one based on random noise.

    So, at the end of the day, I am suggesting it may be better to start with a defined modeling period, based on your understanding of the domain and the problem, and then choose the best performing model and parameters given this window.  Otherwise I fear you may experience unforeseen performance deviations when you go to apply a model in production if you have "optimized" the length of the training window blindly.
    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
  • Noel Member Posts: 82 Maven
    Thanks for your reply, @Telcontar120. I appreciate the time that goes into a thoughtful response like that.

    I understand the point you raise.

    Last week, I had an issue wherein I had trained a model on 5+ years of data (with great training results) and then tested it on the subsequent year, and the precision/accuracy was terrible. I got two helpful responses in the community: one was that overfit models are generally overtrained, and the second was that so-called "signals" in financial time series are short-lived.

    I had no idea what the "right" training period length was, but I experimented with shorter ones. I also decided to decrease the testing period from a year to a month or two weeks. The results improved. I then noticed that at different points in the yearlong testing period (accomplished piecewise) performance would fall off. I went back to those spots and experimented with other training period lengths thinking that "regime changes" occur every so often (at which point either different factors are influencing the attribute of interest or the magnitude of their impact shifts).

    My first response was to see if there was an optimal training period length that I could try to establish. I then thought that, due to the regime changes, the training-period length that is efficacious will vary... and there will be times when there is behavior that cannot be explained by past events/relationships/etc. (I may go there at some point, but for now, I'm tabling the notion of building a regime-change detection model.)

    My poor man's approach is to build a bunch of models with different training period lengths and then train a meta model to decide which of those models to "use" at any given time; I can't wait to see the results. A rough sketch of the idea follows below.
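
    (For concreteness, here is a minimal Python sketch of that approach, assuming scikit-learn; the window lengths, regime features, and helper names are all illustrative assumptions, not an actual RapidMiner process:)

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

    WINDOWS = [180, 365, 540, 730]  # candidate training lengths in days (illustrative)

    def fit_base_models(X, y, dates, as_of):
        """Fit one base model per lookback window, each ending at `as_of`."""
        models = {}
        for w in WINDOWS:
            mask = (dates > as_of - np.timedelta64(w, "D")) & (dates <= as_of)
            models[w] = RandomForestClassifier().fit(X[mask], y[mask])
        return models

    # Meta training data: for each historical evaluation slice, the features
    # summarize the recent regime (e.g. realized volatility, trend strength),
    # and the label is the window whose base model scored best on that slice.
    def fit_meta_model(regime_features, best_window_labels):
        return GradientBoostingClassifier().fit(regime_features, best_window_labels)

    def predict(meta_model, base_models, current_regime, X_new):
        """Ask the meta model which window to trust, then delegate to it."""
        w = meta_model.predict(current_regime.reshape(1, -1))[0]
        return base_models[w].predict(X_new)
    ```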

    Thanks again and have a great weekend.
  • hughesfleming68 Member Posts: 323 Unicorn
    edited October 2019
    Hi @Noel,

    There is another way, and it involves decomposing your time series into its spectral components to see if there is a "dominant" cycle that is persistent. In financial time series these cycles are going to change frequency, phase, and amplitude constantly, but this can help you get a rough estimate of what your training period might be, or at least provide a starting point. Sometimes it is useful to know when these cycles change significantly. Think about it as sine waves compressing and expanding. There are a few ways to do this, and there is a lot of information on the net about cycles in financial time series and how to extract them.

    Only testing will tell you if this step is useful, as it is instrument dependent, and most of the time it will tell you that there really is a monthly, quarterly, and annual cycle. Some are so persistent that they have a name: the four-year "Presidential Cycle". Very short-term cycles are not that useful, as they vanish and change frequency too quickly. Overall, I don't think that this is a bad way to start, and it is certainly better than picking a number randomly.
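
    (If it helps, a quick way to eyeball such a dominant cycle is a periodogram of the detrended series; a minimal Python sketch using scipy, on toy data with a planted ~quarterly cycle:)

    ```python
    import numpy as np
    from scipy.signal import periodogram

    # Toy daily-return series with a planted 63-day cycle plus noise.
    n = 1500
    returns = np.sin(2 * np.pi * np.arange(n) / 63) + np.random.randn(n)

    freqs, power = periodogram(returns, fs=1.0)   # fs=1 -> frequencies in cycles/day
    freqs, power = freqs[1:], power[1:]           # drop the zero-frequency bin
    dominant_period = 1.0 / freqs[np.argmax(power)]
    print(f"Dominant cycle ~ {dominant_period:.0f} days")
    ```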

    Once you know your cycle periods, you can isolate them with bandpass filters and, as @Telcontar120 mentioned, these can become features. Your training period would need to be long enough to capture the cyclical structure of your time series.
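
    (A rough sketch of that filtering step with a Butterworth bandpass in scipy, reusing the `returns` array from the sketch above; the band edges are illustrative assumptions:)

    ```python
    from scipy.signal import butter, filtfilt

    def bandpass_feature(series, low_period, high_period, fs=1.0, order=3):
        """Keep only components with periods between low_period and high_period
        (in samples); the filtered series can be used as a model feature."""
        low_f = 1.0 / high_period    # longest period -> lowest frequency
        high_f = 1.0 / low_period    # shortest period -> highest frequency
        b, a = butter(order, [low_f, high_f], btype="band", fs=fs)
        return filtfilt(b, a, series)  # zero-phase filtering, so no lag is added

    # e.g. isolate the assumed ~quarterly cycle found above (50-80 day band)
    quarterly_component = bandpass_feature(returns, low_period=50, high_period=80)
    ```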



    regards,

    Alex


  • Noel Member Posts: 82 Maven
    Thanks again, Alex. I'll give this some thought. I appreciate your reply.
  • varunm1 Member Posts: 1,207 Unicorn
    edited October 2019
    Hello @Noel

    I am not sure if this fits your requirement, but why don't you give the LSTM networks in the Deep Learning extension a shot and see how it goes? These networks might capture the relations in your time series data. You can try several models to see which works best.
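
    (Outside the extension, the same idea takes only a few lines of Python; a minimal Keras sketch on toy data, with the look-back window and layer sizes as untuned placeholders:)

    ```python
    import numpy as np
    import tensorflow as tf

    WINDOW = 30  # look-back length in time steps (an untuned placeholder)

    def make_sequences(series, window=WINDOW):
        """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
        X = np.stack([series[i:i + window] for i in range(len(series) - window)])
        y = series[window:]
        return X[..., np.newaxis], y

    # Toy series standing in for a financial target variable.
    series = np.sin(np.arange(2000) / 20) + 0.1 * np.random.randn(2000)
    X, y = make_sequences(series)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=5, batch_size=64, verbose=0)
    ```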

    I am sure Alex @hughesfleming68 has done a lot of work in financial time series with RNNs.

    Decomposing time series signals into relevant spectral components is also a way to do it. Another method of decomposition, if your signal is nonlinear and nonstationary, is the Hilbert-Huang transform: transform the signal into time-frequency components and use those components for training. But this needs Matlab or other software.
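
    (For reference, this can also be done in Python; a rough sketch assuming the third-party PyEMD package, installed via pip install EMD-signal, for the empirical mode decomposition, with scipy's Hilbert transform for the time-frequency step:)

    ```python
    import numpy as np
    from PyEMD import EMD              # third-party: pip install EMD-signal
    from scipy.signal import hilbert

    # Toy nonstationary signal standing in for a price/return series.
    signal = np.sin(np.arange(2000) / 15) + 0.2 * np.random.randn(2000)

    # 1. Empirical mode decomposition: split the signal into intrinsic mode functions.
    imfs = EMD().emd(signal)

    # 2. Hilbert-transform each IMF to get instantaneous amplitude and frequency;
    #    these time-frequency components can then serve as training features.
    for i, imf in enumerate(imfs):
        analytic = hilbert(imf)
        amplitude = np.abs(analytic)                  # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) / (2.0 * np.pi)    # cycles per sample
        mean_freq = max(inst_freq.mean(), 1e-9)
        print(f"IMF {i}: mean period ~ {1.0 / mean_freq:.1f} samples")
    ```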


    Regards,
    Varun
    https://www.varunmandalapu.com/

    Be Safe. Follow precautions and Maintain Social Distancing
