Gradient booster versus neural net
Hi,
I'm concluding an exercise on time series. So far I have explored several approaches, ranging from naive models to STL decomposition, Holt-Winters, and ARIMA, which are classical time-series models. I would now like to explore real machine learning models. I have seen a RapidMiner tutorial on windowing that applies a gradient booster.
1- How does this work, precisely? (See the sketch below.)
2- What is the difference compared with, e.g., a neural net model?
3- How do such ML models differ from the regular time-series models?
I'm copying @mschmitz for reference, following our recent discussion.
Thank you in advance,
Bart
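For reference, here is a minimal sketch of the windowing idea outside of RapidMiner, using pandas and scikit-learn's GradientBoostingRegressor as stand-ins for the RapidMiner operators; the series values, window size, and column names are purely illustrative. Windowing turns the series into an ordinary example table (last few observations as features, next observation as label), so that any learner, such as a gradient booster, can be trained on it.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative monthly series (values and dates are made up).
series = pd.Series(
    [112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118],
    index=pd.date_range("2023-01-01", periods=12, freq="MS"),
)

# Windowing: the last `window` observations become the features
# and the next observation becomes the label.
window = 3
frame = pd.DataFrame({f"lag_{i}": series.shift(i) for i in range(window, 0, -1)})
frame["label"] = series
frame = frame.dropna()

X, y = frame.drop(columns="label"), frame["label"]

# Any ordinary learner can now be trained on the windowed table;
# here, a gradient-boosted ensemble of regression trees.
model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1)
model.fit(X, y)

# One-step-ahead forecast from the most recent window.
last_window = pd.DataFrame([series.iloc[-window:].to_numpy()], columns=X.columns)
print(model.predict(last_window))
```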
Answers
- Re the CNN approach: is there an operator that I can look at and experiment with for this? What is it called?
- Per the LSTM approach: exactly the same question.
- Can these approaches be combined somehow and, if so, can you show a newbie like myself a simple diagram of how that might look? (See the sketch after this list.)
- Lastly, do we internally have the option to work with either RL or GRU approaches?
Thank you & good morning! - Richard