Attribute Question
Ghostrider
Member Posts: 60 Contributor II
in Help
I have two time series that I want to feed into a learning algorithm, probably start with a neural net or SVM. When I plot the time series, the gap or vertical space between the two lines is meaningful. However, should I make this an attribute? Or would the absolute position of each point be sufficient (the vertical space can be derived from absolute position)? Generally, how do I know when I should construct a synthetic / derived attribute?
Answers
Yes, if the gap is meaningful, you definitely should add a new attribute before transforming the series, e.g. by windowing. Things like this - for example also extracting additional descriptive features - usually help a lot, since only a few data mining schemes pick up the importance of implicit features like (x_t - y_t) without having them added to the data. Other extracted features often help as well, since they abstract from the actual absolute values.
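To make that concrete, here is a rough sketch in Python/pandas (not RapidMiner; the column names and window width are just made up for illustration) of adding the gap as an explicit attribute and then windowing the series:

```python
import pandas as pd

# Two aligned time series; column names are assumed for illustration.
df = pd.DataFrame({
    "x": [1.0, 1.2, 1.5, 1.4, 1.8, 2.0],
    "y": [0.8, 0.7, 1.1, 1.3, 1.2, 1.1],
})

# Make the implicit feature explicit: the vertical gap x_t - y_t.
df["gap"] = df["x"] - df["y"]

# Simple windowing: turn the last `width` values of each series into
# one example row (analogous to a windowing transformation).
def window(frame, width):
    rows = []
    for t in range(width, len(frame)):
        row = {}
        for col in frame.columns:
            for lag in range(width):
                row[f"{col}-{lag}"] = frame[col].iloc[t - 1 - lag]
        rows.append(row)
    return pd.DataFrame(rows)

examples = window(df, width=3)
print(examples.head())
```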
How to know? Well, just try it and check whether it improves your prediction performance. In general, many modern data mining schemes are good at giving unnecessary features a low weight (or you could add an additional feature selection step to support this), but they can hardly construct any implicit feature on their own.
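To illustrate the "just try it" part, here is a quick sketch (scikit-learn used purely as an assumed stand-in for whichever learner you actually use, with toy data) comparing cross-validated performance with and without the explicit gap feature:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
y = rng.normal(size=(500, 1))
label = (x[:, 0] - y[:, 0] > 0).astype(int)   # toy target driven by the gap

absolute_only = np.hstack([x, y])              # absolute positions only
with_gap = np.hstack([x, y, x - y])            # plus the explicit gap attribute

for name, features in [("without gap", absolute_only), ("with gap", with_gap)]:
    score = cross_val_score(DecisionTreeClassifier(random_state=0),
                            features, label, cv=5).mean()
    print(f"{name}: cv accuracy = {score:.3f}")
```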
Cheers,
Ingo
Which data mining schemes pick up the importance of implicit features? Seems like those would be the only ones I'd be interested in using. The only drawback, I'm guessing, would be over-fitting.
Well, the most frequently used candidate for this would probably be genetic programming. In my very early data mining days I used it a lot but quickly found it not robust and stable enough. Genetic programming is very likely to overfit, and you would have to embed some type of regularization in order to prevent this. I would recommend using a feature generation approach (like the operator YAGGA2) with a robust inner learner instead, and adding some regularization, for example by taking the number and / or the complexity of the generated features into account.
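Very roughly, the idea looks like this (a simplified Python sketch, not the actual YAGGA2 operator; the learner, penalty, and candidate functions are all assumptions): generate candidate derived features and score each candidate with a robust inner learner, minus a cost per generated feature.

```python
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def generate_candidates(X):
    """Derive simple candidate features from pairs of base attributes."""
    candidates = {}
    for i, j in combinations(range(X.shape[1]), 2):
        candidates[f"x{i}-x{j}"] = X[:, i] - X[:, j]
        candidates[f"x{i}*x{j}"] = X[:, i] * X[:, j]
    return candidates

def regularized_score(X, y, extra, penalty=0.01):
    """Cross-validated accuracy minus a cost per generated feature."""
    cols = [c.reshape(-1, 1) for c in extra.values()]
    data = np.hstack([X] + cols) if cols else X
    acc = cross_val_score(LogisticRegression(max_iter=1000), data, y, cv=5).mean()
    return acc - penalty * len(extra)

# Toy data: the target depends on the product of the first two attributes,
# which a linear learner cannot express without the derived feature.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

candidates = generate_candidates(X)
best = max(
    ({name: col} for name, col in candidates.items()),
    key=lambda extra: regularized_score(X, y, extra),
)
print("best single generated feature:", list(best))
```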
If you want to read more about this, I would recommend my PhD thesis. About 300 pages of fun stuff around these and related questions :-)
Cheers,
Ingo
I downloaded your thesis and it looks very good. I think you should share it someplace, maybe make a sticky thread, as I think others could benefit from reading it... I'm only 10 pages into it... Do you know of any other good machine learning resources for beginners?
Yes, I have been experimenting with genetic programming, using the ECJ project. I would actually expect GP to overfit a lot less if you prune / constrain the size of the GP tree. At least the over-fitting is controllable, unlike with a lot of other machine learning algorithms. Also, a big advantage of GP is that you can understand what was learned: it produces a readable parse tree. If I feed data into an ANN or SVM, I have no idea what it has actually learned.
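For example, something like this hand-rolled Python sketch (not ECJ's actual API, just an illustration of the kind of depth constraint I mean) bounds how large a generated expression tree can get:

```python
import random

OPERATORS = ["+", "-", "*"]
TERMINALS = ["x", "y", "1.0"]

def random_tree(max_depth):
    """Grow a random expression tree, forcing a terminal at the depth limit."""
    if max_depth == 0 or random.random() < 0.3:
        return random.choice(TERMINALS)
    op = random.choice(OPERATORS)
    return (op, random_tree(max_depth - 1), random_tree(max_depth - 1))

def to_string(tree):
    """Render the tree as a readable infix expression."""
    if isinstance(tree, str):
        return tree
    op, left, right = tree
    return f"({to_string(left)} {op} {to_string(right)})"

# Constraining max_depth bounds tree size, which limits how wildly GP can overfit.
random.seed(42)
print(to_string(random_tree(max_depth=3)))
```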
Thanks for your kind words. Well, I suppose my PhD thesis would hardly count as a good introduction ;D but we had a thread here in the forum a couple of months ago discussing recommendations for several books in the field; maybe those could serve as a good starting point:
http://rapid-i.com/rapidforum/index.php/topic,1837.msg7910.html
That's true, but this does not really help if you allow arbitrary functions at every position of the parse tree, since in many cases you will end up with a shallow tree containing every function you can think of, all mixed together. And it does not help with getting stable results: change the data only a bit and you will often end up with completely different results.
Hmm, I don't think so. Almost every learning scheme proposed during the last 20 years has some built-in regularization for controlling over-fitting. Of course it is a user parameter (an annoying fact, which serves as one of the major motivations of my PhD thesis), but nevertheless it can be controlled. Actually, the only popular learning scheme that comes to mind which does not really offer anything nice for this is neural networks - which is probably one of the major reasons (besides the loooooong runtimes...) why I don't use them often.
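For example, in a soft-margin SVM that regularization is exposed as the user parameter C (smaller C means stronger regularization). A quick scikit-learn sketch on synthetic data, with arbitrarily chosen values, just to show the knob:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=3,
                           random_state=0)

# C is the user-set regularization parameter: small C = strong regularization,
# large C = the model may fit the training data more aggressively.
for C in (0.01, 1.0, 100.0):
    score = cross_val_score(SVC(C=C), X, y, cv=5).mean()
    print(f"C={C}: cv accuracy = {score:.3f}")
```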
I completely agree, that really is a strong point of genetic programming! For an ANN you cannot really understand anything just from the model. In my opinion, things are different for SVMs, though...
Cheers,
Ingo