[SOLVED] Creating ensemble methods (bagging)
Hi, I am trying to create a hybrid classifier using two different classification algorithms. I want to use bagging, so the idea is to split the dataset into multiple subsets, classify each with one kind of classifier, and then classify the combined results with another classifier.
I have a dataset. I use the Bagging operator in RapidMiner with k-NN inside it. I manage to get classification results from each k-NN. How can I collect those results and feed them as a dataset to the next classifier (probably Random Forest)?
Answers
If I understand you correctly, what you want to do is commonly referred to as "stacking". We have a corresponding Stacking operator in RapidMiner. Please have a look at its documentation to see if it fits your needs.
Best regards,
Marius
First of all, I need to test this hybrid classifier using 10, 100, and 200 k-NNs in the first level, and I don't know how to do that. I think I need to use the Split and Bagging operators.
Secondly, I need to use SVM in the second level, but I get the error "The operator SVM does not have sufficient capabilities for the given data set: binomial attributes not supported".
If I replace SVM with k-NN, it works perfectly. But I need SVM to be there.
**********************************************************************************
EDIT
I found a workaround to my problem. I swapped the positions of k-NN and SVM and it works fine. Of course, for future work, it would be nice to know how to solve this problem properly.
This may be an old thread, but in case anyone else stumbles upon it like I did, I thought I should answer what I can.
The k-NN model creates a new binomial prediction attribute that is appended to your dataset. SVM cannot handle binomial attributes, and that is why it throws that error. The algorithms placed in the Base Learner window (left side) of the Stacking operator will always create these new binomial attributes, so the algorithm in the Stacking Model Learner window (right side) must have binomial attribute capabilities.
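To make the mechanics concrete outside of RapidMiner, here is a minimal sketch of the same stacking setup in Python with scikit-learn (my own illustration, not part of the original thread): several k-NN base learners are trained on bootstrap samples, their predictions become the new attribute columns, and those columns are encoded as plain 0/1 integers so that an SVM meta-learner can consume them. The dataset and all parameter choices below are assumptions for demonstration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.RandomState(0)

# Synthetic binary-classification data stands in for the poster's dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# First level: bagged k-NN learners (the poster wanted to try 10, 100, 200).
n_learners = 10
learners = []
for _ in range(n_learners):
    idx = rng.randint(0, len(X_train), len(X_train))  # bootstrap sample
    knn = KNeighborsClassifier(n_neighbors=5)
    learners.append(knn.fit(X_train[idx], y_train[idx]))

# Each k-NN contributes one prediction column. Because the predictions are
# already 0/1 integers here, the SVM never sees nominal/binomial attributes,
# which is exactly the conversion RapidMiner's SVM was missing.
meta_train = np.column_stack([m.predict(X_train) for m in learners])
meta_test = np.column_stack([m.predict(X_test) for m in learners])

# Second level: SVM trained on the base learners' predictions.
svm = SVC().fit(meta_train, y_train)
print("stacked accuracy:", svm.score(meta_test, y_test))
```

In RapidMiner terms, the equivalent fix would be to convert the binomial prediction attributes to numerical ones (e.g. with a Nominal to Numerical operator) before they reach the SVM.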
Hi KellyM: Thanks so much for your post!
Best wishes, Michael Martin