Implement this algorithm in RapidMiner
Hi, I want to implement an algorithm like this in RapidMiner, but I do not know how. Please guide me.
Answers
Based on your graph you will need a Read operator to load in your data, a Set Role operator to set your label, then a Sample operator, a Cross Validation (CV) operator, and a Stacking operator on the training side of the CV operator. You embed the different machine learners in the Stacking operator.
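Outside of RapidMiner, that chain of operators maps roughly onto scikit-learn like this. This is only a sketch of the idea: the file path "data.csv", the column name "label", and the choice of base learners are placeholder assumptions, not part of the process in the thread.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# "Read" operator: load the data (placeholder path)
data = pd.read_csv("data.csv")

# "Set Role" operator: mark the label column (placeholder name)
y = data["label"]
X = data.drop(columns=["label"])

# "Sample" operator: draw a subset of the examples
X = X.sample(frac=0.8, random_state=1)
y = y.loc[X.index]

# "Stacking" operator: base learners embedded under a meta learner
stack = StackingClassifier(
    estimators=[("nb", GaussianNB()),
                ("tree", DecisionTreeClassifier()),
                ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000))

# "Cross Validation" operator wrapped around the stacked model
print(cross_val_score(stack, X, y, cv=10).mean())
```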
Hi, thank you for your reply.
For the sampling step, should I use the Bootstrapping operator or Bagging?
This error occurred for the operator I used. What is this error, and what should I do?
Well, that depends on what you want to do with sampling as you balance your classes. Is it better to bootstrap (aka upsample) or downsample? Have you considered weighting them using a Generate Weight (Stratification) operator?
Your other error means that you can't deliver an example set (EXA) from that port; rather, you need an operator that delivers a model (MOD), something like a Naive Bayes or Decision Tree, etc.
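On the sampling question, here is a rough pandas sketch of the two resampling options: bootstrapping (upsampling) the minority class versus downsampling the majority class. The labels and class sizes are made-up placeholders, not the thread's data.

```python
import pandas as pd

# Toy unbalanced data: 90 negatives, 10 positives (placeholder labels)
df = pd.DataFrame({"label": ["neg"] * 90 + ["pos"] * 10,
                   "x": range(100)})
minority = df[df["label"] == "pos"]
majority = df[df["label"] == "neg"]

# Bootstrap / upsample: resample the minority class with replacement
# until it matches the majority class size
up = pd.concat([majority,
                minority.sample(len(majority), replace=True, random_state=0)])

# Downsample: cut the majority class down to the minority class size
down = pd.concat([minority,
                  majority.sample(len(minority), random_state=0)])

print(up["label"].value_counts())
print(down["label"].value_counts())
```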
I want to build an optimal model that achieves higher classification accuracy on unbalanced data by combining two ensemble methods, bagging and boosting, and using a genetic programming model as the learning algorithm for classifying the unbalanced data. If I use bagging just for the sampling and hand the sampled data to boosting for training, the weighting should produce a better model.
I want to use genetic programming to improve this model. How do you think I can build it? Is this idea feasible?
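For what it's worth, the bagging-around-boosting idea can be sketched outside RapidMiner roughly like this. A shallow decision tree stands in for the genetic-programming learner purely as an assumption, since scikit-learn ships no GP classifier; in RapidMiner you would embed your GP model at that spot instead.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic unbalanced data standing in for the real dataset
X, y = make_classification(n_samples=500, weights=[0.9, 0.1],
                           random_state=0)

# Inner ensemble: boosting reweights hard examples between rounds
booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                             n_estimators=50)

# Outer ensemble: each bagging round draws a bootstrap sample and
# trains one boosted model on it, so bagging supplies the sampling
# and boosting the weighted training, as proposed above
model = BaggingClassifier(booster, n_estimators=10)
print(cross_val_score(model, X, y, cv=5).mean())
```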
Yup, you can do that in RapidMiner. Post your process when you're ready and we can troubleshoot.
Thank you. Should I post my process here or email it to you?
Please post it to the thread, thanks.
Hi, Mr. Ott.
Is my process correct?
Do you think it matches the model I described?
Is the sampling done in the same way?
How can I give the minority class (positive) more weight so that it counts for more in the prediction?
I'm guessing that the positive class is the minority class. I would handle it by overweighting the minority class and underweighting the majority class, something like this.
Then I would use a Cross Validation (not a Split Validation) inside the Optimize Weights operator.
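As a rough sketch of the weighting idea: make misclassifying the minority class cost more than misclassifying the majority class, then cross-validate each candidate weighting. The 5:1 ratio and the synthetic data are illustrative assumptions, stand-ins for what Optimize Weights would search over.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic unbalanced data: ~90% class 0 (majority), ~10% class 1 (minority)
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1],
                           random_state=0)

# Overweight the minority class, underweight the majority; the 5:1
# ratio is illustrative -- Optimize Weights would search for this value
clf = DecisionTreeClassifier(class_weight={0: 1.0, 1: 5.0})

# Cross Validation (not a single Split Validation) gives a stabler
# estimate of each candidate weighting's performance
print(cross_val_score(clf, X, y, cv=10).mean())
```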
I used cross validation, but why is the number of wrong predictions in the confusion matrix not equal to the error count reported by Optimize Weights (the wrong positive and negative predictions)? Or am I wrong?
It does not match the confusion matrix in the visualization. How can this be corrected?
How can I get the tree out of the output of this process?