Polynomial classification using SVM
I keep getting errors when using SVM to do polynomial classification. I am quite new to data analytics. Any help would be appreciated.
Answers
Hi and welcome to our community,
The message means that you are trying to make a prediction for a label (or "target") with more than two categorical values (which is called "polynominal" in RapidMiner), and the SVM you are using does not support this type of data. Try the operator "SVM (LibSVM)" instead, which can handle this.
You can check which data types are supported by a machine learning model by right-clicking on the operator and selecting "Operator Info". You will see a table describing the supported data types.
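If it helps to see the idea outside RapidMiner, here is a minimal sketch in Python, assuming scikit-learn is available (its SVC class wraps LibSVM and accepts labels with more than two classes):

```python
# Sketch of a multi-class ("polynominal") SVM, assuming scikit-learn;
# SVC is LibSVM-based and handles more than two classes via one-vs-one internally.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)          # 3 classes, i.e. a polynominal label
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = SVC(kernel="rbf")                  # LibSVM-backed, multi-class capable
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```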
Another useful resource is the following web page: http://mod.rapidminer.com
There you can describe your data, and it will show you which model types can be used on it.
Hope that helps,
Ingo
Ingo, thank you very much! This is very helpful. Following your instructions, there seems to be no logistic regression for polynominal labels. Am I missing something, or is there a way to use logistic regression for multi-class classification?
Hi,
That is correct. Logistic regression can only do binominal classification (i.e., two classes only). BUT you can always embed any binominal learner into the ensemble operator "Polynominal by Binominal Classification", which turns the polynominal classification problem into a set of binominal classification problems following either a 1-vs-1 or a 1-vs-all strategy.
Below is a process which shows you how to do that.
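For comparison, the same 1-vs-all / 1-vs-1 wrapping idea sketched in Python rather than as a RapidMiner process, assuming scikit-learn (its meta-estimators play the role of the ensemble operator here):

```python
# Sketch: wrap a two-class learner (logistic regression) so it can handle
# a multi-class label, either one-vs-all or one-vs-one. Assumes scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)           # 3-class label

one_vs_all = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
one_vs_one = OneVsOneClassifier(LogisticRegression(max_iter=1000)).fit(X, y)

print(one_vs_all.predict(X[:5]))
print(one_vs_one.predict(X[:5]))
```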
Hope this helps,
Ingo
Thanks, Ingo. This is very helpful. I have a better understanding after reading the rapidminer-studio-operator-reference document. Thanks again!
Ingo,
A similar question came up with Sample (Bootstrapping). There seems to be no way to define different multipliers for different classes. For example, class 1 has 10 data points and class 2 has 5 data points. I want to duplicate the class 2 data points so their total becomes 10, the same as class 1, but I cannot do that with Sample (Bootstrapping). I don't want to down-sample by just using the Sample operator with the ratio parameter, because the number of data points is already very small (i.e., 10), and I need to use all of the data. Is there another operator available, or should I manually duplicate class 2, roughly as in the sketch below? Thanks!
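(For reference, a minimal sketch of that manual duplication outside RapidMiner, assuming pandas and scikit-learn are available: it oversamples the smaller class with replacement until both classes have the same count.)

```python
# Sketch: duplicate the under-represented class until it matches the majority class.
import pandas as pd
from sklearn.utils import resample

df = pd.DataFrame({"x": range(15),
                   "label": ["class1"] * 10 + ["class2"] * 5})  # toy data

majority = df[df["label"] == "class1"]
minority = df[df["label"] == "class2"]

minority_up = resample(minority, replace=True,
                       n_samples=len(majority), random_state=42)

balanced = pd.concat([majority, minority_up])
print(balanced["label"].value_counts())   # both classes now have 10 rows
```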
@Shagu you should consider using the "Generate Weight" operator instead, which will generate weights to balance the classes and does not discard any data. It is roughly equivalent to duplicating the under-represented examples, but not as messy. You just have to check that whatever learning algorithm you are using can handle weighted examples. Unfortunately, the native RapidMiner logistic regression operator does not, but the very similar logistic regression operator from the Weka extension does. (You can check this by pressing F1 after selecting any learning operator in your process.)
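To make the weight-based approach concrete, here is a minimal sketch in Python, assuming scikit-learn (its per-example sample weights play the role of RapidMiner example weights):

```python
# Sketch: compute weights inversely proportional to class frequency and pass them
# to a learner that supports sample weights. Assumes scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_sample_weight

X = np.arange(15).reshape(-1, 1).astype(float)
y = np.array(["class1"] * 10 + ["class2"] * 5)

weights = compute_sample_weight(class_weight="balanced", y=y)  # class2 rows get 1.5, class1 rows 0.75
model = LogisticRegression().fit(X, y, sample_weight=weights)
print(dict(zip(y, weights)))
```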
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts
Thank you, Telcontar120. Since I work in an engineering field, data are much more limited than in finance or insurance, because every data point is costly to obtain. I feel Naive Bayes is the best model, because it is simple and stable when the number of data points is small. Is this just my intuition, or is there a mathematical theory behind it? Thanks again!
I am not really a statistical theoretician, so I can't say for sure. My experience is that which learning algorithm works best is highly contextual and depends on the dataset you are working with. Regardless of the specific algorithm chosen, standard model validation approaches such as cross-validation are an important part of ensuring that your final model is robust. Choosing a simpler final model will also generally help it remain robust over time.
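For what it's worth, a minimal cross-validation sketch in Python, assuming scikit-learn, using Naive Bayes as the example learner:

```python
# Sketch: estimate model performance with stratified cross-validation, which is
# more informative than a single train/test split when data are scarce.
from sklearn.datasets import load_iris
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(GaussianNB(), X, y, cv=cv)
print("mean accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```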
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts