One-class SVM performance problem
Dear all,
I have been playing with the RapidMiner one-class LibSVM, but I couldn't get any negative predictions, only 100% confidence_TRUE for any SVM parameters.
Does anybody know how to get correct results from a one-class SVM in RM?
I would appreciate your response.
Kind regards,
Danny Seo.
Answers
It is quite irritating that you get 100% confidence for the class with every parameter setting. I was able to get quite reasonable results easily using generated data, so maybe there is something wrong in your process setup or your parameters. Here is the RM5 code for the process I just set up; maybe you can use it as a guide ... Kind regards,
Tobias
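(The original RM5 process XML referred to above is not reproduced in this archive. As a rough illustration of the same idea, here is a minimal sketch that assumes scikit-learn's OneClassSVM in place of the RapidMiner operator; the generated data and the nu/gamma values are illustrative assumptions, not the settings from the thread.)

# One-class SVM on generated data: with a reasonable parameter setting,
# predict() should return both +1 (inside the learned region) and -1 (outlier).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# "Normal" training data: two clusters around (2, 2) and (-2, -2).
X_train = np.concatenate([
    rng.normal(loc=2.0, scale=0.3, size=(100, 2)),
    rng.normal(loc=-2.0, scale=0.3, size=(100, 2)),
])

# Test data: a mix of points near the clusters and points spread far away.
X_test = np.concatenate([
    rng.normal(loc=2.0, scale=0.3, size=(20, 2)),
    rng.uniform(low=-8.0, high=8.0, size=(20, 2)),
])

model = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.5)
model.fit(X_train)

pred = model.predict(X_test)
print("inliers :", int((pred == 1).sum()))
print("outliers:", int((pred == -1).sum()))

With a setup like this, both prediction values show up, which is the behaviour one would also expect from the RM process.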
Thank you for your response.
I have tested your code as follows (I just added some test data generation).
However, it always results in "true" predictions, even when the test data is generated between the bounds 100 and 200.
How can I classify outliers?
(Is it possible by considering the confidence(true) attribute?)
Thanks.
Kind regards,
Danny.
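(Again, this is not the RapidMiner process from the thread, only a hedged sketch in scikit-learn of the thresholding idea behind the question: the decision score plays the role of the confidence(true) value, and the training range, test range, and parameters are assumptions.)

# Train on data in a small range, then apply the model to points generated
# between 100 and 200; points that far outside the training data should be
# flagged as outliers.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(1)

# Training data roughly in [0, 10].
X_train = rng.uniform(low=0.0, high=10.0, size=(200, 1))

# Test data deliberately generated far outside, between 100 and 200.
X_test = rng.uniform(low=100.0, high=200.0, size=(50, 1))

model = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.1)
model.fit(X_train)

# decision_function() is positive inside the learned region and negative
# outside; predict() simply thresholds it at 0, giving -1 for every
# out-of-range test point here.
scores = model.decision_function(X_test)
pred = model.predict(X_test)
print("min/max score:", scores.min(), scores.max())
print("all flagged as outliers:", bool((pred == -1).all()))

In other words, classifying outliers comes down to thresholding that score, which is essentially what thresholding the confidence(true) attribute would do.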
For example, a one-class SVM can be used to train an outlier model using two classes of labeled data. Although model training does not use the labels when generating the model, it should be able to differentiate (predict) between the inside and the outside of the one-class model. Therefore RM should be able to take a binomial class label and perform prediction for the two label classes.
-Gagi
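(A small illustration of this point, as a scikit-learn sketch with made-up data rather than an RM process: the labels are ignored during training and only used afterwards to check that the in/out predictions line up with the binomial class.)

# Fit the one-class model on the whole labeled set while ignoring the labels,
# then compare the resulting "in"/"out" predictions against the label.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Two labeled classes: "normal" points near the origin, "outlier" points far away.
X_normal = rng.normal(loc=0.0, scale=1.0, size=(190, 2))
X_outlier = rng.uniform(low=8.0, high=12.0, size=(10, 2))
X = np.concatenate([X_normal, X_outlier])
label = np.array(["normal"] * 190 + ["outlier"] * 10)

# Training ignores the label entirely.
model = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.2)
model.fit(X)

# Map the numeric one-class output (+1 / -1) to a binomial-style prediction.
pred = np.where(model.predict(X) == 1, "in", "out")

for true_class in ("normal", "outlier"):
    mask = label == true_class
    print(true_class, dict(zip(*np.unique(pred[mask], return_counts=True))))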
In the log I can find lots of the following entries when using one-class in RM 5.0.3:
I remember the "NaN" values from the Java libsvm, where they indicate the classification result of an outlier, so it is definitely processed within RM. Would it be very difficult to add some kind of binominal prediction functionality where the model result "NaN" is mapped to a prediction label like "out" and the result "1" is mapped to the prediction "in"? I can offer to contribute some code in this case if you give me a hint as to which RM class these changes are required in, and if it's not too time consuming.
Greetings,
Harald
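(This is not the actual Java change inside LibSVMModel; it is only a small Python sketch of the mapping described above, with a hypothetical helper name: a raw one-class result of 1 becomes the prediction "in", and anything else, including the NaN from the log, becomes "out".)

import math

def one_class_to_label(raw_prediction: float) -> str:
    """Map a raw one-class LibSVM output to a binominal prediction label."""
    if math.isnan(raw_prediction):
        return "out"  # NaN marks an outlier in the logged results
    return "in" if raw_prediction == 1.0 else "out"

# Example raw values as they might appear in the log.
for value in (1.0, -1.0, float("nan")):
    print(value, "->", one_class_to_label(value))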
You might write a feature request in our bug tracker, but our schedule is quite full, so if you need it really fast, you could very well contribute the code. I would start searching in the LibSVMModel class in the com/rapidminer/operator/learner/functions/kernel package.
Greetings,
Sebastian
I think I will give it a try and implement it myself! My dev environment is already up and running, and I've located the proper part in the code (thanks for the tip). When using the one-class classification mode, the results are either "-1" or "1" as expected, but the probabilities aren't calculated by this function. So I'm thinking about implementing an optional parameter for the libsvmtype one-class to switch between the current and a new classification behavior, in order to maintain backwards compatibility. Currently I still need a bit more understanding of how the data structures (esp. Attribute and Example) work together. Also, the NaN log message is not directly generated by libsvm; I'm sure it has something to do with the label attribute, and I'll research this too.
If it's working and it's not too dirty, I'd gladly contribute the code.
Greetings, Harald
UPDATE: a patch is available at http://rapid-i.com/rapidforum/index.php/topic,1746.0.html