Error range of a classifier
How can I get a box plot of the errors of my classifier in a cross-validation test? For example, linear regression predicts my label, and I also have the real values. I want to see the ranges of errors of different classifiers so I can choose the one with the smallest range.
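For reference, the idea behind such a box plot can be sketched outside RapidMiner in plain Python: collect the out-of-fold errors of each model during cross-validation, then feed one error array per model to a box plot. This is a minimal numpy sketch with synthetic data and hypothetical helper names, not RapidMiner code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (replace with your real features and label)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=100)

def cross_val_errors(X, y, fit_predict, k=10):
    """Collect out-of-fold errors (predicted - actual) across k folds."""
    idx = np.arange(len(y))
    errors = []
    for test in np.array_split(idx, k):
        train = np.setdiff1d(idx, test)
        pred = fit_predict(X[train], y[train], X[test])
        errors.append(pred - y[test])
    return np.concatenate(errors)

def linreg(X_tr, y_tr, X_te):
    # Least-squares linear regression with an intercept column
    A = np.column_stack([X_tr, np.ones(len(X_tr))])
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.column_stack([X_te, np.ones(len(X_te))]) @ coef

def mean_baseline(X_tr, y_tr, X_te):
    # Trivial baseline: always predict the training mean
    return np.full(len(X_te), y_tr.mean())

err_by_model = {
    "linear": cross_val_errors(X, y, linreg),
    "mean baseline": cross_val_errors(X, y, mean_baseline),
}

# One error array per model is exactly what a box plot needs, e.g.:
# import matplotlib.pyplot as plt
# plt.boxplot(err_by_model.values(), labels=err_by_model.keys())
# plt.show()
```

The model whose box (the middle spread of its errors) is narrowest is the one with the smallest error range in the sense described above.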
Best Answer
Telcontar120 RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn
I think you are misunderstanding the output of the cross-validation performance, which I just commented on in the other thread, so I won't repeat it here. Your model from cross-validation is the model built on the entire dataset. With cross-validation you are not choosing between different models at any point. The performance estimate and its variance are simply a tool to help you understand how that model might perform on unseen data in the future.
Answers
I'm not sure I understand, but I will nevertheless try to provide some answer elements:
I think that the "Auto-Model" feature of RapidMiner can help you.
After submitting your dataset to the "Auto-Model" tool and following the guided steps, you reach the results screen,
where you can compare the performance metrics (after selecting one of them) of your classifiers (you have to click first on Comparison -> Overview).
The results are presented like this:
Then you can export the plot by right-clicking on it and choosing, for example, "Save as", etc.
You can also click on ROC Comparison to compare the ROC curves of your different classifiers.
I hope it helps,
Regards,
Lionel
Warning: you spoke of "cross-validation" in your post. In the Auto-Model feature, the performance metrics of the classifiers
are based on a split validation (with a default ratio of 0.8 / 0.2).
First, I could not find the results you show in the picture. After running the designed model, the results section has multiple tabs depending on what you want to extract from the model, such as test, performance, etc. Could you show me step by step?
Secondly, my question was different. To elaborate: after calculating the error (the difference between the predicted value and the actual value), an important question is how large the range of that error is. There are multiple ways to measure it. The most basic formula is simply Max Error - Min Error; however, it is not a good measure, since it is strongly affected by noise. Hence, some people suggest using a 90% confidence interval (by setting alpha = 0.9), meaning that with 90% confidence the error lies between two numbers computed by a formula. What is the formula here?
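One common reading of that "90% range" is the empirical interval between the 5th and 95th percentiles of the errors, which contains the middle 90% of them and is much less sensitive to outliers than max minus min. A minimal numpy sketch (the errors array is hypothetical):

```python
import numpy as np

# Hypothetical errors (predicted - actual) from one classifier's run
errors = np.array([-0.4, 0.1, 0.3, -0.2, 0.0, 0.5, -0.1, 0.2, 3.0, -0.3])

# Naive range: max - min, dominated by the single outlier (3.0)
naive_range = errors.max() - errors.min()

# 90% empirical interval: the 5th and 95th percentiles of the errors
low, high = np.percentile(errors, [5, 95])
robust_range = high - low
```

With one outlier in the sample, the percentile-based range comes out noticeably smaller than the naive max-minus-min range, which is exactly the robustness being asked about.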
I understood that you want to compare the performances of different classifiers. To perform this task
based on a "Split Validation" of the models, "Auto-Model" is a suitable tool.
First, you have to click on the "Auto Model" button at the top of the screen and submit your dataset; at the end
you get the "results" screen I showed in my previous post.
However, I don't know how to calculate the 90% confidence interval in RapidMiner.
I hope it helps,
Regards,
Lionel
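Outside RapidMiner, a common parametric alternative to empirical percentiles is the normal-approximation interval mean ± z·std, with z ≈ 1.645 for a two-sided 90% level. A sketch assuming the errors are roughly normally distributed, with hypothetical values:

```python
import numpy as np

# Hypothetical errors (predicted - actual) collected from cross-validation
errors = np.array([-0.4, 0.1, 0.3, -0.2, 0.0, 0.5, -0.1, 0.2, 0.4, -0.3])

# Normal-approximation 90% interval for a single error:
# mean +/- z * std, with z ~= 1.645 for a two-sided 90% level
z = 1.645
mean, std = errors.mean(), errors.std(ddof=1)
low, high = mean - z * std, mean + z * std
```

If the errors are heavy-tailed or skewed, the percentile-based interval is usually the safer choice, since this formula leans on the normality assumption.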