
Which Validation operator should be used for model evaluation?

glybentta Member Posts: 6 Contributor I
edited November 2018 in Help

The accuracy given by the Performance Vector differs between Split Validation and Cross Validation, with Cross Validation showing a slight improvement. Which validation operator is preferred for model evaluation?

Best Answer

  • Thomas_Ott RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,761 Unicorn
    Solution Accepted

    There are big differences in how the Split Validation and Cross Validation operators work, but the intent is the same: train, test, and measure the performance of a model. The Cross Validation operator gives a more honest estimate of how the model would perform on unseen data. This is why, in the accuracy measure for a cross-validated model, you might see 70.00% (+/- 5%). The +/- 5% is essentially one standard deviation around the average 70% accuracy.

     

    Go check out Ingo's paper on model validation to learn more: https://rapidminer.com/resource/correct-model-validation/
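
    To make the +/- figure concrete outside RapidMiner, here is a minimal sketch in Python/scikit-learn rather than the actual operators; the Iris data, decision tree, and 70/30 split are illustrative assumptions, not anything from the thread:

    ```python
    # Sketch (scikit-learn, not RapidMiner): contrast a single train/test
    # split with 10-fold cross validation on an illustrative data set.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import cross_val_score, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(random_state=0)

    # Split validation: one partition, one accuracy number
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    split_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

    # Cross validation: ten accuracies, reported as mean +/- one standard deviation
    cv_scores = cross_val_score(clf, X, y, cv=10)
    print(f"Split validation: {split_acc:.3f}")
    print(f"Cross validation: {cv_scores.mean():.3f} (+/- {cv_scores.std():.3f})")
    ```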

Answers

  • khannadh Member Posts: 10 Contributor II

    Hi Thomas,

     

    I read the article and made a simple process using the Iris data to address the parameter-optimization bias.

    Just wanted to check if I've done the nesting for the two validations correctly. 

     

    Could you please let me know?

     

    Thank You,

    Dhruve

  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @khannadh - I saw your note but mainly wanted to tag @Thomas_Ott so that it gets his attention :)

     

    So your question is a good one. Your setup was almost correct except that you need to specify the name map parameters in the Set Parameters operator:

     

    [Attached screenshot: Screen Shot 2018-03-16 at 11.19.55 AM.png]

     

    Scott

     

  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    I would be very curious what others think on this very important issue, as setups have varied over the years. @Telcontar120 @mschmitz @yyhuang @Pavithra_Rao?


    Scott

     

  • khannadh Member Posts: 10 Contributor II

    Thanks Scott.

    I appreciate the help.

     

    The Set Parameters step is the only one I don't understand.

    What exactly is the operator doing in that step?

  • khannadh Member Posts: 10 Contributor II

    @sgenzer

    Also, when I set the parameter according to your screenshot, I still get a warning sign, so I'm not sure the problem is fixed.

    I've attached a screenshot.

     

    Do you know why this is happening?

     

  • Thomas_Ott RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,761 Unicorn

    @khannadh at run time with larger data sets, this setup could become slow. I would just put the Cross Validation operator inside the Optimize Parameters operator instead of the other way around. This way, one full 10-fold run becomes a single parameter-optimization iteration.
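
    A minimal sketch of that arrangement in Python/scikit-learn terms, where GridSearchCV plays the role of Optimize Parameters with a 10-fold cross validation inside it; the data set and parameter grid are illustrative assumptions:

    ```python
    # Sketch (scikit-learn, not RapidMiner): the cross validation sits inside
    # the parameter search, so each candidate is scored by one full 10-fold run.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    param_grid = {"max_depth": [2, 3, 4, 5], "criterion": ["gini", "entropy"]}

    search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=10)
    search.fit(X, y)
    print(search.best_params_, f"{search.best_score_:.3f}")
    ```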

  • khannadh Member Posts: 10 Contributor II

    I changed some port connections and that seems to have removed the problem.

    But I'd still like to understand what exactly is going on.

     

    I have attached the screenshot and process.

    If someone could explain, that would be great.

     

    Thank You,

    Dhruve

     

     

  • Telcontar120 RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn

    I tend to agree with @Thomas_Ott here. While I understand the theoretical arguments (at least on some level) in favor of the double-nesting (cross-validation inside Optimize Parameters inside cross-validation; see the sketch after this post), I don't find that in practice there is a significant difference or advantage to this solution. But as Tom says, it can lead to significantly longer run times with larger data sets. I'll also point out that the double-nesting approach is not used in RapidMiner's Auto Model processes either.

     

    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
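
    For reference, a minimal sketch of the double-nesting described above, in Python/scikit-learn terms: an inner parameter search wrapped in an outer cross validation. The data set and grid are illustrative assumptions, and the comment shows why run time grows so quickly:

    ```python
    # Sketch (scikit-learn): nested cross validation. The inner 10-fold search
    # picks parameters; the outer 5-fold loop estimates how the whole
    # optimization procedure performs on data it never saw.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    inner = GridSearchCV(DecisionTreeClassifier(random_state=0),
                         {"max_depth": [2, 3, 4, 5]}, cv=10)
    # 5 outer folds x (4 candidates x 10 inner folds) model fits, plus refits
    outer = cross_val_score(inner, X, y, cv=5)
    print(f"Nested CV accuracy: {outer.mean():.3f} (+/- {outer.std():.3f})")
    ```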
  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    ok thanks @Thomas_Ott @Telcontar120, that was my feeling as well, but I appreciate the confirmation. So @khannadh, just to be crystal clear: the approach shown in that whitepaper is the "gold standard" but is rarely used in practice due to the issues pointed out above.

     

    Now to answer your questions...

     

    - The "Set Parameters" literally takes the input parameters on the left (the gray "par" nub) and pushes them into the parameters for another operator in your process by its name. In your process, the name of the operator to which you want to push those parameters is called "Decision Tree (2)", and your Set Parameters operator is, in your process, named "Set Parameters". Hence, in the name map, I put "Set Parameters" in the left side (under "set operator name") and "Decision Tree (2)" on the right side (under "operator name"). That's what that operator does.

     

    - Now, as @Telcontar120 and @Thomas_Ott implied, none of us really does this. To be honest, that's the first time I have used "Set Parameters" in a very long time (and I'm on RapidMiner every day). The more "normal" and much simpler way to do this (and the way I think we all do it) is simply putting Cross Validation inside "Optimize Parameters (Grid)". Done.
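
    Returning to the Set Parameters question for a moment, here is a rough code analogy in scikit-learn terms; the parameter values below are hypothetical, not taken from the actual process:

    ```python
    # Rough analogy (scikit-learn): push previously obtained parameters into a
    # model object by name, the way Set Parameters pushes a parameter set into
    # the operator named "Decision Tree (2)".
    from sklearn.tree import DecisionTreeClassifier

    best_params = {"max_depth": 4, "criterion": "entropy"}  # hypothetical values

    tree = DecisionTreeClassifier(random_state=0)
    tree.set_params(**best_params)  # tree now carries the optimized settings
    ```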

     

    The only other thing that many of us do, to make sure the performance is a true measure, is an initial split of the data, so that you measure performance against an unseen "testing" set. Like this:

     

    [Attached screenshot: new-01.png]

    I usually do a 70/30 split, but this often depends on who's doing it and what the data set is like.
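
    A minimal sketch of that layout in Python/scikit-learn terms: hold out 30% first, optimize with cross validation on the remaining 70%, then score once on the untouched 30%. The data set and grid are illustrative assumptions:

    ```python
    # Sketch (scikit-learn): the final accuracy comes from data that the
    # parameter optimization never touched.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    search = GridSearchCV(DecisionTreeClassifier(random_state=0),
                          {"max_depth": [2, 3, 4, 5]}, cv=10)
    search.fit(X_tr, y_tr)  # optimize on the 70% only
    print(f"Hold-out accuracy: {search.score(X_te, y_te):.3f}")
    ```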

     

    Good luck!


    Scott

     

     

     
