
Evaluation of Classification models

Muhammed_Fatih_ Member Posts: 93 Maven
edited November 2019 in Help
Hello everyone,

Is there a way within RapidMiner to train and test several selected classification models in parallel and then return the best performance result of each classification model? The goal of my research is to evaluate how classification models behave with regard to determining the best performance.
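
To make the idea concrete, the kind of benchmark I have in mind looks roughly like the following sketch in Python / scikit-learn (outside RapidMiner; the dataset, the three models and the accuracy metric are only placeholders for illustration):

    # Rough sketch: cross-validate several classifiers and report the best mean accuracy.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

    models = {
        "Decision Tree": DecisionTreeClassifier(random_state=0),
        "Naive Bayes": GaussianNB(),
        "Logistic Regression": make_pipeline(StandardScaler(),
                                             LogisticRegression(max_iter=1000)),
    }

    results = {}
    for name, model in models.items():
        # 10-fold cross-validation; store the mean accuracy per model
        scores = cross_val_score(model, X, y, cv=10, scoring="accuracy")
        results[name] = scores.mean()

    for name, score in sorted(results.items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.3f}")
    print("Best model:", max(results, key=results.get))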

Thank you for your answers!

Best regards, 

Fatih

Answers

  • Muhammed_Fatih_ Member Posts: 93 Maven
    Hi @sgenzer,

    thank you for your answer, which was very helpful :) Is there also a possibility to perform the ROC comparison with cross-validation?


  • lionelderkrikor RapidMiner Certified Analyst, Member Posts: 1,195 Unicorn
    Hi @Muhammed_Fatih_,

    1. Inside Auto Model, by default, the model(s) are not validated with a cross-validation but with a multi-hold-out-set validation.
    See the documentation on the "Results" screen:


    You can also open the generated process(es) after executing Auto Model to understand exactly how the performance
    of the model(s) is calculated.

    2. If you want to compare ROC curves with a cross-validation, you can use the Compare ROCs operator.
    Simply put the models you want to benchmark inside this operator (a rough code sketch of the same idea follows after this post).

    Hope this helps,

    Regards,

    Lionel
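
For illustration, the ROC comparison with cross-validation described above can be sketched roughly as follows in Python / scikit-learn (the dataset, the two models and the fold count are placeholders, and this is not the internal implementation of RapidMiner's Compare ROCs operator): out-of-fold class probabilities from a 10-fold cross-validation are used to draw one ROC curve per model.

    # Rough sketch: overlay cross-validated ROC curves for several classifiers.
    import matplotlib.pyplot as plt
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_curve, auc
    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)  # placeholder dataset

    models = {
        "Naive Bayes": GaussianNB(),
        "Decision Tree": DecisionTreeClassifier(random_state=0),
    }

    for name, model in models.items():
        # Probabilities predicted on the held-out fold of each CV split
        proba = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
        fpr, tpr, _ = roc_curve(y, proba)
        plt.plot(fpr, tpr, label=f"{name} (AUC = {auc(fpr, tpr):.3f})")

    plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()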
