Classification evaluation of training and testing data
Hello,
I train a Naive Bayes model using a split validation (first the Naive Bayes training, then application of the model and evaluation by the Performance operator). The result is one performance window with the results for the testing data.
But, as far as I know, the software should return two performance windows: one with the results for the training data and one for the testing data. How can I see whether my model is overfitting, or compare performances, if I only see the results for the testing data without the training performance to compare against?
Thank you
Answers
You can apply the model to the training set immediately after it is created, determine its performance, and pass that to the outside using the "through" connections from inside the Split Validation operator.
I made a simple example.
regards
Andrew
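The same idea, evaluating the model on both the training and the testing split so the two performances can be compared, can be sketched outside RapidMiner. Below is a minimal scikit-learn illustration (an analogue, not the RapidMiner process itself; the Iris dataset and the 70/30 split are arbitrary choices for the example):

```python
# Train a Naive Bayes model with a split validation and report performance
# on BOTH the training and the testing data, so an overfitting gap is visible.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Split validation: 70% training, 30% testing (arbitrary example ratio)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y)

model = GaussianNB().fit(X_train, y_train)

# The two "performance windows": one for training data, one for testing data
train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"training accuracy: {train_acc:.3f}")
print(f"testing accuracy:  {test_acc:.3f}")
```

A large gap between a high training accuracy and a much lower testing accuracy is the classic sign of overfitting, which is exactly why having both performances side by side is useful.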