Best way to spot examples in testing set that receive a wrong classification?
Hello! I have a dataset of 486 examples with 53 attributes, including a binominal target attribute (0/1). I use 80% for training and 20% for testing. Inside the X-Validation operator, the training side contains the Decision Tree operator nested in the Bayesian Boosting operator; the testing side contains the Apply Model operator connected to the Performance operator.
With Decision Tree alone, I get about 64% correct predictions on the testing set; with Bayesian Boosting, about 79%. In the results section, I can see a green column showing the prediction of the target attribute for all 486 examples.
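For reference, here is a minimal sketch of an analogous setup outside RapidMiner, in Python with scikit-learn. AdaBoost is used as a stand-in for Bayesian Boosting, and the file name, label column name, tree depth, and random seed are assumptions for the sketch, not taken from the original process:

```python
# Hypothetical analogue of the RapidMiner process: a boosted decision tree
# trained on 80% of the data and evaluated on the remaining 20%.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("dataset.csv")       # assumed file: 486 rows, 53 columns
X = data.drop(columns=["target"])       # assumed label column name: "target"
y = data["target"]                      # binominal target (0/1)

# 80% training / 20% testing, mirroring the described split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# AdaBoost over a shallow decision tree as a stand-in for Bayesian Boosting
model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3))
model.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```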
My questions are:
1. Is there a reason that the predictions shown are for all examples, rather than for the testing examples only?
2. What's the best way to spot and isolate the examples that are incorrectly predicted?
Many thanks!
Answers
You may replace X-Validation with X-Prediction (without using the Performance operator); then you get "realistic" predictions in the results perspective. There you can choose "wrong_predictions" to spot and isolate the examples that are incorrectly predicted.
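By way of illustration outside RapidMiner, the same "wrong_predictions" view can be sketched in Python: generate one out-of-fold prediction per example (the analogue of X-Prediction) and keep only the rows where prediction and label disagree. The file name, column names, model, and fold count here are assumptions for the sketch:

```python
# Sketch: isolate the incorrectly predicted examples using
# cross-validated predictions (analogue of RapidMiner's X-Prediction).
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

data = pd.read_csv("dataset.csv")          # assumed file and column names
X = data.drop(columns=["target"])
y = data["target"]

model = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3))

# Every example gets exactly one out-of-fold prediction, which is also why
# a prediction column can appear for all 486 rows rather than a 20% hold-out.
data["prediction"] = cross_val_predict(model, X, y, cv=10)

# Keep only the misclassified rows: the "wrong_predictions" view
wrong = data[data["prediction"] != data["target"]]
print(wrong)
```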