Performance (cost) sample not behaving as expected
I was looking into the Performance (Cost) operator. It comes with a tutorial that applies Naive Bayes to the Golf dataset using split validation. The expected outcome is that 1 of 4 items is misclassified. However, when I run it, all items are misclassified, as follows (play -> prediction):
yes -> no, no -> yes, yes -> no, yes -> no.
My colleague did not get this result. I am running this on an AMD Ryzen 5 3600 with RapidMiner 9.8.001.
I did not change any of the parameters in the tutorial.
I also rebuilt the model from scratch, which gave the same results.
Best Answer
lionelderkrikor:
Hi @MaartenK,
I'm only able to reproduce what you observe if I check use local random seed and set local random seed = 1992 in the parameters of the Split Validation operator.
Otherwise, if use local random seed is unchecked, I indeed get 25% of the sample misclassified, like your colleague.
So, are you sure that you have not checked use local random seed in the parameters of the Split Validation operator?
Regards,
Lionel
Answers
That must be it. 1992 was the default local random seed in previous versions of RapidMiner, and I set it as the default to be able to reproduce the results from my thesis. The current default is 2001. If I use that, it works as described in the help.
Still interesting that this generates such different results (4 classification errors vs. 1). It is probably because the Golf dataset is very small.
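The seed sensitivity can be illustrated outside RapidMiner. Below is a minimal Python sketch (an assumption for illustration only, not RapidMiner's actual split code): with just 14 rows, as in the Golf dataset, changing the random seed of a 70/30 split changes which handful of rows lands in the test set, so the error count can easily jump from 1 to 4.

```python
import random

# Stand-in for the 14 examples of the Golf dataset (illustrative only).
rows = list(range(14))

def split(seed, ratio=0.7):
    """Shuffle with the given seed and cut into train/test partitions."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * ratio)
    return shuffled[:cut], shuffled[cut:]

# 1992 was the old default seed mentioned above; 2001 is the current one.
train_a, test_a = split(1992)
train_b, test_b = split(2001)

print("test set with seed 1992:", sorted(test_a))
print("test set with seed 2001:", sorted(test_b))
```

With only a 4-or-5-row test set, a Naive Bayes model that misclassifies one subset of rows can misclassify all of another, which matches the 1-vs-4 error difference seen here.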
Thanks for the fast response!