Random Forest Accuracy


Why is the accuracy of Random Forest (majority voting) different between experiments, even though I use the same parameters each time (kNN imputation, binning, forward selection)?
first experiment: 91.97
second experiment: 92.03
third experiment: 91.85
Best Answer
hughesfleming68 Member Posts: 323
Unicorn
Most learning operators use randomness in some form. You can get repeatable results by ticking the "use local random seed" box. Is this a good idea? There are pros and cons. A valid alternative is to take the average of multiple runs, but in many cases setting the random seed is the best way to go, especially if other people need to reproduce your results.
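The same idea can be illustrated outside of RapidMiner. The sketch below (a hypothetical example, not the poster's actual process) uses scikit-learn in Python: without a fixed seed, Random Forest accuracy drifts slightly between runs, just like the 91.85–92.03 range above; fixing `random_state` makes every run identical.

```python
# Illustrative sketch, assuming scikit-learn and a synthetic dataset
# (not the poster's data or RapidMiner process).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# No seed: bootstrap sampling and feature subsampling differ per run,
# so accuracy varies slightly between experiments.
unseeded = [
    RandomForestClassifier(n_estimators=50).fit(X_tr, y_tr).score(X_te, y_te)
    for _ in range(3)
]
print("unseeded runs:", unseeded)

# Fixed seed: every run produces the identical forest and score.
seeded = [
    RandomForestClassifier(n_estimators=50, random_state=42)
    .fit(X_tr, y_tr)
    .score(X_te, y_te)
    for _ in range(3)
]
print("seeded runs:", seeded)
assert seeded[0] == seeded[1] == seeded[2]
```

Averaging the `unseeded` scores, as suggested above, gives a more honest estimate of model performance; the fixed seed is what you want when someone else must reproduce the exact number.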
Answers