"Comparing multiple methods with same dataset in the Wrapper Feature Selection"
As part of a project of mine I am trying to compare 4 methods (NB, Random Forest, SVM, MLP). In the process, the dataset is passed to 4 distinct wrappers (standard Optimize Selection), in which the feature selection takes place. As the inner learner of each wrapper I use X-Validation to obtain the performance of each learner and then the selected feature set. However, I want to be sure that the subsets created by the cross-validation (in this case 10 folds) are exactly the same for all methods, for consistency's sake. How would I do that?
So, summing up: how do I guarantee that each wrapper sees the same rows within each fold every time?
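To illustrate what I mean, here is a rough sketch outside RapidMiner (Python/scikit-learn; the dataset and estimator names are just placeholders) of the kind of consistency I am after: the 10-fold split is created once with a fixed seed, and the same precomputed fold indices are reused for every learner, so each method is evaluated on identical rows per fold.

```python
# Illustrative sketch only, assuming a generic classification dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# One shuffled 10-fold split, frozen by random_state: the row indices in each
# fold are now fixed and can be shared by every method.
cv = KFold(n_splits=10, shuffle=True, random_state=42)
folds = list(cv.split(X, y))  # materialise the (train, test) index pairs once

learners = {
    "NB": GaussianNB(),
    "RandomForest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=500, random_state=0),
}

for name, clf in learners.items():
    # Passing the same precomputed fold list to every learner guarantees
    # that each one is trained and tested on identical rows per fold.
    scores = cross_val_score(clf, X, y, cv=folds)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```

That is the behaviour I would like to reproduce inside the four Optimize Selection wrappers: every inner X-Validation should partition the examples into the same 10 folds.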
Answers