
dataset for parameter optimization

makakmakak Member Posts: 13 Contributor II
edited November 2018 in Help
Hi all,

The ideal situation is to have 3 separate sets: one for training, one for testing (parameter optimization), and one for validation. What if I train on 70%, optimize parameters on the remaining 30%, and finally evaluate performance on the whole dataset (100%) with 10-fold cross-validation? Is this correct, or am I risking overfitting this way?
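To make the procedure concrete, here is a minimal sketch of the workflow described above in scikit-learn (the thread itself is about a different tool, so the dataset, classifier, and parameter grid are purely illustrative assumptions):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Step 1: hold out 30% of the data for parameter optimization (the 70/30 split)
X_train, X_opt, y_train, y_opt = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# Step 2: pick the best max_depth by scoring each candidate on the 30% set
best_depth, best_score = None, -1.0
for depth in (1, 2, 3, 5, None):
    model = DecisionTreeClassifier(max_depth=depth, random_state=42)
    score = model.fit(X_train, y_train).score(X_opt, y_opt)
    if score > best_score:
        best_depth, best_score = depth, score

# Step 3: estimate performance of the tuned model by 10-fold CV on ALL the data
final = DecisionTreeClassifier(max_depth=best_depth, random_state=42)
cv_scores = cross_val_score(final, X, y, cv=10)
print(best_depth, cv_scores.mean())
```

Note that in step 3 the 10-fold cross-validation runs over the same 100% of the data that was already used for tuning in step 2, which is exactly the situation the question is asking about.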
And one more small question, maybe a little off topic: I always get exactly the same micro and macro averages from cross-validation. Is this OK, or does it seem suspicious?
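For context on the micro/macro question: micro-averaging pools the per-class counts before computing the score, while macro-averaging computes the score per class and then takes an unweighted mean, so on imbalanced data the two usually differ. A minimal hand-rolled sketch (plain Python, illustrative only, not tied to any particular tool):

```python
from collections import Counter

def f1_scores(y_true, y_pred):
    """Per-class, micro-averaged, and macro-averaged F1 for single-label data."""
    labels = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted class p, but it was wrong
            fn[t] += 1  # missed the true class t

    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0

    per_class = {c: f1(tp[c], fp[c], fn[c]) for c in labels}
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(per_class.values()) / len(labels)
    return per_class, micro, macro

# Imbalanced example: the majority class dominates the micro average
per_class, micro, macro = f1_scores([0, 0, 0, 1], [0, 0, 0, 0])
print(micro, macro)  # micro = 0.75, macro = 3/7 ≈ 0.43
```

If the two averages coincide on every fold even though the classes are imbalanced, that would indeed be worth double-checking.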

Thank you.