Is it logical that testing error be lower than training error?
Hi Rapidminer Community,
I used the SVM (LibSVM) operator to build a regression model. After training with 10-fold cross-validation, the resulting correlation coefficient was 84 and the RMSE was 0.048. Applying this model to the test data set, I got a correlation coefficient of 88.5 and an RMSE of 0.037. Now I need to know: is it possible, or logical, for the testing error to be lower than the training error?
Thanks.
Answers
Yes, this is possible. Keep in mind that evaluating on only one test set does
_not_ deliver representative results. That's why we use cross-validation, which
averages the error over several test sets. So trust cross-validation for choosing the
right SVM parameters, and finally train your model on the full data.
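To see why a single test set can come out better (or worse) than the cross-validated estimate, here is a minimal sketch outside RapidMiner, assuming scikit-learn with its `SVR` and `cross_val_score` utilities. It shows how much the RMSE can vary from fold to fold, which is exactly the variation a single held-out test set is exposed to:

```python
# Sketch (assumes scikit-learn, not RapidMiner): the per-fold RMSE of a
# 10-fold cross-validation spreads around its mean, so a single test set
# can easily land below the average training/CV error just by chance.
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Synthetic regression data purely for illustration.
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

# Negated RMSE on each of the 10 folds (scikit-learn maximizes scores).
scores = cross_val_score(
    SVR(kernel="rbf"), X, y,
    cv=10, scoring="neg_root_mean_squared_error",
)
rmse_per_fold = -scores

print("per-fold RMSE:", rmse_per_fold.round(3))
print("mean RMSE:    ", rmse_per_fold.mean().round(3))
```

The spread of the per-fold values is the point: any one fold (or any one test set) can sit noticeably below or above the mean, so averaging over folds gives the more trustworthy estimate.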
Cheers, Frank
Thank you.