Why do the relative errors change?
s_sorrenti3
Member Posts: 4 Learner III
Hello,
By executing the neural net with cross-validation and the linear regression with cross-validation together in the same process, I get the following relative errors:
By executing only the neural net with cross-validation in a separate process, I get the following relative error:
Why do the relative errors change depending on whether I run the learning models together or separately?
Answers
Hi,
This is almost certainly caused by a different random seed in the two cases. To avoid this behaviour, tick the "use local random seed" option in the Cross-Validation operator.
However, if your process is correct, the differences caused by changing the seed should be small. In your case it looks as if some of the neural net models are not converging, which is why you get very disparate results in each fold of the cross-validation. I think you need to tune your models and their optimization parameters.
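To make the seed effect concrete, here is a minimal sketch in plain Python (not RapidMiner; the function name and fold logic are illustrative only) of why a cross-validation estimate depends on the random seed: the seed controls how examples are shuffled into folds, so different seeds produce different splits and therefore different error estimates, while a fixed seed makes the result reproducible.

```python
import random

def fold_assignment(n_examples, n_folds, seed):
    """Shuffle example indices with the given seed and split them into folds.

    This mimics what a "local random seed" does in a cross-validation
    operator: the same seed always produces the same fold assignment.
    """
    rng = random.Random(seed)
    indices = list(range(n_examples))
    rng.shuffle(indices)
    # Deal shuffled indices round-robin into n_folds folds.
    return [indices[i::n_folds] for i in range(n_folds)]

# Same seed -> identical folds -> identical cross-validation estimate.
assert fold_assignment(20, 5, seed=1992) == fold_assignment(20, 5, seed=1992)

# A different seed generally yields a different fold assignment, which is
# why two runs without a fixed seed can report different relative errors.
print(fold_assignment(20, 5, seed=1992))
print(fold_assignment(20, 5, seed=2024))
```

With a well-behaved learner the estimates from different splits should still be close; large swings between seeds point to an unstable model rather than to the cross-validation itself.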
Regards,
Sebastian
I have selected the "use local random seed" option in the Cross-Validation operator.
By executing the learning models with cross-validation in the same process I get the following relative error: 73.34%.
By executing only the neural net with cross-validation in a separate process I get the following relative error: 203.41%.
@s_sorrenti3 type in a seed like '1992' for each Cross-Validation operator and try again. If that doesn't work, please follow the Community guidelines and post your XML and data.