How important is the confidence interval? Should I worry if the confidence is low?
Hi,
I have built two final models, and both perform equally well in cross-validation. When it comes to deployment in a business setting, however, one of them shows very low confidence in its predictions: the most confident prediction is around 0.6, and even when its confidence is only around 0.25, it still predicts "true". How come? When predicting on the training and test sets, it has much higher confidence in its predictions.
My other model is equally good in terms of cross-validation, but when deployed it shows much higher confidence in its predictions (most ranging from 0.6 to 0.9).
Should I ignore this, or can I conclude from this that the latter model is the better one to use?
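To make the difference concrete, here is a minimal sketch of how I compare the confidence distributions on the held-out test data versus the new deployment data. It assumes a scikit-learn-style classifier; model, X_test, and X_deploy are placeholders for my own objects.

import numpy as np

def confidence_summary(model, X, label):
    # Confidence = probability of the predicted (winning) class for each row
    proba = model.predict_proba(X)
    conf = proba.max(axis=1)
    print(f"{label}: median={np.median(conf):.2f}, "
          f"min={conf.min():.2f}, max={conf.max():.2f}")
    return conf

conf_test = confidence_summary(model, X_test, "test set")
conf_deploy = confidence_summary(model, X_deploy, "deployment data")

For the first model, the deployment summary comes out much lower than the test summary; for the second model, the two are comparable.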
Answers
Ingo