Confidence or Prediction Intervals
Dear All,
When reporting a performance metric (e.g., AUC) for a model that was trained on a single data set and evaluated on a hold-out set, what is the proper way to assess its variance: calculating a confidence interval for the AUC, or a prediction interval?
Many thanks
Nikos
Answers
This is a bit tricky, as both intervals are centered on the same value, but a prediction interval is wider than a confidence interval: the confidence interval quantifies the uncertainty in the estimated metric itself, while the prediction interval also covers the variability of individual future observations. So if you want a more conservative range that accounts for prediction error on new data, go with prediction intervals.
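One common way to get such an interval for a hold-out AUC is to bootstrap the test set. Below is a minimal sketch in Python, assuming scikit-learn is available; `y_true` and `y_score` are hypothetical stand-ins for the hold-out labels and model scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Hypothetical hold-out data: true labels and model scores.
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(y_true * 0.3 + rng.normal(0.5, 0.25, size=500), 0, 1)

point_estimate = roc_auc_score(y_true, y_score)

n_boot = 2000
aucs = []
for _ in range(n_boot):
    # Resample the hold-out set with replacement.
    idx = rng.integers(0, len(y_true), size=len(y_true))
    # Skip resamples containing only one class (AUC is undefined there).
    if len(np.unique(y_true[idx])) < 2:
        continue
    aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lower, upper = np.percentile(aucs, [2.5, 97.5])
print(f"AUC = {point_estimate:.3f}, 95% bootstrap CI = [{lower:.3f}, {upper:.3f}]")
```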
Varun
https://www.varunmandalapu.com/
Be Safe. Follow precautions and Maintain Social Distancing
I am checking on that, and I see no option in RM for it. I think it is a bit complicated to calculate.
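For reference, if only the AUC point estimate and the class counts are at hand, the Hanley-McNeil (1982) approximation gives a closed-form confidence interval that can be computed by hand outside RapidMiner. A minimal sketch in Python; the `auc`, `n_pos`, and `n_neg` values below are hypothetical:

```python
import math

def hanley_mcneil_ci(auc, n_pos, n_neg, z=1.96):
    """Approximate 95% CI for AUC using the Hanley-McNeil standard error."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    se = math.sqrt(
        (auc * (1 - auc)
         + (n_pos - 1) * (q1 - auc**2)
         + (n_neg - 1) * (q2 - auc**2)) / (n_pos * n_neg)
    )
    return auc - z * se, auc + z * se

lower, upper = hanley_mcneil_ci(auc=0.82, n_pos=120, n_neg=380)
print(f"Approximate 95% CI = [{lower:.3f}, {upper:.3f}]")
```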
@mschmitz or @IngoRM, any comments on this?
Thanks
Varun