"Raise again. What's the algorithm for optimistic AUC"
Dear everyone,
I am nearly going crazy over this question. How is the optimistic AUC calculated? Why is there sometimes a huge difference between the optimistic AUC and the pessimistic one? Which one should we use? AUC is a very important criterion for evaluating a model, but the unclear results make it almost impossible to use. Thanks
Answers
For calculating the AUC criterion, one has to measure the area under the ROC curve. This curve is built from the sorted confidence values of the classification outcome: you start with the highest confidence and walk down the list; if an example is predicted wrongly, you make a step to the right, and if it is predicted correctly, you make a step upwards.
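To make that walk concrete, here is a minimal Python sketch of the step construction, assuming binary labels (1 = positive, 0 = negative), confidences for the positive class, and no tied confidence values. The name step_auc is only illustrative; it is not a RapidMiner API.

```python
def step_auc(labels, confidences):
    """Walk the ROC step curve: sort by confidence (highest first),
    step up for a positive example, step right for a negative one,
    and accumulate the area of every rightward step."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = 0       # current curve height (positives seen so far)
    area = 0.0   # area in units of (1/n_pos) x (1/n_neg)
    for _, label in sorted(zip(confidences, labels), key=lambda p: -p[0]):
        if label == 1:
            tp += 1          # step up
        else:
            area += tp       # step right: add a column of height tp
    return area / (n_pos * n_neg)
```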
This is common to all AUC measures; they differ only in how they deal with examples that have been classified with the same confidence. The optimistic variant first takes the correctly predicted examples into account, the pessimistic variant the wrongly predicted ones. The first approach yields a curve that goes up and then to the right, while the second goes right first and then upwards. This is what causes the difference in the area under the curve!
The third way, in between the two, is to draw a diagonal through the tied block. See the sketch below.
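Here is a hedged sketch of how the three tie-handling strategies could be implemented on top of the same walk. The function and parameter names (step_auc_with_ties, tie_mode) are made up for illustration and are not RapidMiner internals.

```python
from itertools import groupby

def step_auc_with_ties(labels, confidences, tie_mode="optimistic"):
    """Step-curve AUC with explicit handling of tied confidences.

    tie_mode:
      "optimistic"  - positives first within a tied block (up, then right)
      "pessimistic" - negatives first within a tied block (right, then up)
      "neutral"     - a diagonal through the tied block (trapezoid)
    """
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    tp = 0       # current curve height (positives seen so far)
    area = 0.0   # area in units of (1/n_pos) x (1/n_neg)

    # group examples that share the same confidence, highest confidence first
    pairs = sorted(zip(confidences, labels), key=lambda p: -p[0])
    for _, block in groupby(pairs, key=lambda p: p[0]):
        block = list(block)
        pos = sum(label for _, label in block)   # upward steps in this block
        neg = len(block) - pos                   # rightward steps in this block
        if tie_mode == "optimistic":
            tp += pos              # climb first ...
            area += neg * tp       # ... then move right at the higher level
        elif tie_mode == "pessimistic":
            area += neg * tp       # move right at the lower level ...
            tp += pos              # ... then climb
        else:                      # "neutral": diagonal across the tied block
            area += neg * (tp + pos / 2.0)
            tp += pos
    return area / (n_pos * n_neg)

# Extreme example: every example gets the same confidence, so the whole
# difference comes from how the tie is resolved.
labels      = [1, 1, 0, 0]
confidences = [0.7, 0.7, 0.7, 0.7]
for mode in ("optimistic", "pessimistic", "neutral"):
    print(mode, step_auc_with_ties(labels, confidences, mode))
# optimistic 1.0, pessimistic 0.0, neutral 0.5
```

The example is deliberately extreme: with every confidence tied, the optimistic and pessimistic curves bound the range of possible AUC values, and the diagonal lands in between. With few or no ties, all three values coincide or are very close.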
The curve itself can be seen either in the performance vector's renderer view or by using the "Compare ROCs" operator.
Greetings,
 Sebastian