Deep Learning - Test results Confidence values
fstarsinic
Member Posts: 20 Contributor II
Predictions for test data come back with a prediction (0 or 1 in my case) and a confidence value (a float) for each class: confidence(0) and confidence(1).
To get an overall confidence, in the past I would create a new attribute and set it to abs(conf0 - conf1).
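For illustration, here is a minimal sketch of what I mean (in Python/pandas rather than RapidMiner, with placeholder column names standing in for the prediction and confidence attributes):

```python
import pandas as pd

# Placeholder scored data; in RapidMiner these would be the
# prediction(label), confidence(0) and confidence(1) attributes.
scored = pd.DataFrame({
    "prediction": [0, 0, 1, 1],
    "confidence_0": [0.95, 0.80, 0.40, 0.37],
    "confidence_1": [0.05, 0.20, 0.60, 0.63],
})

# Overall confidence as the absolute gap between the two class confidences.
scored["overall_confidence"] = (scored["confidence_0"] - scored["confidence_1"]).abs()
print(scored)
```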
When I do this, I'm noticing very small numbers for the 1 predictions; the values are always below 0.27.
These seem very low given that the predictions are coming back as expected.
The confidence for the 0 label can be very high.
The only explanation I can think of is that the dataset is highly imbalanced and has far more 0 labels than 1.
Is this why the confidence values are coming back so low for the minority class? Would more data provide better confidence?
My ultimate goal was to "act" on all predictions above a certain confidence, but this is perhaps showing me that I cannot use a single threshold for both predictions (0 and 1) and that I might need two different "trusted" confidence thresholds (see the sketch below):
I trust 0 above 0.8.
I trust 1 above 0.25 <-- this just seems very low to me, even though the results look good.
(Or I artificially bump up the confidence of the minority class so the values seem more normal.)
As it is, in the best case I'd be trusting something near a 60%/40% confidence split, which isn't much better than flipping a coin (50%/50%).
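This is the kind of per-class thresholding I have in mind (a sketch only; the 0.8 and 0.25 cut-offs are exactly the values I'm questioning):

```python
# Per-class "trusted" thresholds (the values I'm unsure about).
THRESHOLD_0 = 0.80
THRESHOLD_1 = 0.25

def is_actionable(prediction: int, conf_0: float, conf_1: float) -> bool:
    """Trust a prediction only if its own class confidence clears that class's threshold."""
    if prediction == 0:
        return conf_0 >= THRESHOLD_0
    return conf_1 >= THRESHOLD_1

# Example: a 1-prediction with confidence(1) = 0.27 would be acted on,
# while a 0-prediction with confidence(0) = 0.75 would not.
print(is_actionable(1, 0.73, 0.27))  # True
print(is_actionable(0, 0.75, 0.25))  # False
```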
So I'm wondering how the confidence values are generated and how I should be interpreting them in terms of what minimum values can be "trusted" and would be considered "actionable".
Thanks.
Best Answer
lionelderkrikor RapidMiner Certified Analyst, Member Posts: 1,195 Unicorn
Hi @fstarsinic,
It seems that the Rescale Confidences operator is used in the training part of the process.
To see how and where this operator is used, run an Auto Model classification process (for example with the "Titanic" dataset).
When the results are displayed in the final "Results" screen, click on "OPEN PROCESS" and you will see the process.
Then go to Train Model --> Optimize ? --> inside this subprocess you will see the Rescale Confidences operator just after the modelling:
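If it helps with interpreting the raw confidences from an imbalanced dataset, here is a rough Python sketch of the general idea of rescaling confidences by the class priors and renormalizing. This is only an illustration of one common prior-correction heuristic; I am not claiming it is what the Rescale Confidences operator computes internally:

```python
def rescale_by_priors(conf_0: float, conf_1: float,
                      prior_0: float, prior_1: float) -> tuple[float, float]:
    """Divide each raw confidence by its class prior and renormalize to sum to 1.

    A generic prior-correction heuristic for imbalanced data,
    not necessarily what any particular operator implements.
    """
    adj_0 = conf_0 / prior_0
    adj_1 = conf_1 / prior_1
    total = adj_0 + adj_1
    return adj_0 / total, adj_1 / total

# Example: with 90% of training labels being 0, a raw confidence(1) of 0.25
# looks much stronger once the imbalance is accounted for.
print(rescale_by_priors(0.75, 0.25, prior_0=0.9, prior_1=0.1))  # -> (0.25, 0.75)
```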
Hope this helps,
Regards,
Lionel