
Leave One Out results in AUC of 0.5

erik_van_ingen Member Posts: 8 Learner I
edited June 2019 in Help
My target label is binominal and there are 553 examples. I am running supervised classification with Deep Learning inside a cross-validation:
  • 10-fold results in AUC = 0.846 and Accuracy = 76%
  • Leave 3 Out (180 folds) results in AUC = 0.5 and Accuracy = 42%
How should I interpret this? Which configuration should I trust?
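
For anyone who wants to poke at this effect outside of RapidMiner, here is a minimal scikit-learn sketch on a synthetic data set of roughly the same size and imbalance. The data, the logistic regression learner, and the fold counts are illustrative assumptions, not the original process, and it assumes the reported AUC is the average of the AUCs computed on each test fold.

```python
# Minimal sketch (synthetic data, logistic regression as a stand-in learner):
# compare per-fold-averaged AUC for 10-fold CV vs. folds of ~3 examples.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import KFold, StratifiedKFold

# ~553 examples with an imbalanced binary label, as in the question.
X, y = make_classification(n_samples=553, n_features=20,
                           weights=[0.85, 0.15], random_state=0)
model = LogisticRegression(max_iter=1000)

def mean_fold_auc(cv):
    """Average the AUC computed separately on every test fold."""
    aucs = []
    for train_idx, test_idx in cv.split(X, y):
        model.fit(X[train_idx], y[train_idx])
        scores = model.predict_proba(X[test_idx])[:, 1]
        try:
            aucs.append(roc_auc_score(y[test_idx], scores))
        except ValueError:
            # Only one class in this tiny test fold: AUC is undefined,
            # so fall back to 0.5 (an assumed, but common, convention).
            aucs.append(0.5)
    return float(np.mean(aucs))

print("10-fold:", mean_fold_auc(StratifiedKFold(10, shuffle=True, random_state=0)))
# 553 examples / 3 per fold gives roughly 184 folds, close to the "leave 3 out" run.
print("3-per-fold:", mean_fold_auc(KFold(184, shuffle=True, random_state=0)))
```

The 10-fold average stays sensible, while most 3-example folds contain only one class and hit the 0.5 fallback, which drags the averaged AUC toward 0.5, the same pattern reported above.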






Answers

  • erik_van_ingen Member Posts: 8 Learner I
    Classes are imbalanced: 467/84 for No/Yes. I am using stratified sampling.

    Furthermore, I used the Generate Weight operator to compensate for the class imbalance. I tested this both outside and inside the cross-validation.

    I tested other ML operators as well, such as Naive Bayes, Gradient Boosted Trees and so forth; Deep Learning usually performed the best.

    Yes, I am aware that the sample size is relatively low. Which ML operator would best fit, given the sample size?
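
For readers following along outside RapidMiner, here is a hedged sketch of the same two ideas, class weighting plus stratified folds, in scikit-learn. The logistic regression with class_weight="balanced" is a stand-in I picked for illustration, not the Generate Weight / Deep Learning combination described above.

```python
# Sketch: compensate for a 467/84 class imbalance with class weights and
# keep every test fold stratified so both classes appear in it.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for the 553-example, ~85/15 data set.
X, y = make_classification(n_samples=553, n_features=20,
                           weights=[0.85, 0.15], random_state=0)

weighted_model = LogisticRegression(max_iter=1000, class_weight="balanced")
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

# AUC is computed per stratified fold, so each test set has both classes
# and the score stays well defined.
aucs = cross_val_score(weighted_model, X, y, cv=cv, scoring="roc_auc")
print("mean AUC with class weights:", round(float(aucs.mean()), 3))
```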







  • asem_k Member Posts: 2 Contributor I
    Hi there.
    I still don't quite get why the AUC is close to chance whenever leave-one-out cross-validation is used.
    I can see why the accuracy measure has a high standard deviation (in each fold you get either 100% or 0% correct predictions), but how does that also affect AUC? Is it because of how AUC is actually calculated (can you elaborate on this)?
    By the way, class imbalance, modeling techniques, and data size seem to have no effect on this (I tried many variations of the above in RapidMiner), and the same thing is observed for AUC.
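
A concrete way to see the AUC calculation issue (a sketch; it assumes the per-fold AUCs get averaged, and I can't confirm RapidMiner does exactly this internally): with leave-one-out every test set holds a single example, so it contains only one class, no ROC curve can be drawn, and the fold AUC is undefined. If undefined folds are counted as 0.5, the average is 0.5 regardless of how good the model is, and regardless of imbalance, learner, or data size. Pooling the predictions from all folds and computing one AUC over them avoids the problem.

```python
# Sketch: per-fold AUC under leave-one-out vs. AUC on pooled predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut

X, y = make_classification(n_samples=100, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000)

per_fold, pooled_scores, pooled_truth = [], [], []
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    score = model.predict_proba(X[test_idx])[:, 1]
    pooled_scores.extend(score)
    pooled_truth.extend(y[test_idx])
    try:
        per_fold.append(roc_auc_score(y[test_idx], score))
    except ValueError:
        # The single held-out example has only one class, so the ROC curve
        # (and hence AUC) is undefined for this fold; count it as 0.5.
        per_fold.append(0.5)

print("mean of per-fold AUCs:", np.mean(per_fold))           # 0.5 by construction
print("AUC on pooled predictions:",
      roc_auc_score(pooled_truth, pooled_scores))             # reflects the model
```

The averaged number is 0.5 by construction here, while the pooled AUC reflects the model, which would be consistent with the observation that imbalance, learner, and data size make no difference.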