
What is Training Accuracy / Testing Accuracy?

Fred12 Member Posts: 344 Unicorn
edited April 2020 in Help

Hi,

We had a discussion about what training accuracy and testing accuracy are. In my opinion, there is no "training accuracy" (at least, I don't know what it would be), because you always measure performance on testing data...

Maybe you can use the validation data in X-Validation for performance... However, I don't understand what training accuracy means. I usually do X-Validation or Split-Validation. Or is training accuracy only possible with certain learners? At least I have never encountered "training accuracy" in any of the performance operators...

 


Best Answers

  • MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
    Solution Accepted

    Training accuracy is usually the accuracy you get when you apply the model to the data it was trained on, while testing accuracy is the accuracy on held-out testing data.

     

    It's sometimes useful to compare the two to identify overtraining (overfitting).

     

    ~Martin

    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
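Martin's comparison can be sketched in plain Python (illustrative only; the data, the `fit_memorizer` model, and the helper names are hypothetical and not part of RapidMiner). A model that simply memorizes its training points gets perfect training accuracy while generalizing poorly, which is exactly the gap that signals overtraining:

```python
def accuracy(model, X, y):
    """Fraction of points the model classifies correctly."""
    return sum(model(x) == label for x, label in zip(X, y)) / len(y)

def fit_memorizer(X_train, y_train):
    """A deliberately over-simple 'model': memorize every training point."""
    table = {tuple(x): label for x, label in zip(X_train, y_train)}
    # Fall back to a majority training label for unseen points.
    majority = max(set(y_train), key=y_train.count)
    return lambda x: table.get(tuple(x), majority)

X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 1, 1, 0]            # XOR labels: hard to generalize from
X_test  = [[0, 2], [2, 0]]
y_test  = [1, 0]

model = fit_memorizer(X_train, y_train)
print(accuracy(model, X_train, y_train))  # training accuracy: 1.0
print(accuracy(model, X_test, y_test))    # testing accuracy: 0.5
```

The large gap between the two numbers (1.0 vs. 0.5) is the overtraining signal Martin describes; on a well-generalizing model the two accuracies stay close.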
  • IngoRM Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751 RM Founder
    Solution Accepted

    In general you are right, and you can ignore the training error completely. It does not really tell you anything useful. For example, a k-NN learner with k=1 will always deliver 100% accuracy on the training data set, but that does not mean it can classify ANYTHING correctly on non-training data points. I don't get why people are still somewhat obsessed with reporting training error, but whatever :smileytongue:

     

    Martin's point is still valid though: if you optimize your model with parameter optimization, feature selection, etc., it can sometimes be useful to observe both training and testing error (although I personally still focus only on testing error) to get a gut feeling for the robustness of the model. If the difference between the two starts to grow quickly, you are probably already too far into "overfitting land".

     

    Cheers,

    Ingo
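Ingo's k-NN point can be checked directly with a small, stdlib-only sketch (the data and function names are illustrative, not RapidMiner's): when k=1, each training point's nearest neighbour is itself, so training accuracy is 100% by construction, no matter how noisy the labels are.

```python
def knn_predict(X_train, y_train, x, k=1):
    """Predict the majority label among the k nearest training points."""
    order = sorted(
        range(len(X_train)),
        key=lambda i: sum((a - b) ** 2 for a, b in zip(X_train[i], x)),
    )
    votes = [y_train[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

X_train = [[0.0, 0.0], [0.1, 0.1], [5.0, 5.0], [5.1, 5.1]]
y_train = [0, 1, 0, 1]  # deliberately noisy: classes interleaved in space

# Training accuracy: each point's nearest neighbour is itself (distance 0),
# so 1-NN reproduces every training label perfectly.
train_acc = sum(
    knn_predict(X_train, y_train, x, k=1) == label
    for x, label in zip(X_train, y_train)
) / len(y_train)
print(train_acc)  # 1.0 -- perfect on training data, regardless of label noise
```

This is why a training accuracy of 100% carries no information about how the model behaves on unseen data.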

Answers

  • Fred12 Member Posts: 344 Unicorn

    @mschmitz why does one differentiate between testing and training accuracy? And why do I need training accuracy at all if it's not representative of test performance...

    Is it to see the bias of your model, e.g. to see whether it is overfitting the training data or not?
