
Is there any method to check if a model has overfit?

Curious Member Posts: 12 Learner I
Is there any process/practical method I can run to check it? 

Answers

  • hughesfleming68 Member Posts: 323 Unicorn
    edited February 2019
    Forward testing is your only option, but this also goes for models that generalize well. With regression problems you will often find that the learner settles on the direction of the last value being the best predictor of the next value. In this case the value of the prediction is questionable and is most likely caused by overfitting. You also have to understand your data. If your data is noisy with little serial correlation and closely resembles a random walk, and at the same time your testing is giving unusually good results, then it is safe to assume that you have a problem. "Too good to be true" also applies to machine learning.

    Nevertheless, this is a good question that everyone will have to deal with at some point. The best procedure is to establish a baseline: (1) build your forecast using a default model, (2) determine which learner is most suitable for the data, and (3) forward test on unseen data. Not spending enough time on steps 1 and 3 is where most people go wrong. A sketch of this baseline comparison follows below.
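
    Here is a minimal sketch of that baseline comparison, assuming Python with scikit-learn on a toy random-walk series (everything in it is illustrative, not from this thread). It forward tests a learner on the chronological tail of the data and compares it against the naive "persistence" predictor that simply repeats the last observed value. If the learner cannot clearly beat persistence on unseen data, a good in-sample fit is most likely overfitting.

    ```python
    # Minimal sketch: forward test a learner against a naive last-value baseline.
    # All data and model choices here are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(42)
    series = rng.standard_normal(500).cumsum()  # noisy, random-walk-like toy series

    # Lagged features: predict series[t] from the previous 5 values.
    lags = 5
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]

    # Forward test: train on the past, evaluate only on the chronological tail.
    split = int(len(y) * 0.8)
    model = RandomForestRegressor(random_state=42).fit(X[:split], y[:split])

    mae_model = mean_absolute_error(y[split:], model.predict(X[split:]))
    mae_naive = mean_absolute_error(y[split:], X[split:, -1])  # persistence baseline

    print(f"model MAE: {mae_model:.3f}, naive last-value MAE: {mae_naive:.3f}")
    ```

    On random-walk-like data the learner will often fail to beat the naive baseline out of sample, which is exactly the warning sign described above.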
  • MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
    Hi,
    my main question is: Do I care? Overfitting means that my training performance is better than my testing performance. If my correctly validated test performance is good, I am usually fine (see the sketch below).
    BR,
    Martin
    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
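
    As a concrete illustration of this check, here is a minimal sketch assuming Python with scikit-learn and synthetic data (none of it from the thread): compare the resubstitution (training) score with a properly cross-validated score. A large gap signals overfitting, but the validated score is what ultimately matters.

    ```python
    # Minimal sketch: training performance vs. correctly validated performance.
    # Dataset and model are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    model = RandomForestClassifier(random_state=0)

    cv_acc = cross_val_score(model, X, y, cv=10).mean()  # validated estimate
    train_acc = model.fit(X, y).score(X, y)              # resubstitution score

    print(f"training accuracy: {train_acc:.3f}, 10-fold CV accuracy: {cv_acc:.3f}")
    ```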
  • kypexin RapidMiner Certified Analyst, Member Posts: 291 Unicorn
    Hi @Curious

    To add to the previous answers: use common sense :)

    If on a test set you get an error of 0.001 or an AUC of 99.95%, then something is certainly wrong. Any 'too good to be true' result generally indicates overfitting. Also, use a correlation matrix to check whether any attributes correlate suspiciously strongly with the label, as in the sketch below.
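
    A minimal sketch of that correlation check, assuming Python with pandas and a DataFrame with a numeric label column named "label" (the file name and column name are placeholders for your own data). Attributes that correlate almost perfectly with the label are prime suspects for leakage rather than genuine predictive power.

    ```python
    # Minimal sketch: flag attributes that correlate suspiciously with the label.
    # "your_data.csv" and the "label" column are hypothetical placeholders.
    import pandas as pd

    df = pd.read_csv("your_data.csv")

    corr = df.corr(numeric_only=True)["label"].drop("label").abs()
    suspicious = corr[corr > 0.95].sort_values(ascending=False)
    print(suspicious)  # near-1.0 correlations warrant a closer look
    ```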