
LibSVM and normalization

Username Member Posts: 39 Maven
edited November 2018 in Help
Hi,

Does the LibSVM learner normalize the example values internally, or do I have to apply the Normalization operator myself?

Thanks

Answers

  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    I don't think that LibSVM will normalize the values internally. As far as I know, there is no reason for it to do that. But if you want to be sure, run LibSVM once without previous normalization and once with it. By comparing the models you should be able to find differences, if there are any (a rough sketch of such a comparison is at the end of this post).
    If you try it, I would appreciate it if you could report the results here, since I'm a little bit curious, too :)


    Greetings,
      Sebastian
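    P.S. If you want to try that comparison outside of RapidMiner, here is a rough sketch using scikit-learn's SVC (which wraps LibSVM) on made-up data. It is not a RapidMiner process and the data is purely synthetic; it only illustrates how an attribute with a large numeric range affects the kernel with and without scaling:
    [code]
    # Sketch: compare LibSVM (via scikit-learn's SVC wrapper) with and without scaling.
    # Synthetic data: one attribute on a much larger numeric range than the other.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    X = np.column_stack([
        rng.normal(0, 1, 500),      # small-range attribute
        rng.normal(0, 1000, 500),   # large-range attribute
    ])
    y = (X[:, 0] + X[:, 1] / 1000 > 0).astype(int)  # both attributes matter equally

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Without scaling, the large-range attribute dominates the kernel values.
    raw = SVC(kernel="rbf").fit(X_train, y_train)

    # With linear scaling to [-1, +1], fitted on the training data only.
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)
    scaled = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)

    print("accuracy without scaling:", raw.score(X_test, y_test))
    print("accuracy with scaling:   ", scaled.score(scaler.transform(X_test), y_test))
    print("support vectors without / with scaling:",
          raw.n_support_.sum(), scaled.n_support_.sum())
    [/code]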
  • Username Member Posts: 39 Maven
    My program isn't finished yet, so I can't tell you any results soon, but I found this section in the LibSVM tutorial:
    Scaling them before applying SVM is very important. (Sarle 1997, Part 2 of Neural Networks FAQ) explains why we scale data while using Neural Networks, and most of the considerations also apply to SVM.
    The main advantage is to avoid attributes in greater numeric ranges dominating those in smaller numeric ranges. Another advantage is to avoid numerical difficulties during the calculation. Because kernel values usually depend on the inner products of feature vectors, e.g. the linear kernel and the polynomial kernel, large attribute values might cause numerical problems. We recommend linearly scaling each attribute to the range [-1, +1] or [0, 1].
    Of course we have to use the same method to scale testing data before testing. For example, suppose that we scaled the first attribute of training data from [-10, +10] to [-1, +1]. If the first attribute of testing data is lying in the range [-11, +8], we must scale the testing data to [-1.1, +0.8].
    http://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf

    So, I will use normalization before applying the LibSVM learner.  :)

    Is there something like a "NormalizationModel" to get the same normalized values for training and test examples?
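    Edit: to convince myself, I reproduced the guide's little scaling example by hand. This is just a rough sketch in plain Python, nothing RapidMiner-specific, and the numbers are exactly the ones from the guide:
    [code]
    # The guide's example: the first attribute of the training data spans [-10, +10]
    # and is linearly mapped to [-1, +1]; the SAME mapping is then applied to the
    # test data, even where it falls outside the training range.
    train_min, train_max = -10.0, 10.0
    lo, hi = -1.0, 1.0

    def scale(x):
        # linear mapping fitted on the training range only
        return lo + (x - train_min) * (hi - lo) / (train_max - train_min)

    print(scale(-10.0), scale(10.0))   # -1.0  1.0  (training range)
    print(scale(-11.0), scale(8.0))    # -1.1  0.8  (test values from the guide)
    [/code]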
  • TobiasMalbrecht Moderator, Employee-RapidMiner, Member Posts: 295 RM Product Management
    Hi,
    Username wrote:

    Is there something like a "NormalizationModel" to get the same normalized values for training and test examples?
    This should be possible by applying the operator [tt]Normalization[/tt] with the parameter [tt]return_preprocessing_model[/tt] enabled on the training data. A preprocessing model is then generated while normalizing the training data, which you can subsequently apply to the test data as well. The test data is thereby normalized with the same transformation that was applied to the training data. To do this, apply the model to the test data using the [tt]ModelApplier[/tt] (a small code sketch of this pattern is at the end of this post).

    Regards,
    Tobias
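    P.S. For anyone who thinks better in code than in operators: the same pattern looks roughly like this in Python with scikit-learn. This is only an analogy for [tt]Normalization[/tt] (with [tt]return_preprocessing_model[/tt]) plus [tt]ModelApplier[/tt], not what RapidMiner does internally, and the data is made up:
    [code]
    # The "preprocessing model" idea: fit the normalization on the training data once,
    # then apply the identical transformation to the test data.
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler

    # hypothetical example data; in practice these are your training and test sets
    X_train = np.array([[0.0, 100.0], [10.0, 500.0], [5.0, 300.0]])
    X_test  = np.array([[12.0, 250.0], [2.0, 600.0]])

    # "Normalization" with return_preprocessing_model: the scaler is the model,
    # and it is fitted on the training data only.
    scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_train)

    # "ModelApplier": the SAME model transforms both training and test data.
    X_train_norm = scaler.transform(X_train)
    X_test_norm  = scaler.transform(X_test)   # may fall outside [-1, +1], which is fine
    print(X_test_norm)
    [/code]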
  • radgomu6 Member Posts: 2 Contributor I
    You mentioned using the Normalization operator. What is that? I'm a beginner. Is it available in Matlab? If so, how do I access it?
    Tobias Malbrecht wrote:

    Hi,

    This should be possible by applying the operator [tt]Normalization[/tt] with the parameter [tt]return_preprocessing_model[/tt] enabled on the training data. A preprocessing model is then generated while normalizing the training data, which you can subsequently apply to the test data as well. The test data is thereby normalized with the same transformation that was applied to the training data. To do this, apply the model to the test data using the [tt]ModelApplier[/tt].

    Regards,
    Tobias