
Will the same set of variables perform best for all classifying methods?

theWay Member Posts: 4 Contributor I
Suppose that, out of a large set S of attributes describing the target variable M, there exists a subset Z of S that optimizes performance for, say, a decision tree model. Does that mean Z will also optimize performance for all other techniques, such as k-NN, Naive Bayes, or SVM?

Answers

  • haddock Member Posts: 849 Maven
    Hi there,

    The short answer is no.

    Here's a longer one. Classifiers have different attribute footprints: some accept numerical attributes, while others handle only binominal labels, and so on. The common ground is rather small, and for good reason; from a toolkit point of view you need only one tool for a job, so functionality overlap gains little. That means your attribute subset Z for decision trees might not even run on other operators. For example, a polynominal label works fine with ID3 but cannot be passed directly into SVM validation. And even where two learners do accept the same attributes, each ranks them differently, so the subset that is optimal for one need not be optimal for another.
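    If you want to see this in practice, here is a minimal sketch (assuming scikit-learn and one of its built-in toy datasets rather than RapidMiner; the setup below is illustrative) that runs greedy forward selection separately for each learner. The selected subsets will generally differ from model to model.

        # Per-model feature selection: each learner gets its own forward search.
        from sklearn.datasets import load_breast_cancer
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.naive_bayes import GaussianNB
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)

        models = {
            "decision tree": DecisionTreeClassifier(random_state=0),
            "k-NN": KNeighborsClassifier(),
            "naive Bayes": GaussianNB(),
            "SVM": SVC(),
        }

        for name, model in models.items():
            # Greedy forward selection of 5 attributes, scored by 5-fold CV.
            sfs = SequentialFeatureSelector(model, n_features_to_select=5, cv=5)
            sfs.fit(X, y)
            picked = sfs.get_support(indices=True)
            print(f"{name}: attributes {sorted(picked)}")

    Each model typically ends up with a different "best" subset, which is why feature selection is usually wrapped around the specific learner you intend to deploy rather than done once in the abstract.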

    Best wishes

    H