All supervised models should, if possible, return attribute weights
yzan
Member Posts: 66 Unicorn
All supervised operators should, if meaningful, return attribute weights representing the feature importance. At the very least, Decision Tree and Perceptron could provide them.
Comments
hello @yzan - can you please give us an example to replicate?
Scott
An example of a supervised operator that returns attribute weights is "Generalized Linear Model".
The calculation of weights for a decision tree is less obvious, but one natural choice is each attribute's contribution to the tree's splits, as sketched below.
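One common way to derive such weights is the total impurity reduction attributed to each attribute across all splits. Here is a minimal sketch of that idea using scikit-learn's DecisionTreeClassifier; it illustrates the general approach, not RapidMiner's internal calculation.

```python
# A minimal sketch of impurity-based attribute weights from a decision
# tree, using scikit-learn (illustration only, not RapidMiner's exact
# internal calculation).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

data = load_iris()
tree = DecisionTreeClassifier(random_state=0).fit(data.data, data.target)

# feature_importances_ is the normalized total impurity reduction
# contributed by each attribute across all splits in the tree.
weights = dict(zip(data.feature_names, tree.feature_importances_))
for name, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {weight:.3f}")
```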
For a perceptron, the returned attribute weights could correspond to the weights of the perceptron (they are already visible in the "model" output, but they are not immediately passable to operators like "Select by Weights").
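For illustration, a comparable sketch with scikit-learn's Perceptron, assuming the attribute weights are simply the absolute values of the learned coefficients; the names and dataset below are stand-ins, not RapidMiner's API.

```python
# A minimal sketch, assuming attribute weights are taken as the absolute
# values of the perceptron's learned coefficients (an illustration, not
# RapidMiner's implementation).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import Perceptron
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), Perceptron(random_state=0))
model.fit(data.data, data.target)

# coef_ holds one learned weight per attribute; |weight| can serve as an
# importance score and could feed a "Select by Weights"-style filter.
weights = np.abs(model.named_steps["perceptron"].coef_[0])
for name, weight in sorted(zip(data.feature_names, weights), key=lambda kv: -kv[1])[:5]:
    print(f"{name}: {weight:.3f}")
```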
Thanks for that, @yzan. Just heard back from the dev team that this is coming soon.
Scott
Possibly even "Deep Learning" could return attribute weights as the backend H2O implementation provides this information and other algorithms from H2O, like GLM and GBT, already output attribute weights.
Update: As of version 8.0, Decision Tree and Random Forest provide a new port that outputs feature weights.
https://docs.rapidminer.com/latest/studio/releases/changes-8.0.0.html