"there any reference to get more understanding the
My conclusion about this algorithm is:

The algorithm starts by taking dataset instances around each instance that has to be explained, then uses the model to create a prediction for those neighbors. After that, the correlation is calculated between each instance's attributes and the predictions to get the local weight for each attribute; the importance value of each attribute for the current prediction is determined from this weight.

After that, the weights are aggregated for all attributes across the predictions to get the global importance.

Is that right?

Thank you in advance.
Best Answers
IngoRM Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751 RM Founder

Hi,

Almost. This page describes the algorithm in more detail: https://docs.rapidminer.com/latest/studio/operators/scoring/explain_predictions.html

The key sentence is this one: "For each Example in an ExampleSet, this operator generates a neighboring set of data points, and uses correlation to identify the local attribute weights in that neighborhood."

More details on your comments now:

1. "The algorithm starts with using dataset instances around each instance that have to explain" - nope, we actually generate artificial data points around the point to be explained. See above.
2. "...then use the model to create a prediction for those neighbors." - correct.
3. "...the correlation calculated between each instance attributes and the predictions to get the local weight for each attribute" - correct.
4. "and after that, the weights aggregate for all attribute in the same prediction to get the global importance." - nope, the global importance is independent of the local importances. In Auto Model we again use correlation for this, but there are 30+ possibilities in RapidMiner (all the "Weight by..." operators).

The algorithm is an improved variant of the well-known LIME algorithm. The difference is that our implementation can deal with all types of data (categorical, numerical) for both the inputs and the labels. We also calculate the weights in linear time, which makes our algorithm much faster than the original LIME.

Hope this helps,
Ingo
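To make the three steps above concrete (generate artificial neighbors around the point, predict for them, correlate each attribute with the predictions), here is a minimal numeric-only sketch in Python. This is an illustration of the general LIME-style idea, not RapidMiner's actual implementation; the function name, the Gaussian sampling scheme, and the toy model are all assumptions for the example.

```python
import numpy as np

def explain_instance(model_predict, x, feature_scales, n_samples=500, seed=0):
    """Sketch of a LIME-style local explanation for one numeric instance.

    model_predict:  callable mapping an (n, d) array to n numeric predictions
    x:              the 1-D instance (length d) to be explained
    feature_scales: per-feature standard deviations used to sample neighbors
    Returns one local weight per attribute: the correlation between that
    attribute's values in the artificial neighborhood and the predictions.
    """
    rng = np.random.default_rng(seed)
    # 1. Generate artificial data points around the point to be explained.
    neighbors = x + rng.normal(0.0, feature_scales, size=(n_samples, len(x)))
    # 2. Use the model to create predictions for those neighbors.
    preds = model_predict(neighbors)
    # 3. Correlate each attribute with the predictions -> local weights.
    return np.array([np.corrcoef(neighbors[:, j], preds)[0, 1]
                     for j in range(len(x))])

# Toy model (an assumption for illustration): the prediction depends
# strongly on attribute 0 and only weakly on attribute 1.
model = lambda X: 3.0 * X[:, 0] + 0.2 * X[:, 1]
weights = explain_instance(model, np.array([1.0, 2.0]), np.array([1.0, 1.0]))
# Attribute 0 should receive a much larger local weight than attribute 1.
```

Note that, as Ingo points out, this only covers the local weights; the global importances would come from a separate attribute-weighting pass over the whole dataset (e.g. a "Weight by Correlation"-style computation), not from aggregating these local weights.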
earmijo Member Posts: 271 Unicorn

Take a look at the article by Ribeiro/Singh/Guestrin:
https://arxiv.org/pdf/1602.04938v1.pdf
A fabulous book in progress on the subject is available at:
https://christophm.github.io/interpretable-ml-book/
Telcontar120 RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn

The key is that this is a local approach to interpretability. That makes it very useful, because it works for any model type, including many complex and non-linear forms. However, it also means you need to be careful about making generalizations from its output.