
Help with Model Interpretability

Montse Member Posts: 19 Maven
edited December 2018 in Product Feedback - Resolved

Hello,

 

Auto Model helps with interpreting the results. The simulator tool is very useful for probing the model's response and understanding how the model works. But it's manual, and as model complexity increases, interpretability becomes harder too.

I have found LIME, a toolbox for model analysis.

Does RapidMiner have something like this? I think it would be very helpful to add this capability to RapidMiner.

 

Best regards,

Montse

 


Fixed and Released

Comments

  • MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist

    Hi,

    Get Local Interpretation is the LIME algorithm.

     

    Best,

    Martin

    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
  • Telcontar120 RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 1,635 Unicorn

    That's part of the Operator Toolbox, in case you don't have it yet.  It is a very useful extension!

    Brian T.
    Lindon Ventures 
    Data Science Consulting from Certified RapidMiner Experts
  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager
  • IngoRM Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, Community Manager, RMResearcher, Member, University Professor Posts: 1,751 RM Founder

    And last but not least: the operator "Explain Predictions" creates those local explanations in a non-manual way as well. It is an improved version of LIME which is generally faster and can also handle all data and prediction types. This improved algorithm is also what is used in the background of the Model Simulator. If you apply the operator, you will get a result like the following:

     

    [Image: explain_predictions.png]

     

    The process XML to generate something like this is below.

     

    <?xml version="1.0" encoding="UTF-8"?><process version="8.2.001">
      <context>
        <input/>
        <output/>
        <macros/>
      </context>
      <operator activated="true" class="process" compatibility="8.2.001" expanded="true" name="Process">
        <process expanded="true">
          <!-- Load the labeled training data -->
          <operator activated="true" class="retrieve" compatibility="8.2.001" expanded="true" height="68" name="Retrieve Titanic Training" width="90" x="45" y="34">
            <parameter key="repository_entry" value="//Samples/data/Titanic Training"/>
          </operator>
          <!-- Train a deep net: the "black box" whose predictions will be explained -->
          <operator activated="true" class="h2o:deep_learning" compatibility="8.2.000" expanded="true" height="82" name="Deep Learning" width="90" x="179" y="34">
            <enumeration key="hidden_layer_sizes">
              <parameter key="hidden_layer_sizes" value="50"/>
              <parameter key="hidden_layer_sizes" value="50"/>
            </enumeration>
            <enumeration key="hidden_dropout_ratios"/>
            <list key="expert_parameters"/>
            <list key="expert_parameters_"/>
          </operator>
          <!-- Load the unlabeled data to score and explain -->
          <operator activated="true" class="retrieve" compatibility="8.2.001" expanded="true" height="68" name="Retrieve Titanic Unlabeled" width="90" x="179" y="136">
            <parameter key="repository_entry" value="//Samples/data/Titanic Unlabeled"/>
          </operator>
          <!-- Generate local explanations for each prediction -->
          <operator activated="true" class="model_simulator:explain_predictions" compatibility="8.3.000-SNAPSHOT" expanded="true" height="103" name="Explain Predictions" width="90" x="313" y="34"/>
          <connect from_op="Retrieve Titanic Training" from_port="output" to_op="Deep Learning" to_port="training set"/>
          <connect from_op="Deep Learning" from_port="model" to_op="Explain Predictions" to_port="model"/>
          <connect from_op="Deep Learning" from_port="exampleSet" to_op="Explain Predictions" to_port="training data"/>
          <connect from_op="Retrieve Titanic Unlabeled" from_port="output" to_op="Explain Predictions" to_port="test data"/>
          <connect from_op="Explain Predictions" from_port="visualization output" to_port="result 1"/>
          <connect from_op="Explain Predictions" from_port="example set output" to_port="result 2"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
          <portSpacing port="sink_result 3" spacing="0"/>
        </process>
      </operator>
    </process>

    Hope this helps,

    Ingo

  • earmijo Member Posts: 271 Unicorn

    I just finished teaching a course at an MBA program in which I used RM as the main software. I mentioned in class that it would be great if one could explain some of the powerful black boxes available in RM. Then I switched to R and the lime library and demonstrated how to do it for a credit scoring example, as sketched below.
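
    For anyone who wants to reproduce that kind of demo in Python rather than R, here is a minimal sketch using the Python lime package. The dataset and model are illustrative assumptions (scikit-learn's breast cancer data and a random forest), not the credit scoring example from the course:

    # Minimal LIME sketch: explain one prediction of a black-box model.
    # Assumes `pip install lime scikit-learn`; dataset/model are placeholders.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    data = load_breast_cancer()
    X, y = data.data, data.target

    # Train an opaque model; LIME only queries it through predict_proba.
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )

    # LIME perturbs the instance, queries the model, and fits a local linear
    # surrogate; the weights approximate each feature's local contribution.
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(explanation.as_list())  # top feature contributions for this instance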

     

    I was about to enter a request in the "Ideas" section about implementing LIME in RM. Wisely, I searched the forum first :-)

     

    This is fantastic. RM is getting better and better and better.
