[SOLVED] Different results for Generalized Hebbian (GHA) in loop

MaggiD Member Posts: 3 Contributor I
edited April 2020 in Help
In my opinion the following problem constitutes a bug, but maybe I'm doing something wrong.

I am using the Generalized Hebbian Algorithm (GHA) operator, and everything works fine until I place it inside a "Loop Parameters" operator. I use the loop to run the GHA operator with different values for the number of components. However, when the loop runs multiple iterations, I get results that differ from a single execution.
My investigation showed that, starting with the second loop iteration, the GHA operator computes a different premodel. There seems to be a side effect carried over from the previous iteration.

My process:

[code]

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="6.0.006">
 <context>
   <input/>
   <output/>
   <macros/>
 </context>
 <operator activated="true" class="process" compatibility="6.0.006" expanded="true" name="Process">
   <process expanded="true">
     <operator activated="true" class="loop_parameters" compatibility="6.0.006" expanded="true" height="76" name="Loop Parameters" width="90" x="179" y="165">
       <list key="parameters">
         <parameter key="GHA.number_of_components" value="[3;4;2;linear]"/>
       </list>
       <process expanded="true">
         <operator activated="true" class="retrieve" compatibility="6.0.006" expanded="true" height="60" name="Retrieve w1024_training" width="90" x="45" y="75">
           <parameter key="repository_entry" value="//Anwendungsfall/w1024_training"/>
         </operator>
         <operator activated="true" class="generalized_hebbian_algorithm" compatibility="6.0.006" expanded="true" height="94" name="GHA" width="90" x="246" y="75">
           <parameter key="number_of_components" value="4"/>
         </operator>
         <operator activated="true" class="support_vector_machine" compatibility="6.0.006" expanded="true" height="112" name="SVM" width="90" x="380" y="75"/>
         <operator activated="true" class="retrieve" compatibility="6.0.006" expanded="true" height="60" name="Retrieve w1024_test_set" width="90" x="45" y="255">
           <parameter key="repository_entry" value="//Anwendungsfall/w1024_test_set"/>
         </operator>
         <operator activated="true" class="apply_model" compatibility="6.0.006" expanded="true" height="76" name="Apply Model" width="90" x="246" y="255">
           <list key="application_parameters"/>
         </operator>
         <operator activated="true" class="apply_model" compatibility="6.0.006" expanded="true" height="76" name="Apply Model (2)" width="90" x="380" y="255">
           <list key="application_parameters"/>
         </operator>
         <operator activated="true" class="performance_classification" compatibility="6.0.006" expanded="true" height="76" name="Performance" width="90" x="514" y="165">
           <list key="class_weights"/>
         </operator>
         <connect from_op="Retrieve w1024_training" from_port="output" to_op="GHA" to_port="example set input"/>
         <connect from_op="GHA" from_port="example set output" to_op="SVM" to_port="training set"/>
         <connect from_op="GHA" from_port="preprocessing model" to_op="Apply Model" to_port="model"/>
         <connect from_op="SVM" from_port="model" to_op="Apply Model (2)" to_port="model"/>
         <connect from_op="Retrieve w1024_test_set" from_port="output" to_op="Apply Model" to_port="unlabelled data"/>
         <connect from_op="Apply Model" from_port="labelled data" to_op="Apply Model (2)" to_port="unlabelled data"/>
         <connect from_op="Apply Model (2)" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
         <connect from_op="Performance" from_port="performance" to_port="result 1"/>
         <portSpacing port="source_input 1" spacing="0"/>
         <portSpacing port="sink_performance" spacing="0"/>
         <portSpacing port="sink_result 1" spacing="0"/>
         <portSpacing port="sink_result 2" spacing="0"/>
       </process>
     </operator>
     <operator activated="true" class="collect" compatibility="6.0.006" expanded="true" height="76" name="Collect" width="90" x="380" y="165"/>
     <connect from_op="Loop Parameters" from_port="result 1" to_op="Collect" to_port="input 1"/>
     <connect from_op="Collect" from_port="collection" to_port="result 1"/>
     <portSpacing port="source_input 1" spacing="0"/>
     <portSpacing port="sink_result 1" spacing="0"/>
     <portSpacing port="sink_result 2" spacing="0"/>
   </process>
 </operator>
</process>
[/code]

Answers

  • homburg Employee-RapidMiner, Member Posts: 114 RM Data Scientist
    Hi MaggiD,

    Thanks for sending your process file. The problem you described is very likely caused by a non-fixed random seed. I'd recommend checking the "use local random seed" option of the GHA operator. Please note that you might have to enable the expert mode to see this option.

    Cheers,
    Helge
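
    For reference, the fixed-seed configuration of the GHA operator might look like the fragment below. This is a sketch only: the use_local_random_seed/local_random_seed keys follow RapidMiner's usual parameter naming convention, and the seed value itself is arbitrary.

    [code]
    <operator activated="true" class="generalized_hebbian_algorithm" compatibility="6.0.006" expanded="true" height="94" name="GHA" width="90" x="246" y="75">
      <parameter key="number_of_components" value="4"/>
      <!-- fix the seed so every loop iteration starts from the same initial weights -->
      <parameter key="use_local_random_seed" value="true"/>
      <parameter key="local_random_seed" value="1992"/>
    </operator>
    [/code]

    With the seed fixed, each "Loop Parameters" iteration initializes the GHA weights identically, so the resulting premodel no longer depends on earlier iterations.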
  • MaggiD Member Posts: 3 Contributor I
    Thank you very much, I didn't know about the random seed before. :)