
Feature Selection - Backward X Val

B_Miner Member Posts: 72 Contributor II
edited June 2019 in Help
Hi guys-

I am running a feature selection. I included the generated direct mailing data set so that the process is replicable.

As I have it configured, I was under the impression that the backward algorithm of FS should:

1) start with all 'p' predictors, and use 10-fold x-validation to get an accuracy figure.
2) drop the least important predictor and use 10-fold x-validation to get an accuracy figure for the remaining p-1 predictors.
3) continue until only 1 predictor is left, or until a stopping criterion is reached ('limit generations without improval' is checked).

Looking at the process log, this is not the case. It also seems that the full 10 folds are not being run.
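
For reference, here is a minimal sketch of the loop I expected, in plain Python (this is not RapidMiner code; scikit-learn is used only as a stand-in for the Decision Tree and X-Validation, and the data are synthetic):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for the direct mailing data: 8 predictors, binary label
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

def cv_accuracy(feature_idx):
    # 10-fold cross-validated accuracy for one candidate feature subset
    model = DecisionTreeClassifier(random_state=0)
    return cross_val_score(model, X[:, feature_idx], y, cv=10).mean()

selected = list(range(X.shape[1]))          # 1) start with all p predictors
best_score = cv_accuracy(selected)

while len(selected) > 1:
    # 2) try dropping each remaining predictor in turn
    candidates = [[f for f in selected if f != drop] for drop in selected]
    scores = [cv_accuracy(c) for c in candidates]
    if max(scores) < best_score:            # 3) stop when no improvement
        break
    best_score = max(scores)
    selected = candidates[int(np.argmax(scores))]

print("kept features:", selected, "estimated accuracy:", round(best_score, 4))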

Thanks!


<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.0">
 <context>
   <input>
     <location/>
   </input>
   <output>
     <location/>
     <location/>
     <location/>
     <location/>
   </output>
   <macros/>
 </context>
 <operator activated="true" class="process" expanded="true" name="Root">
   <description>&lt;p&gt; Transformations of the attribute space may ease learning in a way, that simple learning schemes may be able to learn complex functions. This is the basic idea of the kernel trick. But even without kernel based learning schemes the transformation of feature space may be necessary to reach good learning results. &lt;/p&gt;  &lt;p&gt; RapidMiner offers several different feature selection, construction, and extraction methods. This selection process (the well known forward selection) uses an inner cross validation for performance estimation. This building block serves as fitness evaluation for all candidate feature sets. Since the performance of a certain learning scheme is taken into account we refer to processes of this type as &amp;quot;wrapper approaches&amp;quot;.&lt;/p&gt;  &lt;p&gt;Additionally the process log operator plots intermediate results. You can inspect them online in the Results tab. Please refer to the visualization sample processes or the RapidMiner tutorial for further details.&lt;/p&gt;  &lt;p&gt; Try the following: &lt;ul&gt; &lt;li&gt;Start the process and change to &amp;quot;Result&amp;quot; view. There can be a plot selected. Plot the &amp;quot;performance&amp;quot; against the &amp;quot;generation&amp;quot; of the feature selection operator.&lt;/li&gt; &lt;li&gt;Select the feature selection operator in the tree view. Change the search directory from forward (forward selection) to backward (backward elimination). Restart the process. All features will be selected.&lt;/li&gt; &lt;li&gt;Select the feature selection operator. Right click to open the context menu and repace the operator by another feature selection scheme (for example a genetic algorithm).&lt;/li&gt; &lt;li&gt;Have a look at the list of the process log operator. Every time it is applied it collects the specified data. Please refer to the RapidMiner Tutorial for further explanations. After changing the feature selection operator to the genetic algorithm approach, you have to specify the correct values. &lt;table&gt;&lt;tr&gt;&lt;td&gt;&lt;icon&gt;groups/24/visualization&lt;/icon&gt;&lt;/td&gt;&lt;td&gt;&lt;i&gt;Use the process log operator to log values online.&lt;/i&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/p&gt;</description>
   <process expanded="true" height="280" width="561">
     <operator activated="true" class="generate_direct_mailing_data" expanded="true" height="60" name="Generate Direct Mailing Data" width="90" x="16" y="52">
       <parameter key="number_examples" value="10000"/>
       <parameter key="use_local_random_seed" value="true"/>
     </operator>
     <operator activated="true" class="optimize_selection" expanded="true" height="94" name="FS" width="90" x="246" y="30">
       <parameter key="selection_direction" value="backward"/>
       <parameter key="limit_generations_without_improval" value="false"/>
       <parameter key="use_local_random_seed" value="true"/>
       <process expanded="true" height="275" width="561">
         <operator activated="true" class="x_validation" expanded="true" height="112" name="XValidation" width="90" x="45" y="30">
           <parameter key="sampling_type" value="shuffled sampling"/>
           <process expanded="true" height="275" width="255">
             <operator activated="true" class="decision_tree" expanded="true" height="76" name="Decision Tree" width="90" x="45" y="30"/>
             <connect from_port="training" to_op="Decision Tree" to_port="training set"/>
             <connect from_op="Decision Tree" from_port="model" to_port="model"/>
             <portSpacing port="source_training" spacing="0"/>
             <portSpacing port="sink_model" spacing="0"/>
             <portSpacing port="sink_through 1" spacing="0"/>
           </process>
           <process expanded="true" height="296" width="413">
             <operator activated="true" class="apply_model" expanded="true" height="76" name="Applier" width="90" x="45" y="75">
               <list key="application_parameters"/>
             </operator>
             <operator activated="true" class="performance" expanded="true" height="76" name="Performance" width="90" x="179" y="120"/>
             <operator activated="true" class="log" expanded="true" height="76" name="Log" width="90" x="313" y="210">
               <parameter key="filename" value="C:\Documents and Settings\aiufh35\Desktop\outputresultsdetails.log"/>
               <list key="log">
                 <parameter key="performance" value="operator.Performance.value.performance"/>
                 <parameter key="iteration" value="operator.XValidation.value.iteration"/>
                 <parameter key="generation" value="operator.FS.value.generation"/>
                 <parameter key="featureNames" value="operator.FS.value.feature_names"/>
               </list>
               <parameter key="persistent" value="true"/>
             </operator>
             <connect from_port="model" to_op="Applier" to_port="model"/>
             <connect from_port="test set" to_op="Applier" to_port="unlabelled data"/>
             <connect from_op="Applier" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
             <connect from_op="Performance" from_port="performance" to_op="Log" to_port="through 1"/>
             <connect from_op="Log" from_port="through 1" to_port="averagable 1"/>
             <portSpacing port="source_model" spacing="0"/>
             <portSpacing port="source_test set" spacing="0"/>
             <portSpacing port="source_through 1" spacing="0"/>
             <portSpacing port="sink_averagable 1" spacing="0"/>
             <portSpacing port="sink_averagable 2" spacing="0"/>
           </process>
         </operator>
         <operator activated="true" class="log" expanded="true" height="76" name="ProcessLog" width="90" x="303" y="48">
           <parameter key="filename" value="C:\Documents and Settings\aiufh35\Desktop\outputresults.log"/>
           <list key="log">
             <parameter key="performance" value="operator.Performance.value.performance"/>
             <parameter key="feature_names" value="operator.FS.value.feature_names"/>
             <parameter key="generation" value="operator.FS.value.generation"/>
           </list>
         </operator>
         <connect from_port="example set" to_op="XValidation" to_port="training"/>
         <connect from_op="XValidation" from_port="averagable 1" to_op="ProcessLog" to_port="through 1"/>
         <connect from_op="ProcessLog" from_port="through 1" to_port="performance"/>
         <portSpacing port="source_example set" spacing="0"/>
         <portSpacing port="source_through 1" spacing="0"/>
         <portSpacing port="sink_performance" spacing="0"/>
       </process>
     </operator>
     <connect from_op="Generate Direct Mailing Data" from_port="output" to_op="FS" to_port="example set in"/>
     <connect from_op="FS" from_port="example set out" to_port="result 1"/>
     <connect from_op="FS" from_port="weights" to_port="result 2"/>
     <connect from_op="FS" from_port="performance" to_port="result 3"/>
     <portSpacing port="source_input 1" spacing="0"/>
     <portSpacing port="sink_result 1" spacing="0"/>
     <portSpacing port="sink_result 2" spacing="0"/>
     <portSpacing port="sink_result 3" spacing="0"/>
     <portSpacing port="sink_result 4" spacing="0"/>
   </process>
 </operator>
</process>





Answers

  • B_Miner Member Posts: 72 Contributor II
    OK, I think I found that I was using a deprecated operator (FS)? On these posts it is hard to tell (I copied a process I found).

    So... here is the new code. The Log operator tracking the inner process (10 performance values, one per fold, for each candidate per generation) seems to work OK. But the outer one, which I was hoping would track the average of the x-validation, gives multiple performance values per generation, and a declining number of them. Shouldn't there be just one value - the average - output for each generation?

    Also, do all these feature selection processes just produce a 1 or 0 per attribute (where 1 means retain)? Or is there a way to rank the attributes?

    Finally - can you feed the results of the feature selection into a model so that only the important (I guess weight = 1) attributes are used?

    Thanks!


    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <process version="5.0">
      <context>
        <input>
          <location/>
        </input>
        <output>
          <location/>
          <location/>
        </output>
        <macros/>
      </context>
      <operator activated="true" class="process" expanded="true" name="Root">
        <description>&lt;p&gt; Transformations of the attribute space may ease learning in a way, that simple learning schemes may be able to learn complex functions. This is the basic idea of the kernel trick. But even without kernel based learning schemes the transformation of feature space may be necessary to reach good learning results. &lt;/p&gt;  &lt;p&gt; RapidMiner offers several different feature selection, construction, and extraction methods. This selection process (the well known forward selection) uses an inner cross validation for performance estimation. This building block serves as fitness evaluation for all candidate feature sets. Since the performance of a certain learning scheme is taken into account we refer to processes of this type as &amp;quot;wrapper approaches&amp;quot;.&lt;/p&gt;  &lt;p&gt;Additionally the process log operator plots intermediate results. You can inspect them online in the Results tab. Please refer to the visualization sample processes or the RapidMiner tutorial for further details.&lt;/p&gt;  &lt;p&gt; Try the following: &lt;ul&gt; &lt;li&gt;Start the process and change to &amp;quot;Result&amp;quot; view. There can be a plot selected. Plot the &amp;quot;performance&amp;quot; against the &amp;quot;generation&amp;quot; of the feature selection operator.&lt;/li&gt; &lt;li&gt;Select the feature selection operator in the tree view. Change the search directory from forward (forward selection) to backward (backward elimination). Restart the process. All features will be selected.&lt;/li&gt; &lt;li&gt;Select the feature selection operator. Right click to open the context menu and repace the operator by another feature selection scheme (for example a genetic algorithm).&lt;/li&gt; &lt;li&gt;Have a look at the list of the process log operator. Every time it is applied it collects the specified data. Please refer to the RapidMiner Tutorial for further explanations. After changing the feature selection operator to the genetic algorithm approach, you have to specify the correct values. &lt;table&gt;&lt;tr&gt;&lt;td&gt;&lt;icon&gt;groups/24/visualization&lt;/icon&gt;&lt;/td&gt;&lt;td&gt;&lt;i&gt;Use the process log operator to log values online.&lt;/i&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt; &lt;/li&gt; &lt;/ul&gt; &lt;/p&gt;</description>
        <process expanded="true" height="280" width="561">
          <operator activated="true" class="generate_direct_mailing_data" expanded="true" height="60" name="Generate Direct Mailing Data" width="90" x="16" y="52">
            <parameter key="number_examples" value="10000"/>
            <parameter key="use_local_random_seed" value="true"/>
          </operator>
          <operator activated="true" class="optimize_selection" expanded="true" height="94" name="Optimize Selection" width="90" x="288" y="32">
            <parameter key="selection_direction" value="backward"/>
            <parameter key="limit_generations_without_improval" value="false"/>
            <process expanded="true" height="257" width="543">
              <operator activated="true" class="x_validation" expanded="true" height="112" name="XValidation" width="90" x="45" y="30">
                <parameter key="sampling_type" value="shuffled sampling"/>
                <process expanded="true" height="275" width="255">
                  <operator activated="true" class="decision_tree" expanded="true" height="76" name="Decision Tree" width="90" x="45" y="30"/>
                  <connect from_port="training" to_op="Decision Tree" to_port="training set"/>
                  <connect from_op="Decision Tree" from_port="model" to_port="model"/>
                  <portSpacing port="source_training" spacing="0"/>
                  <portSpacing port="sink_model" spacing="0"/>
                  <portSpacing port="sink_through 1" spacing="0"/>
                </process>
                <process expanded="true" height="296" width="413">
                  <operator activated="true" class="apply_model" expanded="true" height="76" name="Applier" width="90" x="45" y="75">
                    <list key="application_parameters"/>
                  </operator>
                  <operator activated="true" class="performance" expanded="true" height="76" name="Performance" width="90" x="179" y="120"/>
                  <operator activated="true" class="log" expanded="true" height="76" name="Log" width="90" x="313" y="210">
                    <parameter key="filename" value="C:\Documents and Settings\aiufh35\Desktop\outputresultsdetails.log"/>
                    <list key="log">
                      <parameter key="performance" value="operator.Performance.value.performance"/>
                      <parameter key="iteration" value="operator.XValidation.value.iteration"/>
                      <parameter key="generation" value="operator.Optimize Selection.value.generation"/>
                      <parameter key="featureNames" value="operator.Optimize Selection.value.feature_names"/>
                    </list>
                    <parameter key="persistent" value="true"/>
                  </operator>
                  <connect from_port="model" to_op="Applier" to_port="model"/>
                  <connect from_port="test set" to_op="Applier" to_port="unlabelled data"/>
                  <connect from_op="Applier" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
                  <connect from_op="Performance" from_port="performance" to_op="Log" to_port="through 1"/>
                  <connect from_op="Log" from_port="through 1" to_port="averagable 1"/>
                  <portSpacing port="source_model" spacing="0"/>
                  <portSpacing port="source_test set" spacing="0"/>
                  <portSpacing port="source_through 1" spacing="0"/>
                  <portSpacing port="sink_averagable 1" spacing="0"/>
                  <portSpacing port="sink_averagable 2" spacing="0"/>
                </process>
              </operator>
              <operator activated="true" class="log" expanded="true" height="76" name="ProcessLog" width="90" x="303" y="48">
                <parameter key="filename" value="C:\Documents and Settings\aiufh35\Desktop\outputresults.log"/>
                <list key="log">
                  <parameter key="performance" value="operator.XValidation.value.performance"/>
                  <parameter key="feature_names" value="operator.Optimize Selection.value.feature_names"/>
                  <parameter key="generation" value="operator.Optimize Selection.value.generation"/>
                </list>
              </operator>
              <connect from_port="example set" to_op="XValidation" to_port="training"/>
              <connect from_op="XValidation" from_port="averagable 1" to_op="ProcessLog" to_port="through 1"/>
              <connect from_op="ProcessLog" from_port="through 1" to_port="performance"/>
              <portSpacing port="source_example set" spacing="0"/>
              <portSpacing port="source_through 1" spacing="0"/>
              <portSpacing port="sink_performance" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Generate Direct Mailing Data" from_port="output" to_op="Optimize Selection" to_port="example set in"/>
          <connect from_op="Optimize Selection" from_port="example set out" to_port="result 3"/>
          <connect from_op="Optimize Selection" from_port="weights" to_port="result 2"/>
          <connect from_op="Optimize Selection" from_port="performance" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
    </process>





  • B_Miner Member Posts: 72 Contributor II
    Hi all-

    Just curious if anyone has insight on this. Thanks!!
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    it seems I have somehow overlooked your questions a few times. I guess I read the thread at some point and forgot that I hadn't answered it yet. But now here's my answer:

    I would suggest the new Backward Elimination and Forward Selection operators for this purpose. They are faster, consume less memory, and are much more stable. Last but not least, they offer better stopping criteria.

    Greetings,
    Sebastian
  • cherokee Member Posts: 82 Maven
    Hi B_Miner,

    just a bit on your direct questions:

    Multiple (but decreasing) performance vectors: The output of your outer log is not the average of a generation. It is the average of one 10-fold cross-validation - the average for one individual (one candidate feature set). Assume you have 10 features. In the first step, backward elimination must create 10 feature combinations (each time leaving one feature out). Each of these combinations must be tested (run through the XVal). So you get 10 averages. The operator then chooses the best combination. In the next step it has to test 9 combinations (each time leaving one of the remaining 9 features out), and so on. This way you see multiple averages (one for every candidate feature combination) but in decreasing numbers (as there are fewer candidate combinations over time).
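
    In plain Python the counting looks like this (just an illustration, not RapidMiner code):

    p = 10                                # features at the start
    for generation in range(1, p):        # one generation per elimination step
        candidates = p - generation + 1   # one candidate per feature left out
        print(f"generation {generation}: {candidates} candidates -> "
              f"{candidates} X-Val averages in the outer log, "
              f"{candidates * 10} single-fold values in the inner log")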

    Putting the feature selection into a model: This is not directly possible. You have to store the feature weights (actually only 1s and 0s). Then you can use those weights with the Select by Weights operator; just select every attribute with a weight greater than or equal to 1.
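
    A conceptual sketch of that filtering step in plain Python (the operator does the equivalent on the example set; the numbers here are made up):

    import numpy as np

    X = np.random.rand(5, 4)                  # toy example set with 4 attributes
    weights = np.array([1.0, 0.0, 1.0, 0.0])  # weights delivered by the selection (only 1s and 0s)
    X_kept = X[:, weights >= 1.0]             # keep every attribute with weight >= 1
    print(X_kept.shape)                       # (5, 2): only the selected attributes remain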

    This is (AFAIK) also the case for the new operators mentioned by Sebastian.

    Best regards,
    chero
  • B_Miner Member Posts: 72 Contributor II
    That is an extremely helpful explanation - I just was not familiar with what was happening.

    Thanks a lot!

    Brian
  • B_Miner Member Posts: 72 Contributor II
    Hi Cherokee, Can I ask a follow-up?

    So that I can understand what this algorithm does, is this correct?

    Step 1: take all 8 predictors and create eight runs, where each run includes only 7 of the predictors. Run each of these subsets through 10-fold x-validation. So my inner log should have 80 accuracy measures and the outer log should record 8 (the average of each of the 10-fold cross-validations).

    For this part, I see that what I actually get in the outer log is the LAST value from each of the 10-fold x-validations, not the mean. (?!)

    Step 2: repeat Step 1, but using only the 7 predictors that make up the best subset from the cross-validation (in essence dropping one predictor).


    Do you know how the final '1's (instead of '0's) are chosen in the final output?


    Here is my code again:


    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <process version="5.0">
      <context>
        <input>
          <location/>
        </input>
        <output>
          <location/>
          <location/>
          <location/>
          <location/>
        </output>
        <macros/>
      </context>
      <operator activated="true" class="process" expanded="true" name="Root">

        <process expanded="true" height="280" width="561">
          <operator activated="true" class="generate_direct_mailing_data" expanded="true" height="60" name="Generate Direct Mailing Data" width="90" x="16" y="52">
            <parameter key="number_examples" value="10000"/>
            <parameter key="use_local_random_seed" value="true"/>
          </operator>
          <operator activated="true" class="optimize_selection" expanded="true" height="94" name="Optimize Selection" width="90" x="288" y="32">
            <parameter key="selection_direction" value="backward"/>
            <parameter key="limit_generations_without_improval" value="false"/>
            <process expanded="true" height="257" width="543">
              <operator activated="true" class="x_validation" expanded="true" height="112" name="XValidation" width="90" x="45" y="30">
                <parameter key="sampling_type" value="shuffled sampling"/>
                <process expanded="true" height="275" width="255">
                  <operator activated="true" class="decision_tree" expanded="true" height="76" name="Decision Tree" width="90" x="45" y="30"/>
                  <connect from_port="training" to_op="Decision Tree" to_port="training set"/>
                  <connect from_op="Decision Tree" from_port="model" to_port="model"/>
                  <portSpacing port="source_training" spacing="0"/>
                  <portSpacing port="sink_model" spacing="0"/>
                  <portSpacing port="sink_through 1" spacing="0"/>
                </process>
                <process expanded="true" height="296" width="413">
                  <operator activated="true" class="apply_model" expanded="true" height="76" name="Applier" width="90" x="45" y="75">
                    <list key="application_parameters"/>
                  </operator>
                  <operator activated="true" class="performance" expanded="true" height="76" name="Performance" width="90" x="179" y="120"/>
                  <operator activated="true" class="log" expanded="true" height="76" name="Log" width="90" x="313" y="210">
                    <parameter key="filename" value="C:\Documents and Settings\aiufh35\Desktop\outputresultsdetails.log"/>
                    <list key="log">
                      <parameter key="performance" value="operator.Performance.value.performance"/>
                      <parameter key="iteration" value="operator.XValidation.value.iteration"/>
                      <parameter key="generation" value="operator.Optimize Selection.value.generation"/>
                      <parameter key="featureNames" value="operator.Optimize Selection.value.feature_names"/>
                    </list>
                    <parameter key="persistent" value="true"/>
                  </operator>
                  <connect from_port="model" to_op="Applier" to_port="model"/>
                  <connect from_port="test set" to_op="Applier" to_port="unlabelled data"/>
                  <connect from_op="Applier" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
                  <connect from_op="Performance" from_port="performance" to_op="Log" to_port="through 1"/>
                  <connect from_op="Log" from_port="through 1" to_port="averagable 1"/>
                  <portSpacing port="source_model" spacing="0"/>
                  <portSpacing port="source_test set" spacing="0"/>
                  <portSpacing port="source_through 1" spacing="0"/>
                  <portSpacing port="sink_averagable 1" spacing="0"/>
                  <portSpacing port="sink_averagable 2" spacing="0"/>
                </process>
              </operator>
              <operator activated="true" class="log" expanded="true" height="76" name="ProcessLog" width="90" x="303" y="48">
                <parameter key="filename" value="C:\Documents and Settings\aiufh35\Desktop\outputresults.log"/>
                <list key="log">
                  <parameter key="performance" value="operator.XValidation.value.performance"/>
                  <parameter key="feature_names" value="operator.Optimize Selection.value.feature_names"/>
                  <parameter key="generation" value="operator.Optimize Selection.value.generation"/>
                </list>
              </operator>
              <connect from_port="example set" to_op="XValidation" to_port="training"/>
              <connect from_op="XValidation" from_port="averagable 1" to_op="ProcessLog" to_port="through 1"/>
              <connect from_op="ProcessLog" from_port="through 1" to_port="performance"/>
              <portSpacing port="source_example set" spacing="0"/>
              <portSpacing port="source_through 1" spacing="0"/>
              <portSpacing port="sink_performance" spacing="0"/>
            </process>
          </operator>
          <connect from_op="Generate Direct Mailing Data" from_port="output" to_op="Optimize Selection" to_port="example set in"/>
          <connect from_op="Optimize Selection" from_port="example set out" to_port="result 3"/>
          <connect from_op="Optimize Selection" from_port="weights" to_port="result 2"/>
          <connect from_op="Optimize Selection" from_port="performance" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
          <portSpacing port="sink_result 3" spacing="0"/>
          <portSpacing port="sink_result 4" spacing="0"/>
        </process>
      </operator>
    </process>





    Thanks so much for your help!
  • cherokee Member Posts: 82 Maven
    Hi B_Miner,

    Sure, a follow-up is no problem.
    B_Miner wrote:

    Step 1: take all 8 predictors and create eight runs, where each run includes only 7 of the predictors. Run each of these subsets through 10-fold x-validation. So my inner log should have 80 accuracy measures and the outer log should record 8 (the average of each of the 10-fold cross-validations).
    In general yes. You can change that behaviour a bit with the parameter "keep best".

    For this part, I see that what I actually get in the outer log is the LAST value from each of the 10-fold x-validations, not the mean. (?!)
    Unfortunately, I could replicate this behaviour. I see that it is happening, but I don't know why. One of the developers should check on that. Hopefully it is just a problem with the delivery of the values, not with the algorithm itself.

    Step 2: repeat Step 1, but using only the 7 predictors that make up the best subset from the cross-validation (in essence dropping one predictor).
    Yes. But you can change how many descendants are kept with the parameter "keep best".

    Do you know how the final '1's (instead of '0's) are chosen in the final output?
    Well, I don't know exactly what you mean here. Either you want to know (a) why the empty set isn't checked, or (b) how the resulting feature combination is selected. For (a) I don't know the answer; it should be checked, IMHO. Regarding (b): the final set is the evaluated set with the best performance value.

    Hope I could help,
    chero
  • B_Miner Member Posts: 72 Contributor II
    Thanks Cherokee! I am getting the concept now and hopefully one of the developers can chime in on why the last value is being extracted from the xvalidation and not the mean of the 10 folds.

    Brian
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    where did you log the value from? The performance value of the cross-validation will be the average of all previous iterations.

    Greetings,
      Sebastian
  • B_Miner Member Posts: 72 Contributor II
    Hi Sebastian,

    The XML code is immediately above. The log is from the x-validation. It appears not to be the average but the last value per validation run. Does this answer your question?
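
    To illustrate with made-up numbers what I mean (plain Python, not taken from the process):

    # ten single-fold accuracies from one 10-fold x-validation (toy values)
    fold_accuracies = [0.81, 0.84, 0.79, 0.83, 0.80, 0.85, 0.82, 0.78, 0.84, 0.81]

    mean_value = sum(fold_accuracies) / len(fold_accuracies)  # what I expected to be logged
    last_value = fold_accuracies[-1]                          # what actually shows up
    print(f"mean of the folds: {mean_value:.3f}  vs  last fold only: {last_value:.3f}")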
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    it's really embarrassing, but there was a bug in the XValidation that found its way into the code during the porting of the operator to 5.0. We have fixed it in the current developer version, and it will not be in the final release.

    Greetings,
      Sebastian
  • B_Miner Member Posts: 72 Contributor II
    Thanks!

    Is there a place to get a snapshot build when bugs are fixed? For example, there was one in the text mining plugin that I found that was corrected, but I'm not sure where to find the newest build.
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    you could check out the newest developer version from SVN on SourceForge. I'm not sure whether the extensions get mirrored there, but if not, I will ask the admin in charge to do it.
    Unfortunately, we are really busy because of CeBIT. There's a lot of work to do, so we cannot make updates available as frequently as we would wish.

    Greetings,
      Sebastian