"Calculate macro- and micro-averaged f-measure on multiclass data"

text_miner Member Posts: 11 Contributor II
edited May 2019 in Help
Hello,

I was wondering if RapidMiner has the ability to calculate the macro- and micro-averaged f-measure for multiclass data (i.e., more than two classes).  I know that when I work with binomial data the Binomial Classification Performance operator has an option for f-measure.  However, the multiclass version of the Performance operator does not calculate f-measure (as far as I can tell).

As an alternative, is there a way to capture the confusion matrix from the Performance operator (for logging purposes)?  Again, I see that the binomial version of the operator has this capability, but I don't see such options in the multiclass version.

Below is a sample of my process.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.1.004">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="5.1.004" expanded="true" name="Process">
    <process expanded="true" height="591" width="752">
      <operator activated="true" class="generate_data" compatibility="5.1.004" expanded="true" height="60" name="Generate Data" width="90" x="37" y="30">
        <parameter key="target_function" value="multi classification"/>
      </operator>
      <operator activated="true" class="x_validation" compatibility="5.1.004" expanded="true" height="112" name="Validation" width="90" x="179" y="30">
        <process expanded="true" height="591" width="351">
          <operator activated="true" class="support_vector_machine_libsvm" compatibility="5.1.004" expanded="true" height="76" name="SVM" width="90" x="112" y="30">
            <list key="class_weights"/>
          </operator>
          <connect from_port="training" to_op="SVM" to_port="training set"/>
          <connect from_op="SVM" from_port="model" to_port="model"/>
          <portSpacing port="source_training" spacing="0"/>
          <portSpacing port="sink_model" spacing="0"/>
          <portSpacing port="sink_through 1" spacing="0"/>
        </process>
        <process expanded="true" height="591" width="351">
          <operator activated="true" class="apply_model" compatibility="5.1.004" expanded="true" height="76" name="Apply Model" width="90" x="45" y="30">
            <list key="application_parameters"/>
          </operator>
          <operator activated="true" class="performance_classification" compatibility="5.1.004" expanded="true" height="76" name="Performance" width="90" x="179" y="30">
            <parameter key="classification_error" value="true"/>
            <parameter key="kappa" value="true"/>
            <parameter key="weighted_mean_recall" value="true"/>
            <parameter key="weighted_mean_precision" value="true"/>
            <parameter key="spearman_rho" value="true"/>
            <parameter key="kendall_tau" value="true"/>
            <parameter key="absolute_error" value="true"/>
            <parameter key="relative_error" value="true"/>
            <parameter key="relative_error_lenient" value="true"/>
            <parameter key="relative_error_strict" value="true"/>
            <parameter key="normalized_absolute_error" value="true"/>
            <parameter key="root_mean_squared_error" value="true"/>
            <parameter key="root_relative_squared_error" value="true"/>
            <parameter key="squared_error" value="true"/>
            <parameter key="correlation" value="true"/>
            <parameter key="squared_correlation" value="true"/>
            <parameter key="cross-entropy" value="true"/>
            <parameter key="margin" value="true"/>
            <parameter key="soft_margin_loss" value="true"/>
            <parameter key="logistic_loss" value="true"/>
            <list key="class_weights"/>
          </operator>
          <connect from_port="model" to_op="Apply Model" to_port="model"/>
          <connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
          <connect from_op="Apply Model" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
          <connect from_op="Performance" from_port="performance" to_port="averagable 1"/>
          <portSpacing port="source_model" spacing="0"/>
          <portSpacing port="source_test set" spacing="0"/>
          <portSpacing port="source_through 1" spacing="0"/>
          <portSpacing port="sink_averagable 1" spacing="0"/>
          <portSpacing port="sink_averagable 2" spacing="0"/>
        </process>
      </operator>
      <connect from_op="Generate Data" from_port="output" to_op="Validation" to_port="training"/>
      <connect from_op="Validation" from_port="training" to_port="result 1"/>
      <connect from_op="Validation" from_port="averagable 1" to_port="result 2"/>
      <portSpacing port="source_input 1" spacing="0"/>
      <portSpacing port="sink_result 1" spacing="0"/>
      <portSpacing port="sink_result 2" spacing="0"/>
      <portSpacing port="sink_result 3" spacing="0"/>
    </process>
  </operator>
</process>
Any help would be greatly appreciated.

Thanks!

Answers

  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    could you tell me how the f-measure is defined for a multiclass problem? I thought it was only defined for two classes...

    You can use the Reporting Extension to document the performance.

    Greetings,
      Sebastian
  • text_miner Member Posts: 11 Contributor II
    Hi Sebastian,

    Thanks for the tip on the Reporting Extension.  I'll play around with that and see how to incorporate it into my process.

    The following paper provides definitions (on page 611) of how to calculate both the macro- and micro-averaged f-measure with multiple categories.

    http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.8244&rep=rep1&type=pdf

    A. Özgür, L. Özgür, and T. Güngör (2005). "Text Categorization with Class-Based and Corpus-Based Keyword Selection." Lecture Notes in Computer Science, Vol. 3733, pp. 606-615.

    In general, the micro-averaged f-measure is computed from the global (pooled) precision and recall over all classes, while the macro-averaged f-measure is the average of the per-category f-measures. The macro-averaged f-measure can also be weighted by class size.

    Note: the micro-averaged f-measure is the same as accuracy when you are dealing with single-label classification.
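
    To make the definitions concrete, here is a minimal sketch of the computation outside RapidMiner (plain Python, with made-up example labels, not tied to the process above):

    # Sketch of macro- vs. micro-averaged F-measure on hypothetical labels.
    from collections import Counter

    def f_measures(y_true, y_pred):
        labels = sorted(set(y_true) | set(y_pred))
        tp, fp, fn = Counter(), Counter(), Counter()
        for t, p in zip(y_true, y_pred):
            if t == p:
                tp[t] += 1
            else:
                fp[p] += 1
                fn[t] += 1
        # Macro: average the per-class F-measures with equal weight per class.
        per_class_f1 = []
        for c in labels:
            prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
            rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
            per_class_f1.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        macro_f = sum(per_class_f1) / len(labels)
        # Micro: pool the counts over all classes, then compute one global F-measure
        # from the global precision and recall.
        tp_sum, fp_sum, fn_sum = sum(tp.values()), sum(fp.values()), sum(fn.values())
        micro_prec = tp_sum / (tp_sum + fp_sum)
        micro_rec = tp_sum / (tp_sum + fn_sum)
        micro_f = 2 * micro_prec * micro_rec / (micro_prec + micro_rec)
        return macro_f, micro_f

    # For single-label data the micro-averaged value equals accuracy (3/5 = 0.6 here).
    print(f_measures(["a", "a", "b", "c", "c"], ["a", "b", "b", "c", "a"]))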

    I also found that Weka provides methods to calculate these measures in the weka.classifiers.Evaluation class (https://svn.scms.waikato.ac.nz/svn/weka/trunk/weka/src/main/java/weka/classifiers/Evaluation.java).

    Thanks!
  • land RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 2,531 Unicorn
    Hi,
    I see. Uhm. Please add a feature request for that in the bug tracker!

    Greetings,
    Sebastian