
Join / Append / Merge Multiple TF-IDF Example Sets, or Recompute?

mob Member Posts: 37 Contributor II
edited April 2020 in Help
I'm trying to compare documents from two datasets with the Data to Similarity operator, but I'm not sure how to join/merge/append the example sets that contain the TF-IDF results for each word.

• I can't join because there is no common ID.
• I can't append because each dataset contains different tokens, though I expect some common ones as well.
• The attribute counts also differ (20,000+ attributes in each example set).

The datasets required different pre-processing to end up with TF-IDF, so can I really recompute TF-IDF, provided I can figure out how to merge the original datasets into one before calculating it?
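
A quick way to see why the two TF-IDF tables can't simply be glued together: the IDF part depends on the whole corpus, so even a shared token carries a different weight in each set, while recomputing on the merged texts yields one shared attribute space. A minimal sketch of that idea in Python (scikit-learn stands in for the RapidMiner text processing here, and the two corpora are made up for illustration):

    from sklearn.feature_extraction.text import TfidfVectorizer

    corpus_a = ["the cat sat on the mat", "the dog barked"]
    corpus_b = ["the cat chased the dog", "the cat slept"]

    # Fitted per corpus: the vocabularies (attribute sets) differ, and even
    # a shared token like "cat" gets a different IDF weight in each table.
    vec_a = TfidfVectorizer().fit(corpus_a)
    vec_b = TfidfVectorizer().fit(corpus_b)
    print(vec_a.idf_[vec_a.vocabulary_["cat"]])  # ~1.405 ("cat" in 1 of 2 docs)
    print(vec_b.idf_[vec_b.vocabulary_["cat"]])  # 1.0   ("cat" in 2 of 2 docs)

    # Recomputed on the merged corpus: one shared attribute space, so the
    # resulting rows are directly comparable (and appendable).
    vec_all = TfidfVectorizer().fit(corpus_a + corpus_b)
    X_a = vec_all.transform(corpus_a)
    X_b = vec_all.transform(corpus_b)
    print(X_a.shape[1] == X_b.shape[1])  # True: identical columns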

Answers

  • MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
    Hi mob,

    Have you tried using Cross Distances instead of Data to Similarity?

    ~martin
    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
  • mob Member Posts: 37 Contributor II
    Doesn't it require "the same attributes and in the same order"? Is it possible to order the tokens with TF-IDF and identify which attributes I need to generate and append to the request set?

  • MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
    The order should be no problem. You can generate the same tokens using the word list.

    I'm thinking of something like this:

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <process version="7.0.000">
      <context>
        <input/>
        <output/>
        <macros/>
      </context>
      <operator activated="true" class="process" compatibility="7.0.000" expanded="true" name="Process">
        <process expanded="true">
          <operator activated="true" class="text:create_document" compatibility="6.5.000" expanded="true" height="68" name="Create Document" width="90" x="45" y="34">
            <parameter key="text" value="This is one text"/>
          </operator>
          <operator activated="true" class="text:process_documents" compatibility="6.5.000" expanded="true" height="103" name="Process Documents" width="90" x="246" y="34">
            <process expanded="true">
              <operator activated="true" class="text:tokenize" compatibility="6.5.000" expanded="true" height="68" name="Tokenize" width="90" x="112" y="34"/>
              <connect from_port="document" to_op="Tokenize" to_port="document"/>
              <connect from_op="Tokenize" from_port="document" to_port="document 1"/>
              <portSpacing port="source_document" spacing="0"/>
              <portSpacing port="sink_document 1" spacing="0"/>
              <portSpacing port="sink_document 2" spacing="0"/>
            </process>
          </operator>
          <operator activated="true" class="text:create_document" compatibility="6.5.000" expanded="true" height="68" name="Create Document (2)" width="90" x="45" y="187">
            <parameter key="text" value="And this is the other text"/>
          </operator>
          <operator activated="true" class="text:process_documents" compatibility="6.5.000" expanded="true" height="103" name="Process Documents (2)" width="90" x="380" y="136">
            <process expanded="true">
              <operator activated="true" class="text:tokenize" compatibility="6.5.000" expanded="true" height="68" name="Tokenize (2)" width="90" x="112" y="34"/>
              <connect from_port="document" to_op="Tokenize (2)" to_port="document"/>
              <connect from_op="Tokenize (2)" from_port="document" to_port="document 1"/>
              <portSpacing port="source_document" spacing="0"/>
              <portSpacing port="sink_document 1" spacing="0"/>
              <portSpacing port="sink_document 2" spacing="0"/>
            </process>
          </operator>
          <operator activated="true" class="generate_id" compatibility="7.0.000" expanded="true" height="82" name="Generate ID" width="90" x="514" y="136">
            <parameter key="offset" value="500"/>
          </operator>
          <operator activated="true" class="generate_id" compatibility="7.0.000" expanded="true" height="82" name="Generate ID (2)" width="90" x="514" y="34">
            <parameter key="offset" value="1"/>
          </operator>
          <operator activated="true" class="cross_distances" compatibility="7.0.000" expanded="true" height="103" name="Cross Distances" width="90" x="648" y="85"/>
          <operator activated="true" class="join" compatibility="7.0.000" expanded="true" height="82" name="Join" width="90" x="782" y="34">
            <parameter key="remove_double_attributes" value="false"/>
            <parameter key="use_id_attribute_as_key" value="false"/>
            <list key="key_attributes">
              <parameter key="request" value="id"/>
            </list>
          </operator>
          <operator activated="true" class="join" compatibility="7.0.000" expanded="true" height="82" name="Join (2)" width="90" x="916" y="85">
            <parameter key="remove_double_attributes" value="false"/>
            <parameter key="use_id_attribute_as_key" value="false"/>
            <list key="key_attributes">
              <parameter key="document" value="id"/>
            </list>
          </operator>
          <connect from_op="Create Document" from_port="output" to_op="Process Documents" to_port="documents 1"/>
          <connect from_op="Process Documents" from_port="example set" to_op="Generate ID (2)" to_port="example set input"/>
          <connect from_op="Process Documents" from_port="word list" to_op="Process Documents (2)" to_port="word list"/>
          <connect from_op="Create Document (2)" from_port="output" to_op="Process Documents (2)" to_port="documents 1"/>
          <connect from_op="Process Documents (2)" from_port="example set" to_op="Generate ID" to_port="example set input"/>
          <connect from_op="Generate ID" from_port="example set output" to_op="Cross Distances" to_port="reference set"/>
          <connect from_op="Generate ID (2)" from_port="example set output" to_op="Cross Distances" to_port="request set"/>
          <connect from_op="Cross Distances" from_port="result set" to_op="Join" to_port="left"/>
          <connect from_op="Cross Distances" from_port="request set" to_op="Join" to_port="right"/>
          <connect from_op="Cross Distances" from_port="reference set" to_op="Join (2)" to_port="right"/>
          <connect from_op="Join" from_port="join" to_op="Join (2)" to_port="left"/>
          <connect from_op="Join (2)" from_port="join" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
    </process>
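
    The same idea sketched outside RapidMiner, as a rough Python analogy (scikit-learn stands in for Process Documents here; reusing the fitted vocabulary plays the role of the word list handed from the first Process Documents to the second, and the two texts are taken from the process above):

        from sklearn.feature_extraction.text import TfidfVectorizer

        reference_docs = ["This is one text"]
        request_docs = ["And this is the other text"]

        # Fit on the first set; the learned vocabulary (and IDF weights)
        # corresponds to the word list produced by Process Documents.
        vec = TfidfVectorizer()
        X_ref = vec.fit_transform(reference_docs)

        # Reuse that vocabulary on the second set: tokens outside the
        # word list are dropped, so both tables end up with identical
        # attributes in identical order, which is what Cross Distances needs.
        X_req = vec.transform(request_docs)
        print(vec.get_feature_names_out())  # the shared attribute set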

    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
  • mob Member Posts: 37 Contributor II
    Is there a reason why you didn't use cosine similarity and compute similarities?
  • MartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist
    No, I simply used the standard settings :D
    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
  • mob Member Posts: 37 Contributor II
    And is it defensible to compare "Data to Similarity" results with "Cross Distances" results, or am I comparing apples to oranges?
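
    One note on that comparison: a distance and a similarity live on different scales, but for L2-normalized vectors (TF-IDF document vectors usually are, assuming the same normalization on both sides) squared Euclidean distance and cosine similarity are tied by d^2 = 2 * (1 - cos), so the two measures rank document pairs the same way. A small Python check of that identity, on made-up vectors:

        import numpy as np

        rng = np.random.default_rng(0)
        a = rng.random(5)
        b = rng.random(5)

        # L2-normalize, as TF-IDF document vectors usually are.
        a /= np.linalg.norm(a)
        b /= np.linalg.norm(b)

        cos_sim = float(a @ b)
        dist = float(np.linalg.norm(a - b))

        # For unit vectors: ||a - b||^2 = 2 * (1 - cos(a, b)).
        print(np.isclose(dist ** 2, 2 * (1 - cos_sim)))  # True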