
Weight TF-IDF

evansh Member Posts: 6 Contributor II
edited April 2020 in Help
Hey all,

I'm using the Process Documents operator to output a tokenized word vector for each document, with the TF-IDF calculated. I'd also like to weight the TF-IDF by the number of tokens in each document. I have the number of tokens (Num_Tokens) calculated for each document, but I can't figure out a way to divide TF-IDF by Num_Tokens for each term in each document. Any tips? Thanks!

Answers

  • evansh Member Posts: 6 Contributor II
    Bump. Could really use a hand.
  • JEdward RapidMiner Certified Analyst, RapidMiner Certified Expert, Member Posts: 578 Unicorn
    I'm a little tired this morning, but if I'm reading correctly you have:

    - TF-IDF calculated for each attribute in your dataset, with each example representing one document.
    - An additional attribute (Num_Tokens) showing the number of tokens in each document.

    And you want to calculate TF-IDF / Num_Tokens for each example and each attribute?

    If that's the right interpretation, I'd recommend using Generate Attributes inside a Loop Attributes operator.
    The loop visits each of your TF-IDF attributes in turn, and inside it Generate Attributes can use the loop macro to divide the current attribute by your Num_Tokens value.
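
    Outside RapidMiner, the equivalent computation looks like this (a minimal pandas sketch; the term columns and values are invented, and only the Num_Tokens name comes from the thread):

        import pandas as pd

        # Hypothetical example set: one row per document,
        # TF-IDF attributes plus a Num_Tokens column.
        df = pd.DataFrame({
            "term_a":     [0.12, 0.00, 0.33],
            "term_b":     [0.05, 0.41, 0.00],
            "Num_Tokens": [120,  87,   300],
        })

        tfidf_cols = [c for c in df.columns if c != "Num_Tokens"]

        # Divide every TF-IDF attribute by that document's token count
        # in one vectorized step; this produces the same result as the
        # Loop Attributes / Generate Attributes combination.
        df[tfidf_cols] = df[tfidf_cols].div(df["Num_Tokens"], axis=0)
        print(df)

    Note that the single vectorized division replaces the per-attribute loop entirely.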

    Hope that helps
  • evansh Member Posts: 6 Contributor II
    This is perfect; thank you so much for the help!
  • evansh Member Posts: 6 Contributor II
    Had a follow-up question on the same process, so I figured I'd open this back up. The above solution does exactly what I need it to do. However, my example set has around 160 million data points, so the Loop Attributes operator takes almost a day to run. I get that my data set is large, but I wouldn't think that performing 160 million divisions should take nearly 24 hours, am I missing something? Is there any way to make this run more efficiently?
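
    For scale, the divisions themselves should be cheap. A rough NumPy sketch (hypothetical shapes chosen to total roughly 160 million values; it needs a few GB of RAM):

        import numpy as np
        import time

        # e.g. 160,000 documents x 1,000 TF-IDF attributes ~= 160 million cells
        tfidf = np.random.rand(160_000, 1_000)
        num_tokens = np.random.randint(50, 500, size=(160_000, 1))

        start = time.perf_counter()
        weighted = tfidf / num_tokens  # one vectorized, element-wise divide
        print(f"{time.perf_counter() - start:.3f} s")  # a fraction of a second on typical hardware

    That suggests the day-long runtime is going into per-attribute loop overhead rather than the arithmetic itself.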