Problems with n-gram and POS Tags operators
Hi everyone,
I'm working on my MA dissertation and I'm having trouble getting results from some of the operators. I'm extracting terminology from .txt files using the Process Documents from Files Operator. Within it, I used the sub-processes of Tokenize (non-letters), Stopwords (English), Transform Cases (lower cases), Filter Tokens by length, filter tokens by POS Tags in English (here the expression: FW.*|JJ.*|JJR.*|JJS.*|NN.*|NNS.*|RB.*|RP.*|VB.*|VBD.*|VBP.*|VBZ.*|VBG.*|VBN.*) and Generate up to 5 n-grams. Once I execute the process, there are no compound words and I don't know where the POS Tags should appear, because I don't see any tags anywhere. Here, the code:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.3.015">
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="5.3.015" expanded="true" name="Process">
<process expanded="true">
<operator activated="true" class="text:process_document_from_file" compatibility="5.3.002" expanded="true" height="76" name="Process Documents from Files" width="90" x="112" y="30">
<list key="text_directories">
<parameter key="hack" value="C:\Users\Dya\Documents\Univ. Catolica\Semestre 4\Tesis\Tesis 2014\Capitulos\Metodologia\Corpus\Corpus RapidMiner\Txt\Hack"/>
</list>
<parameter key="use_file_extension_as_type" value="false"/>
<parameter key="keep_text" value="true"/>
<process expanded="true">
<operator activated="true" class="text:filter_stopwords_english" compatibility="5.3.002" expanded="true" height="60" name="Filter Stopwords (English)" width="90" x="179" y="30"/>
<operator activated="true" class="text:transform_cases" compatibility="5.3.002" expanded="true" height="60" name="Transform Cases" width="90" x="179" y="165"/>
<operator activated="true" class="text:filter_tokens_by_pos" compatibility="5.3.002" expanded="true" height="60" name="Filter Tokens (by POS Tags)" width="90" x="313" y="30">
<parameter key="expression" value="FW.*|JJ.*|JJR.*|JJS.*|NN.*|NNS.*|RB.*|RP.*|VB.*|VBD.*|VBP.*|VBZ.*|VBG.*|VBN.*"/>
</operator>
<operator activated="true" class="text:generate_n_grams_terms" compatibility="5.3.002" expanded="true" height="60" name="Generate n-Grams (Terms)" width="90" x="313" y="165"/>
<operator activated="true" class="text:filter_by_length" compatibility="5.3.002" expanded="true" height="60" name="Filter Tokens (by Length)" width="90" x="447" y="30">
<parameter key="min_chars" value="3"/>
<parameter key="max_chars" value="20"/>
</operator>
<operator activated="true" class="text:tokenize" compatibility="5.3.002" expanded="true" height="60" name="Tokenize" width="90" x="447" y="165"/>
<connect from_port="document" to_op="Filter Stopwords (English)" to_port="document"/>
<connect from_op="Filter Stopwords (English)" from_port="document" to_op="Transform Cases" to_port="document"/>
<connect from_op="Transform Cases" from_port="document" to_op="Filter Tokens (by POS Tags)" to_port="document"/>
<connect from_op="Filter Tokens (by POS Tags)" from_port="document" to_op="Generate n-Grams (Terms)" to_port="document"/>
<connect from_op="Generate n-Grams (Terms)" from_port="document" to_op="Filter Tokens (by Length)" to_port="document"/>
<connect from_op="Filter Tokens (by Length)" from_port="document" to_op="Tokenize" to_port="document"/>
<connect from_op="Tokenize" from_port="document" to_port="document 1"/>
<portSpacing port="source_document" spacing="0"/>
<portSpacing port="sink_document 1" spacing="0"/>
<portSpacing port="sink_document 2" spacing="0"/>
</process>
</operator>
<connect from_op="Process Documents from Files" from_port="example set" to_port="result 1"/>
<connect from_op="Process Documents from Files" from_port="word list" to_port="result 2"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
<portSpacing port="sink_result 3" spacing="0"/>
</process>
</operator>
</process>
Please, if anyone can help me figure out what the problem is, I would appreciate it! I'm running out of time and options. Thanks!
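For reference, the expression given to Filter Tokens (by POS Tags) is an ordinary regular expression matched against Penn Treebank tags. A minimal Python sketch of that matching logic (illustrative only; the tagged token list is made up, and this is not how RapidMiner runs internally):

```python
import re

# The POS filter expression from the process above. Note that NN.* already
# covers NNS, and VB.* already covers VBD/VBP/VBZ/VBG/VBN, so several of the
# longer alternatives are redundant but harmless.
POS_EXPRESSION = r"FW.*|JJ.*|JJR.*|JJS.*|NN.*|NNS.*|RB.*|RP.*|VB.*|VBD.*|VBP.*|VBZ.*|VBG.*|VBN.*"
pattern = re.compile(POS_EXPRESSION)

def keep_token(tag: str) -> bool:
    """Return True if a Penn Treebank tag matches the filter expression."""
    return pattern.fullmatch(tag) is not None

# Hypothetical (word, tag) pairs standing in for a tagged document.
tagged = [("the", "DT"), ("quick", "JJ"), ("hackers", "NNS"),
          ("ran", "VBD"), ("quickly", "RB"), ("of", "IN")]
kept = [word for word, tag in tagged if keep_token(tag)]
print(kept)  # ['quick', 'hackers', 'ran', 'quickly']
```

Determiners (DT) and prepositions (IN) fall through the filter; adjectives, nouns, verbs, and adverbs survive.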
Answers
Move the Tokenize operator to the beginning of the chain inside the Process Documents operator.
regards
Andrew
Thanks! I tried it that way and I now get compound words, though I had to split my texts across two separate processes because a single run apparently exceeded the memory capacity. For the same reason I can't export the results to an Excel sheet, so I guess I'll have to read and extract the desired results directly from the software.
However, I still don't get the POS tags. Any ideas?
Best regards,
Dya
If you want to create a flag for them you could loop each value:
FW.*
JJ.*
JJR.*
etc..
In the loop you can then use Filter Tokens (by POS Tags) where you only filter the current looped value (for example JJ.*) and then create a flag for that loop (Generate attributes: JJ = 1).
There may be an easier way, but that should work.
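The loop idea above can be sketched in plain Python: iterate over the tag patterns, keep only the tokens whose tag matches the current pattern, and stamp the survivors with a flag. The tagged tokens here are invented for illustration; in RapidMiner the tagging happens inside Filter Tokens (by POS Tags):

```python
import re

# Hypothetical pre-tagged tokens; in RapidMiner the tags come from the
# Filter Tokens (by POS Tags) operator's internal tagger.
tagged = [("security", "NN"), ("broken", "JJ"), ("exploit", "NN"),
          ("run", "VB")]

tag_patterns = ["FW.*", "JJ.*", "NN.*", "VB.*"]  # one loop iteration each

rows = []
for pat in tag_patterns:
    regex = re.compile(pat)
    # Keep only the tokens whose tag matches the current loop value, then
    # flag each kept word with that pattern (the "JJ = 1" idea above).
    for word, tag in tagged:
        if regex.fullmatch(tag):
            rows.append({"word": word, "flag": pat})

print(rows)
```

Each word ends up in the row set once per tag pattern it matches, carrying the pattern as its flag.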
Can you elaborate a little more on how to flag words with POS tags? In Filter Tokens (by POS Tags) I entered the expression (FW.*, NN.*, etc.). However, I don't know how to link those tags to my extracted words. I've been trying to fill in the attribute name and the corresponding function expression in the Generate Attributes operator, but it doesn't recognize the attributes. So I'm a little lost here, as this is the first time I've used this operator.
Thanks!
You will need to output a separate word list for each POS tag (with a label you create for each) and then append them to get your full data set.
Here is an example process:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="6.1.000">
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="6.1.000" expanded="true" name="Process">
<process expanded="true">
<operator activated="true" class="generate_data_user_specification" compatibility="6.1.000" expanded="true" height="60" name="Generate Data by User Specification" width="90" x="112" y="30">
<list key="attribute_values">
<parameter key="VB.*" value="1"/>
<parameter key="FW.*" value="1"/>
<parameter key="JJ.*" value="1"/>
<parameter key="NNS.*" value="1"/>
</list>
<list key="set_additional_roles"/>
</operator>
<operator activated="true" class="loop_attributes" compatibility="6.1.000" expanded="true" height="94" name="Loop Attributes" width="90" x="246" y="30">
<process expanded="true">
<operator activated="true" class="text:create_document" compatibility="6.1.000" expanded="true" height="60" name="Create Document" width="90" x="45" y="165">
<parameter key="text" value="this is test data for which to parse out part of speech tags after it has been created"/>
</operator>
<operator activated="true" class="text:filter_tokens_by_pos" compatibility="6.1.000" expanded="true" height="60" name="Filter Tokens (by POS Tags)" width="90" x="179" y="165">
<parameter key="expression" value="%{loop_attribute}"/>
</operator>
<operator activated="true" class="text:process_documents" compatibility="6.1.000" expanded="true" height="94" name="Process Documents" width="90" x="313" y="165">
<parameter key="create_word_vector" value="false"/>
<parameter key="keep_text" value="true"/>
<process expanded="true">
<operator activated="true" class="text:tokenize" compatibility="6.1.000" expanded="true" height="60" name="Tokenize" width="90" x="112" y="30"/>
<connect from_port="document" to_op="Tokenize" to_port="document"/>
<connect from_op="Tokenize" from_port="document" to_port="document 1"/>
<portSpacing port="source_document" spacing="0"/>
<portSpacing port="sink_document 1" spacing="0"/>
<portSpacing port="sink_document 2" spacing="0"/>
</process>
</operator>
<operator activated="true" class="text:wordlist_to_data" compatibility="6.1.000" expanded="true" height="76" name="WordList to Data" width="90" x="447" y="165"/>
<operator activated="true" class="generate_attributes" compatibility="6.1.000" expanded="true" height="76" name="Generate Attributes" width="90" x="581" y="165">
<list key="function_descriptions">
<parameter key="WordType" value=""%{loop_attribute}""/>
</list>
</operator>
<connect from_op="Create Document" from_port="output" to_op="Filter Tokens (by POS Tags)" to_port="document"/>
<connect from_op="Filter Tokens (by POS Tags)" from_port="document" to_op="Process Documents" to_port="documents 1"/>
<connect from_op="Process Documents" from_port="word list" to_op="WordList to Data" to_port="word list"/>
<connect from_op="WordList to Data" from_port="example set" to_op="Generate Attributes" to_port="example set input"/>
<connect from_op="Generate Attributes" from_port="example set output" to_port="result 1"/>
<portSpacing port="source_example set" spacing="0"/>
<portSpacing port="sink_example set" spacing="0"/>
<portSpacing port="sink_result 1" spacing="36"/>
<portSpacing port="sink_result 2" spacing="0"/>
</process>
</operator>
<operator activated="true" class="append" compatibility="6.1.000" expanded="true" height="76" name="Append" width="90" x="380" y="30"/>
<connect from_op="Generate Data by User Specification" from_port="output" to_op="Loop Attributes" to_port="example set"/>
<connect from_op="Loop Attributes" from_port="result 1" to_op="Append" to_port="example set 1"/>
<connect from_op="Append" from_port="merged set" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
</process>
</operator>
</process>
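In data-flow terms, the process above filters the document to one tag pattern per loop iteration, turns the resulting word list into a data set, labels it with the looped pattern via Generate Attributes, and appends the pieces. A rough stdlib-Python equivalent, using a made-up tagged token stream:

```python
import re
from collections import Counter

# Made-up tagged tokens standing in for the document's word stream.
tagged = [("test", "NN"), ("data", "NNS"), ("parse", "VB"),
          ("created", "VBN"), ("test", "NN")]

tag_patterns = ["NN.*", "VB.*"]  # the loop_attribute values

tables = []
for pat in tag_patterns:
    regex = re.compile(pat)
    # Word list for this tag pattern (word -> occurrence count), like
    # WordList to Data, plus a WordType column as in Generate Attributes.
    counts = Counter(w for w, t in tagged if regex.fullmatch(t))
    tables.append([{"word": w, "occurrences": n, "WordType": pat}
                   for w, n in counts.items()])

# Append: concatenate the per-tag tables into one data set.
merged = [row for table in tables for row in table]
print(merged)
```

The final `merged` list plays the role of the Append operator's output: one row per word, labeled with the tag pattern that produced it.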
Hello,
I pasted the submitted process into the application, but the Process Documents operator shows up red and disabled. Could someone send me the .rmp file for this process?
I would be very grateful to anyone who can help; I need it urgently. Thank you!
If you are serious about POS tagging I would recommend the Python NLTK package. It is much more robust than the built-in POS options, and a whole lot faster too (developers, take this as a hint ;-)).
The attached example is not exactly what you need, but there are plenty of examples on the internet showing how to work with NLTK.
The sample is something I use a lot myself to separate nouns from verbs, or to look for combined strings (noun or verb phrases, for instance), and it's pretty modular.
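For the NLTK route: `nltk.pos_tag(nltk.word_tokenize(text))` returns (word, tag) pairs using the same Penn Treebank tags discussed above, assuming NLTK is installed and its tokenizer and tagger models have been downloaded. Separating nouns from verbs is then plain filtering; a stdlib sketch over a hand-tagged sample:

```python
# In practice the pairs below would come from NLTK, e.g.:
#   import nltk
#   tagged = nltk.pos_tag(nltk.word_tokenize(text))
# (requires `pip install nltk` plus the tokenizer/tagger model downloads).
# Here they are hand-written so the sketch runs without NLTK.
tagged = [("hackers", "NNS"), ("exploit", "VBP"), ("weak", "JJ"),
          ("passwords", "NNS"), ("constantly", "RB")]

# All noun tags start with NN, all verb tags with VB.
nouns = [w for w, t in tagged if t.startswith("NN")]
verbs = [w for w, t in tagged if t.startswith("VB")]
print(nouns, verbs)  # ['hackers', 'passwords'] ['exploit']
```

The same prefix test extends to adjectives (`JJ`) and adverbs (`RB`) if you need those groups as well.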
Hello, thank you very much.
I ran your code and got that output. Now, if I want to separate out and display the attributes, how should I write that? I couldn't get it to run. Is that possible as well?
I also want to experiment with extracting and selecting POS tags and doing sentiment analysis with WordNet. After extracting the nouns, verbs, adverbs, and adjectives, can I connect them to the WordNet operator? Is that possible in Python code?
Sorry, I am a beginner. Thank you, and have a nice day.