
Investigate customer feedback

Nicson Member Posts: 18 Learner III
edited December 2018 in Help

Hello everyone,

 

I want to start a new project and need your help.


My knowledge of RapidMiner is rather basic, so I am not an expert.

 

For my project I have the following assumptions:

I am a provider of a product/service and receive regular customer feedback in text form. Customers report only negative experiences, so sentiment analysis is not required.

 

In each piece of feedback, the customer reports on one or more issues. All customer feedback should now be examined for the problems mentioned. Besides already known problems, new (unknown) problem patterns should also be identified.

The main difficulty, as I see it, is that each customer may describe the same problem differently. I am sure there will be other implementation challenges as well, especially for me.

 

Do you think this is possible? If so, how difficult would it be, and what is the best way to start?

 


I appreciate your answer/support.


Answers

  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    Hi @Nicson - I moved this thread to "Getting Started" as it seemed like a more appropriate place for your question. :)

     

    So to me this is a classic text mining problem - you're trying to cluster customer feedback (natural text) into topics / categories. There is the traditional way using tokenization, n-grams, and so forth, and then there are the nifty new tools that the ever-resourceful @mschmitz has developed as part of his Operator Toolbox. I would begin by learning the basics of text mining in RapidMiner (maybe start here); then I'd move on to the tools in Operator Toolbox.
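(As a rough, non-RapidMiner illustration of that clustering idea, here is a minimal Python sketch using scikit-learn's TfidfVectorizer and KMeans. The feedback sentences and the choice of two clusters are invented for the example.)

```python
# Minimal sketch: cluster free-text feedback into topics via TF-IDF + k-means.
# Sample sentences and cluster count are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

feedback = [
    "broken screen, the display flickers",
    "screen broken and display flickering",
    "slow delivery, shipping took weeks",
    "delivery was slow and shipping late",
]

# Tokenization + stopword filtering + weighting in one step
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(feedback)

# Two assumed topics: hardware defects vs. delivery problems
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(X)
labels = kmeans.labels_  # complaints about the same issue share a label
```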


    Scott

     

  • Nicson Member Posts: 18 Learner III

    Thank you @sgenzer for moving this topic.

     

    I looked at the text processing and now I have the following idea:

     

    Input Feedback >[text processing: transform cases > tokenize > filter stopwords > n-grams (bigrams) > filter n-grams]

     

    Now I get a list of the n-grams contained in the document (customer feedback). I am thinking of running this process on all the feedback. All collected n-grams would then be checked for similarity and grouped where appropriate. I would then manually name these groups as my own categories. Finally, these manually revised files would be used to train a machine learning model so that the categories can be assigned automatically.

     

    Would this be a possible approach or is it going in the wrong direction?

     

    For me, it would be important to create a working basic model first and to optimize it later if necessary.

     

     

    Thank you

  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    yes @Nicson this is exactly the approach I would take. You are very welcome to post your XML processes here in this thread as you go along and we can help when you get stuck.

     

    Good luck!

     

    Scott

     

  • Nicson Member Posts: 18 Learner III

     

    <?xml version="1.0" encoding="UTF-8"?>
    <process version="8.1.000">
    <context>
    <input/>
    <output/>
    <macros/>
    </context>
    <operator name="Process" expanded="true" compatibility="8.1.000" class="process" activated="true">
    <parameter value="init" key="logverbosity"/>
    <parameter value="2001" key="random_seed"/>
    <parameter value="never" key="send_mail"/>
    <parameter value="" key="notification_email"/>
    <parameter value="30" key="process_duration_for_mail"/>
    <parameter value="SYSTEM" key="encoding"/>
    <process expanded="true">
    <operator name="Process Documents from Files" expanded="true" compatibility="8.1.000" class="text:process_document_from_file" activated="false" y="34" x="45" width="90" height="82">
    <list key="text_directories">
    <parameter value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Broken" key="Broken"/>
    <parameter value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Content" key="Content"/>
    </list>
    <parameter value="*.*" key="file_pattern"/>
    <parameter value="true" key="extract_text_only"/>
    <parameter value="true" key="use_file_extension_as_type"/>
    <parameter value="txt" key="content_type"/>
    <parameter value="SYSTEM" key="encoding"/>
    <parameter value="true" key="create_word_vector"/>
    <parameter value="TF-IDF" key="vector_creation"/>
    <parameter value="true" key="add_meta_information"/>
    <parameter value="true" key="keep_text"/>
    <parameter value="none" key="prune_method"/>
    <parameter value="3.0" key="prune_below_percent"/>
    <parameter value="30.0" key="prune_above_percent"/>
    <parameter value="0.05" key="prune_below_rank"/>
    <parameter value="0.95" key="prune_above_rank"/>
    <parameter value="double_sparse_array" key="datamanagement"/>
    <parameter value="auto" key="data_management"/>
    <process expanded="true">
    <operator name="Tokenize (3)" expanded="true" compatibility="8.1.000" class="text:tokenize" activated="true" y="34" x="45" width="90" height="68">
    <parameter value="non letters" key="mode"/>
    <parameter value=".:" key="characters"/>
    <parameter value="English" key="language"/>
    <parameter value="3" key="max_token_length"/>
    </operator>
    <operator name="Transform Cases (3)" expanded="true" compatibility="8.1.000" class="text:transform_cases" activated="false" y="136" x="45" width="90" height="68">
    <parameter value="lower case" key="transform_to"/>
    </operator>
    <operator name="Filter Stopwords (2)" expanded="true" compatibility="8.1.000" class="text:filter_stopwords_german" activated="false" y="238" x="45" width="90" height="68">
    <parameter value="Standard" key="stop_word_list"/>
    </operator>
    <operator name="Stem (2)" expanded="true" compatibility="8.1.000" class="text:stem_german" activated="false" y="340" x="45" width="90" height="68"/>
    <operator name="Generate n-Grams (2)" expanded="true" compatibility="8.1.000" class="text:generate_n_grams_terms" activated="false" y="442" x="45" width="90" height="68">
    <parameter value="1" key="max_length"/>
    </operator>
    <operator name="Filter Stopwords (3)" expanded="true" compatibility="8.1.000" class="text:filter_stopwords_english" activated="false" y="238" x="179" width="90" height="68"/>
    <connect to_port="document" to_op="Tokenize (3)" from_port="document"/>
    <connect to_port="document 1" from_port="document" from_op="Tokenize (3)"/>
    <connect to_port="document" to_op="Filter Stopwords (2)" from_port="document" from_op="Transform Cases (3)"/>
    <connect to_port="document" to_op="Stem (2)" from_port="document" from_op="Filter Stopwords (2)"/>
    <connect to_port="document" to_op="Generate n-Grams (2)" from_port="document" from_op="Stem (2)"/>
    <portSpacing spacing="0" port="source_document"/>
    <portSpacing spacing="0" port="sink_document 1"/>
    <portSpacing spacing="0" port="sink_document 2"/>
    </process>
    </operator>
    <operator name="Cross Validation" expanded="true" compatibility="8.1.000" class="concurrency:cross_validation" activated="false" y="34" x="246" width="90" height="145">
    <parameter value="false" key="split_on_batch_attribute"/>
    <parameter value="false" key="leave_one_out"/>
    <parameter value="10" key="number_of_folds"/>
    <parameter value="stratified sampling" key="sampling_type"/>
    <parameter value="false" key="use_local_random_seed"/>
    <parameter value="1992" key="local_random_seed"/>
    <parameter value="true" key="enable_parallel_execution"/>
    <process expanded="true">
    <operator name="SVM" expanded="true" compatibility="8.1.000" class="support_vector_machine" activated="true" y="34" x="112" width="90" height="124">
    <parameter value="dot" key="kernel_type"/>
    <parameter value="1.0" key="kernel_gamma"/>
    <parameter value="1.0" key="kernel_sigma1"/>
    <parameter value="0.0" key="kernel_sigma2"/>
    <parameter value="2.0" key="kernel_sigma3"/>
    <parameter value="1.0" key="kernel_shift"/>
    <parameter value="2.0" key="kernel_degree"/>
    <parameter value="1.0" key="kernel_a"/>
    <parameter value="0.0" key="kernel_b"/>
    <parameter value="200" key="kernel_cache"/>
    <parameter value="0.0" key="C"/>
    <parameter value="0.001" key="convergence_epsilon"/>
    <parameter value="100000" key="max_iterations"/>
    <parameter value="true" key="scale"/>
    <parameter value="true" key="calculate_weights"/>
    <parameter value="true" key="return_optimization_performance"/>
    <parameter value="1.0" key="L_pos"/>
    <parameter value="1.0" key="L_neg"/>
    <parameter value="0.0" key="epsilon"/>
    <parameter value="0.0" key="epsilon_plus"/>
    <parameter value="0.0" key="epsilon_minus"/>
    <parameter value="false" key="balance_cost"/>
    <parameter value="false" key="quadratic_loss_pos"/>
    <parameter value="false" key="quadratic_loss_neg"/>
    <parameter value="false" key="estimate_performance"/>
    </operator>
    <connect to_port="training set" to_op="SVM" from_port="training set"/>
    <connect to_port="model" from_port="model" from_op="SVM"/>
    <portSpacing spacing="0" port="source_training set"/>
    <portSpacing spacing="0" port="sink_model"/>
    <portSpacing spacing="0" port="sink_through 1"/>
    </process>
    <process expanded="true">
    <operator name="Apply Model" expanded="true" compatibility="8.1.000" class="apply_model" activated="true" y="34" x="45" width="90" height="82">
    <list key="application_parameters"/>
    <parameter value="false" key="create_view"/>
    </operator>
    <operator name="Performance (2)" expanded="true" compatibility="8.1.000" class="performance" activated="true" y="34" x="179" width="90" height="82">
    <parameter value="true" key="use_example_weights"/>
    </operator>
    <connect to_port="model" to_op="Apply Model" from_port="model"/>
    <connect to_port="unlabelled data" to_op="Apply Model" from_port="test set"/>
    <connect to_port="labelled data" to_op="Performance (2)" from_port="labelled data" from_op="Apply Model"/>
    <connect to_port="performance 1" from_port="performance" from_op="Performance (2)"/>
    <portSpacing spacing="0" port="source_model"/>
    <portSpacing spacing="0" port="source_test set"/>
    <portSpacing spacing="0" port="source_through 1"/>
    <portSpacing spacing="0" port="sink_test set results"/>
    <portSpacing spacing="0" port="sink_performance 1"/>
    <portSpacing spacing="0" port="sink_performance 2"/>
    </process>
    <description width="126" colored="true" color="green" align="center">Learning Model<br/>(From Data-Files)<br></description>
    </operator>
    <operator name="Store (2)" expanded="true" compatibility="8.1.000" class="store" activated="false" y="34" x="380" width="90" height="68">
    <parameter value="Model" key="repository_entry"/>
    </operator>
    <operator name="Store" expanded="true" compatibility="8.1.000" class="store" activated="false" y="136" x="112" width="90" height="68">
    <parameter value="Wordlist" key="repository_entry"/>
    </operator>
    <operator name="Retrieve (2)" expanded="true" compatibility="8.1.000" class="retrieve" activated="true" y="595" x="380" width="90" height="68">
    <parameter value="Model" key="repository_entry"/>
    </operator>
    <operator name="Read Excel" expanded="true" compatibility="8.1.000" class="read_excel" activated="false" y="289" x="45" width="90" height="68">
    <parameter value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Classification\Classes.xlsx" key="excel_file"/>
    <parameter value="sheet number" key="sheet_selection"/>
    <parameter value="1" key="sheet_number"/>
    <parameter value="A1:B112" key="imported_cell_range"/>
    <parameter value="SYSTEM" key="encoding"/>
    <parameter value="false" key="first_row_as_names"/>
    <list key="annotations">
    <parameter value="Name" key="0"/>
    </list>
    <parameter value="" key="date_format"/>
    <parameter value="SYSTEM" key="time_zone"/>
    <parameter value="English (United States)" key="locale"/>
    <parameter value="false" key="read_all_values_as_polynominal"/>
    <list key="data_set_meta_data_information">
    <parameter value="Text.true.text.attribute" key="0"/>
    <parameter value="Class.true.text.label" key="1"/>
    </list>
    <parameter value="true" key="read_not_matching_values_as_missings"/>
    <parameter value="double_array" key="datamanagement"/>
    <parameter value="auto" key="data_management"/>
    </operator>
    <operator name="Process Documents from Data" expanded="true" compatibility="8.1.000" class="text:process_document_from_data" activated="false" y="289" x="179" width="90" height="82">
    <parameter value="true" key="create_word_vector"/>
    <parameter value="TF-IDF" key="vector_creation"/>
    <parameter value="true" key="add_meta_information"/>
    <parameter value="false" key="keep_text"/>
    <parameter value="none" key="prune_method"/>
    <parameter value="3.0" key="prune_below_percent"/>
    <parameter value="30.0" key="prune_above_percent"/>
    <parameter value="0.05" key="prune_below_rank"/>
    <parameter value="0.95" key="prune_above_rank"/>
    <parameter value="double_sparse_array" key="datamanagement"/>
    <parameter value="auto" key="data_management"/>
    <parameter value="false" key="select_attributes_and_weights"/>
    <list key="specify_weights">
    <parameter value="1.0" key="jkjk"/>
    </list>
    <process expanded="true">
    <operator name="Tokenize (4)" expanded="true" compatibility="8.1.000" class="text:tokenize" activated="true" y="34" x="45" width="90" height="68">
    <parameter value="non letters" key="mode"/>
    <parameter value=".:" key="characters"/>
    <parameter value="English" key="language"/>
    <parameter value="3" key="max_token_length"/>
    </operator>
    <operator name="Transform Cases (4)" expanded="true" compatibility="8.1.000" class="text:transform_cases" activated="true" y="136" x="45" width="90" height="68">
    <parameter value="lower case" key="transform_to"/>
    </operator>
    <operator name="Filter Stopwords (5)" expanded="true" compatibility="8.1.000" class="text:filter_stopwords_german" activated="true" y="238" x="45" width="90" height="68">
    <parameter value="Standard" key="stop_word_list"/>
    </operator>
    <operator name="Stem (Porter)" expanded="true" compatibility="8.1.000" class="text:stem_porter" activated="true" y="340" x="45" width="90" height="68"/>
    <operator name="Generate n-Grams (4)" expanded="true" compatibility="8.1.000" class="text:generate_n_grams_terms" activated="true" y="442" x="45" width="90" height="68">
    <parameter value="3" key="max_length"/>
    </operator>
    <operator name="Filter Tokens (by Length)" expanded="true" compatibility="8.1.000" class="text:filter_by_length" activated="true" y="442" x="246" width="90" height="68">
    <parameter value="2" key="min_chars"/>
    <parameter value="999" key="max_chars"/>
    </operator>
    <connect to_port="document" to_op="Tokenize (4)" from_port="document"/>
    <connect to_port="document" to_op="Transform Cases (4)" from_port="document" from_op="Tokenize (4)"/>
    <connect to_port="document" to_op="Filter Stopwords (5)" from_port="document" from_op="Transform Cases (4)"/>
    <connect to_port="document" to_op="Stem (Porter)" from_port="document" from_op="Filter Stopwords (5)"/>
    <connect to_port="document" to_op="Generate n-Grams (4)" from_port="document" from_op="Stem (Porter)"/>
    <connect to_port="document" to_op="Filter Tokens (by Length)" from_port="document" from_op="Generate n-Grams (4)"/>
    <connect to_port="document 1" from_port="document" from_op="Filter Tokens (by Length)"/>
    <portSpacing spacing="0" port="source_document"/>
    <portSpacing spacing="0" port="sink_document 1"/>
    <portSpacing spacing="0" port="sink_document 2"/>
    </process>
    </operator>
    <operator name="Store (3)" expanded="true" compatibility="8.1.000" class="store" activated="false" y="391" x="246" width="90" height="68">
    <parameter value="Wordlist" key="repository_entry"/>
    </operator>
    <operator name="Cross Validation (2)" expanded="true" compatibility="8.1.000" class="concurrency:cross_validation" activated="false" y="289" x="380" width="90" height="145">
    <parameter value="false" key="split_on_batch_attribute"/>
    <parameter value="false" key="leave_one_out"/>
    <parameter value="10" key="number_of_folds"/>
    <parameter value="stratified sampling" key="sampling_type"/>
    <parameter value="false" key="use_local_random_seed"/>
    <parameter value="1992" key="local_random_seed"/>
    <parameter value="true" key="enable_parallel_execution"/>
    <process expanded="true">
    <operator name="Polynominal by Binominal Classification" expanded="true" compatibility="8.1.000" class="polynomial_by_binomial_classification" activated="true" y="34" x="112" width="90" height="82">
    <parameter value="1 against all" key="classification_strategies"/>
    <parameter value="2.0" key="random_code_multiplicator"/>
    <parameter value="false" key="use_local_random_seed"/>
    <parameter value="1992" key="local_random_seed"/>
    <process expanded="true">
    <operator name="SVM (2)" expanded="true" compatibility="8.1.000" class="support_vector_machine" activated="false" y="187" x="313" width="90" height="124">
    <parameter value="dot" key="kernel_type"/>
    <parameter value="1.0" key="kernel_gamma"/>
    <parameter value="1.0" key="kernel_sigma1"/>
    <parameter value="0.0" key="kernel_sigma2"/>
    <parameter value="2.0" key="kernel_sigma3"/>
    <parameter value="1.0" key="kernel_shift"/>
    <parameter value="2.0" key="kernel_degree"/>
    <parameter value="1.0" key="kernel_a"/>
    <parameter value="0.0" key="kernel_b"/>
    <parameter value="200" key="kernel_cache"/>
    <parameter value="0.0" key="C"/>
    <parameter value="0.001" key="convergence_epsilon"/>
    <parameter value="100000" key="max_iterations"/>
    <parameter value="true" key="scale"/>
    <parameter value="true" key="calculate_weights"/>
    <parameter value="true" key="return_optimization_performance"/>
    <parameter value="1.0" key="L_pos"/>
    <parameter value="1.0" key="L_neg"/>
    <parameter value="0.0" key="epsilon"/>
    <parameter value="0.0" key="epsilon_plus"/>
    <parameter value="0.0" key="epsilon_minus"/>
    <parameter value="false" key="balance_cost"/>
    <parameter value="false" key="quadratic_loss_pos"/>
    <parameter value="false" key="quadratic_loss_neg"/>
    <parameter value="false" key="estimate_performance"/>
    </operator>
    <operator name="Naive Bayes" expanded="true" compatibility="8.1.000" class="naive_bayes" activated="true" y="85" x="313" width="90" height="82">
    <parameter value="true" key="laplace_correction"/>
    </operator>
    <connect to_port="training set" to_op="Naive Bayes" from_port="training set"/>
    <connect to_port="model" from_port="model" from_op="Naive Bayes"/>
    <portSpacing spacing="0" port="source_training set"/>
    <portSpacing spacing="0" port="sink_model"/>
    </process>
    </operator>
    <connect to_port="training set" to_op="Polynominal by Binominal Classification" from_port="training set"/>
    <connect to_port="model" from_port="model" from_op="Polynominal by Binominal Classification"/>
    <portSpacing spacing="0" port="source_training set"/>
    <portSpacing spacing="0" port="sink_model"/>
    <portSpacing spacing="0" port="sink_through 1"/>
    </process>
    <process expanded="true">
    <operator name="Apply Model (3)" expanded="true" compatibility="8.1.000" class="apply_model" activated="true" y="34" x="45" width="90" height="82">
    <list key="application_parameters"/>
    <parameter value="false" key="create_view"/>
    </operator>
    <operator name="Performance (3)" expanded="true" compatibility="8.1.000" class="performance" activated="true" y="34" x="179" width="90" height="82">
    <parameter value="true" key="use_example_weights"/>
    </operator>
    <connect to_port="model" to_op="Apply Model (3)" from_port="model"/>
    <connect to_port="unlabelled data" to_op="Apply Model (3)" from_port="test set"/>
    <connect to_port="labelled data" to_op="Performance (3)" from_port="labelled data" from_op="Apply Model (3)"/>
    <connect to_port="performance 1" from_port="performance" from_op="Performance (3)"/>
    <portSpacing spacing="0" port="source_model"/>
    <portSpacing spacing="0" port="source_test set"/>
    <portSpacing spacing="0" port="source_through 1"/>
    <portSpacing spacing="0" port="sink_test set results"/>
    <portSpacing spacing="0" port="sink_performance 1"/>
    <portSpacing spacing="0" port="sink_performance 2"/>
    </process>
    <description width="126" colored="true" color="green" align="center">Learning Model<br> (From Excel)</description>
    </operator>
    <operator name="Store (4)" expanded="true" compatibility="8.1.000" class="store" activated="false" y="238" x="514" width="90" height="68">
    <parameter value="Model" key="repository_entry"/>
    </operator>
    <operator name="Read Excel (2)" expanded="true" compatibility="8.1.000" class="read_excel" activated="true" y="748" x="112" width="90" height="68">
    <parameter value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Source\180225.xlsx" key="excel_file"/>
    <parameter value="sheet number" key="sheet_selection"/>
    <parameter value="1" key="sheet_number"/>
    <parameter value="A1:B200" key="imported_cell_range"/>
    <parameter value="SYSTEM" key="encoding"/>
    <parameter value="false" key="first_row_as_names"/>
    <list key="annotations">
    <parameter value="Name" key="0"/>
    </list>
    <parameter value="" key="date_format"/>
    <parameter value="SYSTEM" key="time_zone"/>
    <parameter value="English (United States)" key="locale"/>
    <parameter value="false" key="read_all_values_as_polynominal"/>
    <list key="data_set_meta_data_information">
    <parameter value="Verbatim.true.text.attribute" key="0"/>
    <parameter value="Class.true.attribute_value.label" key="1"/>
    </list>
    <parameter value="true" key="read_not_matching_values_as_missings"/>
    <parameter value="double_array" key="datamanagement"/>
    <parameter value="auto" key="data_management"/>
    </operator>
    <operator name="Retrieve (3)" expanded="true" compatibility="8.1.000" class="retrieve" activated="true" y="646" x="45" width="90" height="68">
    <parameter value="Wordlist" key="repository_entry"/>
    </operator>
    <operator name="Process Documents from Data (2)" expanded="true" compatibility="8.1.000" class="text:process_document_from_data" activated="true" y="697" x="246" width="90" height="82">
    <parameter value="true" key="create_word_vector"/>
    <parameter value="Term Frequency" key="vector_creation"/>
    <parameter value="true" key="add_meta_information"/>
    <parameter value="true" key="keep_text"/>
    <parameter value="percentual" key="prune_method"/>
    <parameter value="5.0" key="prune_below_percent"/>
    <parameter value="95.0" key="prune_above_percent"/>
    <parameter value="0.05" key="prune_below_rank"/>
    <parameter value="0.95" key="prune_above_rank"/>
    <parameter value="double_sparse_array" key="datamanagement"/>
    <parameter value="auto" key="data_management"/>
    <parameter value="false" key="select_attributes_and_weights"/>
    <list key="specify_weights">
    <parameter value="1.0" key="jkjk"/>
    </list>
    <process expanded="true">
    <operator name="Tokenize (5)" expanded="true" compatibility="8.1.000" class="text:tokenize" activated="true" y="34" x="45" width="90" height="68">
    <parameter value="non letters" key="mode"/>
    <parameter value=".:" key="characters"/>
    <parameter value="English" key="language"/>
    <parameter value="3" key="max_token_length"/>
    </operator>
    <operator name="Transform Cases (5)" expanded="true" compatibility="8.1.000" class="text:transform_cases" activated="true" y="136" x="45" width="90" height="68">
    <parameter value="lower case" key="transform_to"/>
    </operator>
    <operator name="Filter Stopwords (6)" expanded="true" compatibility="8.1.000" class="text:filter_stopwords_german" activated="true" y="238" x="45" width="90" height="68">
    <parameter value="Standard" key="stop_word_list"/>
    </operator>
    <operator name="Stem (4)" expanded="true" compatibility="8.1.000" class="text:stem_porter" activated="true" y="340" x="45" width="90" height="68"/>
    <operator name="Generate n-Grams (5)" expanded="true" compatibility="8.1.000" class="text:generate_n_grams_terms" activated="true" y="442" x="45" width="90" height="68">
    <parameter value="3" key="max_length"/>
    </operator>
    <operator name="Filter Tokens (2)" expanded="true" compatibility="8.1.000" class="text:filter_by_length" activated="true" y="442" x="246" width="90" height="68">
    <parameter value="2" key="min_chars"/>
    <parameter value="999" key="max_chars"/>
    </operator>
    <connect to_port="document" to_op="Tokenize (5)" from_port="document"/>
    <connect to_port="document" to_op="Transform Cases (5)" from_port="document" from_op="Tokenize (5)"/>
    <connect to_port="document" to_op="Filter Stopwords (6)" from_port="document" from_op="Transform Cases (5)"/>
    <connect to_port="document" to_op="Stem (4)" from_port="document" from_op="Filter Stopwords (6)"/>
    <connect to_port="document" to_op="Generate n-Grams (5)" from_port="document" from_op="Stem (4)"/>
    <connect to_port="document" to_op="Filter Tokens (2)" from_port="document" from_op="Generate n-Grams (5)"/>
    <connect to_port="document 1" from_port="document" from_op="Filter Tokens (2)"/>
    <portSpacing spacing="0" port="source_document"/>
    <portSpacing spacing="0" port="sink_document 1"/>
    <portSpacing spacing="0" port="sink_document 2"/>
    </process>
    </operator>
    <operator name="Apply Model (2)" expanded="true" compatibility="8.1.000" class="apply_model" activated="true" y="646" x="514" width="90" height="82">
    <list key="application_parameters"/>
    <parameter value="false" key="create_view"/>
    <description width="126" colored="true" color="green" align="center">Apply Model<br>(Real Data)</description>
    </operator>
    <connect to_port="example set" to_op="Cross Validation" from_port="example set" from_op="Process Documents from Files"/>
    <connect to_port="input" to_op="Store" from_port="word list" from_op="Process Documents from Files"/>
    <connect to_port="input" to_op="Store (2)" from_port="model" from_op="Cross Validation"/>
    <connect to_port="model" to_op="Apply Model (2)" from_port="output" from_op="Retrieve (2)"/>
    <connect to_port="example set" to_op="Process Documents from Data" from_port="output" from_op="Read Excel"/>
    <connect to_port="example set" to_op="Cross Validation (2)" from_port="example set" from_op="Process Documents from Data"/>
    <connect to_port="input" to_op="Store (3)" from_port="word list" from_op="Process Documents from Data"/>
    <connect to_port="input" to_op="Store (4)" from_port="model" from_op="Cross Validation (2)"/>
    <connect to_port="example set" to_op="Process Documents from Data (2)" from_port="output" from_op="Read Excel (2)"/>
    <connect to_port="word list" to_op="Process Documents from Data (2)" from_port="output" from_op="Retrieve (3)"/>
    <connect to_port="unlabelled data" to_op="Apply Model (2)" from_port="example set" from_op="Process Documents from Data (2)"/>
    <connect to_port="result 1" from_port="labelled data" from_op="Apply Model (2)"/>
    <portSpacing spacing="0" port="source_input 1"/>
    <portSpacing spacing="0" port="sink_result 1"/>
    <portSpacing spacing="0" port="sink_result 2"/>
    </process>
    </operator>
    </process>

     

    This is what I have come up with after looking at some tutorials. @sgenzer

     

    As data input I use a two-column Excel file, structured as follows:

    • Column A: Customer feedback
    • Column B: Manual classification (label)

    I then send this data through the "Training Model" and save the wordlist and the model.
    In the last process I take real (unclassified) data, preprocess it according to the same principle, and have the model assign classes to it.
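(The train / store / retrieve / apply principle described above can be mirrored in plain Python with scikit-learn and joblib, as a minimal sketch; the texts, labels, and file name are invented.)

```python
# Minimal sketch of train -> store -> retrieve -> apply.
# The pipeline bundles the word list (vectorizer vocabulary) and the model,
# so one stored object stands in for Store Wordlist + Store Model.
import os
import tempfile

import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["screen is broken", "display broken again",
         "delivery was slow", "slow shipping as usual"]
labels = ["defect", "defect", "delivery", "delivery"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(texts, labels)

path = os.path.join(tempfile.gettempdir(), "feedback_model.joblib")
joblib.dump(model, path)      # Store
restored = joblib.load(path)  # Retrieve

# Apply Model on new, unclassified feedback
prediction = restored.predict(["my screen broke"])[0]
```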

    What do you think of this solution?

     

    Further questions would be:

    • How can I assign several classifications to a single piece of customer feedback, where applicable? (Maybe via the confidence level?)
    • How can I increase the overall classification accuracy? (What should good training data look like?)
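(Regarding the first question: one common approach, sketched here in Python rather than RapidMiner, is to keep every class whose predicted confidence exceeds a threshold. The data and the threshold value are invented.)

```python
# Multi-class assignment via confidence threshold on predict_proba.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["screen broken", "broken display", "slow delivery", "late shipping"]
labels = ["defect", "defect", "delivery", "delivery"]

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

# A feedback that mentions both problem areas
proba = clf.predict_proba(vec.transform(["broken screen and slow delivery"]))[0]

THRESHOLD = 0.3  # assumed cut-off; tune on validation data
assigned = [c for c, p in zip(clf.classes_, proba) if p >= THRESHOLD]
```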


    I am very grateful for further feedback.

  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @Nicson - there's something very weird with your XML. Can you please re-post?

     

    Scott

     

  • Nicson Member Posts: 18 Learner III

    I hope it works this time. @sgenzer

     

    <?xml version="1.0" encoding="UTF-8"?><process version="8.1.000">
    <context>
    <input/>
    <output/>
    <macros/>
    </context>
    <operator activated="true" class="process" compatibility="8.1.000" expanded="true" name="Process">
    <parameter key="logverbosity" value="init"/>
    <parameter key="random_seed" value="2001"/>
    <parameter key="send_mail" value="never"/>
    <parameter key="notification_email" value=""/>
    <parameter key="process_duration_for_mail" value="30"/>
    <parameter key="encoding" value="SYSTEM"/>
    <process expanded="true">
    <operator activated="false" class="text:process_document_from_file" compatibility="8.1.000" expanded="true" height="82" name="Process Documents from Files" width="90" x="45" y="34">
    <list key="text_directories">
    <parameter key="Broken" value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Broken"/>
    <parameter key="Content" value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Content"/>
    </list>
    <parameter key="file_pattern" value="*.*"/>
    <parameter key="extract_text_only" value="true"/>
    <parameter key="use_file_extension_as_type" value="true"/>
    <parameter key="content_type" value="txt"/>
    <parameter key="encoding" value="SYSTEM"/>
    <parameter key="create_word_vector" value="true"/>
    <parameter key="vector_creation" value="TF-IDF"/>
    <parameter key="add_meta_information" value="true"/>
    <parameter key="keep_text" value="true"/>
    <parameter key="prune_method" value="none"/>
    <parameter key="prune_below_percent" value="3.0"/>
    <parameter key="prune_above_percent" value="30.0"/>
    <parameter key="prune_below_rank" value="0.05"/>
    <parameter key="prune_above_rank" value="0.95"/>
    <parameter key="datamanagement" value="double_sparse_array"/>
    <parameter key="data_management" value="auto"/>
    <process expanded="true">
    <operator activated="true" class="text:tokenize" compatibility="8.1.000" expanded="true" height="68" name="Tokenize (3)" width="90" x="45" y="34">
    <parameter key="mode" value="non letters"/>
    <parameter key="characters" value=".:"/>
    <parameter key="language" value="English"/>
    <parameter key="max_token_length" value="3"/>
    </operator>
    <operator activated="false" class="text:transform_cases" compatibility="8.1.000" expanded="true" height="68" name="Transform Cases (3)" width="90" x="45" y="136">
    <parameter key="transform_to" value="lower case"/>
    </operator>
    <operator activated="false" class="text:filter_stopwords_german" compatibility="8.1.000" expanded="true" height="68" name="Filter Stopwords (2)" width="90" x="45" y="238">
    <parameter key="stop_word_list" value="Standard"/>
    </operator>
    <operator activated="false" class="text:stem_german" compatibility="8.1.000" expanded="true" height="68" name="Stem (2)" width="90" x="45" y="340"/>
    <operator activated="false" class="text:generate_n_grams_terms" compatibility="8.1.000" expanded="true" height="68" name="Generate n-Grams (2)" width="90" x="45" y="442">
    <parameter key="max_length" value="1"/>
    </operator>
    <operator activated="false" class="text:filter_stopwords_english" compatibility="8.1.000" expanded="true" height="68" name="Filter Stopwords (3)" width="90" x="179" y="238"/>
    <connect from_port="document" to_op="Tokenize (3)" to_port="document"/>
    <connect from_op="Tokenize (3)" from_port="document" to_port="document 1"/>
    <connect from_op="Transform Cases (3)" from_port="document" to_op="Filter Stopwords (2)" to_port="document"/>
    <connect from_op="Filter Stopwords (2)" from_port="document" to_op="Stem (2)" to_port="document"/>
    <connect from_op="Stem (2)" from_port="document" to_op="Generate n-Grams (2)" to_port="document"/>
    <portSpacing port="source_document" spacing="0"/>
    <portSpacing port="sink_document 1" spacing="0"/>
    <portSpacing port="sink_document 2" spacing="0"/>
    </process>
    </operator>
    <operator activated="false" class="concurrency:cross_validation" compatibility="8.1.000" expanded="true" height="145" name="Cross Validation" width="90" x="246" y="34">
    <parameter key="split_on_batch_attribute" value="false"/>
    <parameter key="leave_one_out" value="false"/>
    <parameter key="number_of_folds" value="10"/>
    <parameter key="sampling_type" value="stratified sampling"/>
    <parameter key="use_local_random_seed" value="false"/>
    <parameter key="local_random_seed" value="1992"/>
    <parameter key="enable_parallel_execution" value="true"/>
    <process expanded="true">
    <operator activated="true" class="support_vector_machine" compatibility="8.1.000" expanded="true" height="124" name="SVM" width="90" x="112" y="34">
    <parameter key="kernel_type" value="dot"/>
    <parameter key="kernel_gamma" value="1.0"/>
    <parameter key="kernel_sigma1" value="1.0"/>
    <parameter key="kernel_sigma2" value="0.0"/>
    <parameter key="kernel_sigma3" value="2.0"/>
    <parameter key="kernel_shift" value="1.0"/>
    <parameter key="kernel_degree" value="2.0"/>
    <parameter key="kernel_a" value="1.0"/>
    <parameter key="kernel_b" value="0.0"/>
    <parameter key="kernel_cache" value="200"/>
    <parameter key="C" value="0.0"/>
    <parameter key="convergence_epsilon" value="0.001"/>
    <parameter key="max_iterations" value="100000"/>
    <parameter key="scale" value="true"/>
    <parameter key="calculate_weights" value="true"/>
    <parameter key="return_optimization_performance" value="true"/>
    <parameter key="L_pos" value="1.0"/>
    <parameter key="L_neg" value="1.0"/>
    <parameter key="epsilon" value="0.0"/>
    <parameter key="epsilon_plus" value="0.0"/>
    <parameter key="epsilon_minus" value="0.0"/>
    <parameter key="balance_cost" value="false"/>
    <parameter key="quadratic_loss_pos" value="false"/>
    <parameter key="quadratic_loss_neg" value="false"/>
    <parameter key="estimate_performance" value="false"/>
    </operator>
    <connect from_port="training set" to_op="SVM" to_port="training set"/>
    <connect from_op="SVM" from_port="model" to_port="model"/>
    <portSpacing port="source_training set" spacing="0"/>
    <portSpacing port="sink_model" spacing="0"/>
    <portSpacing port="sink_through 1" spacing="0"/>
    </process>
    <process expanded="true">
    <operator activated="true" class="apply_model" compatibility="8.1.000" expanded="true" height="82" name="Apply Model" width="90" x="45" y="34">
    <list key="application_parameters"/>
    <parameter key="create_view" value="false"/>
    </operator>
    <operator activated="true" class="performance" compatibility="8.1.000" expanded="true" height="82" name="Performance (2)" width="90" x="179" y="34">
    <parameter key="use_example_weights" value="true"/>
    </operator>
    <connect from_port="model" to_op="Apply Model" to_port="model"/>
    <connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
    <connect from_op="Apply Model" from_port="labelled data" to_op="Performance (2)" to_port="labelled data"/>
    <connect from_op="Performance (2)" from_port="performance" to_port="performance 1"/>
    <portSpacing port="source_model" spacing="0"/>
    <portSpacing port="source_test set" spacing="0"/>
    <portSpacing port="source_through 1" spacing="0"/>
    <portSpacing port="sink_test set results" spacing="0"/>
    <portSpacing port="sink_performance 1" spacing="0"/>
    <portSpacing port="sink_performance 2" spacing="0"/>
    </process>
    <description align="center" color="green" colored="true" width="126">Learning Model&lt;br/&gt;(From Data-Files)&lt;br&gt;</description>
    </operator>
    <operator activated="false" class="store" compatibility="8.1.000" expanded="true" height="68" name="Store (2)" width="90" x="380" y="34">
    <parameter key="repository_entry" value="Model"/>
    </operator>
    <operator activated="false" class="store" compatibility="8.1.000" expanded="true" height="68" name="Store" width="90" x="112" y="136">
    <parameter key="repository_entry" value="Wordlist"/>
    </operator>
    <operator activated="true" class="retrieve" compatibility="8.1.000" expanded="true" height="68" name="Retrieve (2)" width="90" x="380" y="595">
    <parameter key="repository_entry" value="Model"/>
    </operator>
    <operator activated="false" class="read_excel" compatibility="8.1.000" expanded="true" height="68" name="Read Excel" width="90" x="45" y="289">
    <parameter key="excel_file" value="C:\Users\Nicson\Documents\Rapidminer\02_Apply_Model\Classification\Classes.xlsx"/>
    <parameter key="sheet_selection" value="sheet number"/>
    <parameter key="sheet_number" value="1"/>
    <parameter key="imported_cell_range" value="A1:B112"/>
    <parameter key="encoding" value="SYSTEM"/>
    <parameter key="first_row_as_names" value="false"/>
    <list key="annotations">
    <parameter key="0" value="Name"/>
    </list>
    <parameter key="date_format" value=""/>
    <parameter key="time_zone" value="SYSTEM"/>
    <parameter key="locale" value="English (United States)"/>
    <parameter key="read_all_values_as_polynominal" value="false"/>
    <list key="data_set_meta_data_information">
    <parameter key="0" value="Text.true.text.attribute"/>
    <parameter key="1" value="Class.true.text.label"/>
    </list>
    <parameter key="read_not_matching_values_as_missings" value="true"/>
    <parameter key="datamanagement" value="double_array"/>
    <parameter key="data_management" value="auto"/>
    </operator>
    <operator activated="false" class="text:process_document_from_data" compatibility="8.1.000" expanded="true" height="82" name="Process Documents from Data" width="90" x="179" y="289">
    <parameter key="create_word_vector" value="true"/>
    <parameter key="vector_creation" value="TF-IDF"/>
    <parameter key="add_meta_information" value="true"/>
    <parameter key="keep_text" value="false"/>
    <parameter key="prune_method" value="none"/>
    <parameter key="prune_below_percent" value="3.0"/>
    <parameter key="prune_above_percent" value="30.0"/>
    <parameter key="prune_below_rank" value="0.05"/>
    <parameter key="prune_above_rank" value="0.95"/>
    <parameter key="datamanagement" value="double_sparse_array"/>
    <parameter key="data_management" value="auto"/>
    <parameter key="select_attributes_and_weights" value="false"/>
    <list key="specify_weights">
    <parameter key="jkjk" value="1.0"/>
    </list>
    <process expanded="true">
    <operator activated="true" class="text:tokenize" compatibility="8.1.000" expanded="true" height="68" name="Tokenize (4)" width="90" x="45" y="34">
    <parameter key="mode" value="non letters"/>
    <parameter key="characters" value=".:"/>
    <parameter key="language" value="English"/>
    <parameter key="max_token_length" value="3"/>
    </operator>
    <operator activated="true" class="text:transform_cases" compatibility="8.1.000" expanded="true" height="68" name="Transform Cases (4)" width="90" x="45" y="136">
    <parameter key="transform_to" value="lower case"/>
    </operator>
    <operator activated="true" class="text:filter_stopwords_german" compatibility="8.1.000" expanded="true" height="68" name="Filter Stopwords (5)" width="90" x="45" y="238">
    <parameter key="stop_word_list" value="Standard"/>
    </operator>
    <operator activated="true" class="text:stem_porter" compatibility="8.1.000" expanded="true" height="68" name="Stem (Porter)" width="90" x="45" y="340"/>
    <operator activated="true" class="text:generate_n_grams_terms" compatibility="8.1.000" expanded="true" height="68" name="Generate n-Grams (4)" width="90" x="45" y="442">
    <parameter key="max_length" value="3"/>
    </operator>
    <operator activated="true" class="text:filter_by_length" compatibility="8.1.000" expanded="true" height="68" name="Filter Tokens (by Length)" width="90" x="246" y="442">
    <parameter key="min_chars" value="2"/>
    <parameter key="max_chars" value="999"/>
    </operator>
    <connect from_port="document" to_op="Tokenize (4)" to_port="document"/>
    <connect from_op="Tokenize (4)" from_port="document" to_op="Transform Cases (4)" to_port="document"/>
    <connect from_op="Transform Cases (4)" from_port="document" to_op="Filter Stopwords (5)" to_port="document"/>
    <connect from_op="Filter Stopwords (5)" from_port="document" to_op="Stem (Porter)" to_port="document"/>
    <connect from_op="Stem (Porter)" from_port="document" to_op="Generate n-Grams (4)" to_port="document"/>
    <connect from_op="Generate n-Grams (4)" from_port="document" to_op="Filter Tokens (by Length)" to_port="document"/>
    <connect from_op="Filter Tokens (by Length)" from_port="document" to_port="document 1"/>
    <portSpacing port="source_document" spacing="0"/>
    <portSpacing port="sink_document 1" spacing="0"/>
    <portSpacing port="sink_document 2" spacing="0"/>
    </process>
    </operator>
    <operator activated="false" class="store" compatibility="8.1.000" expanded="true" height="68" name="Store (3)" width="90" x="246" y="391">
    <parameter key="repository_entry" value="Wordlist"/>
    </operator>
    <operator activated="false" class="concurrency:cross_validation" compatibility="8.1.000" expanded="true" height="145" name="Cross Validation (2)" width="90" x="380" y="289">
    <parameter key="split_on_batch_attribute" value="false"/>
    <parameter key="leave_one_out" value="false"/>
    <parameter key="number_of_folds" value="10"/>
    <parameter key="sampling_type" value="stratified sampling"/>
    <parameter key="use_local_random_seed" value="false"/>
    <parameter key="local_random_seed" value="1992"/>
    <parameter key="enable_parallel_execution" value="true"/>
    <process expanded="true">
    <operator activated="true" class="polynomial_by_binomial_classification" compatibility="8.1.000" expanded="true" height="82" name="Polynominal by Binominal Classification" width="90" x="112" y="34">
    <parameter key="classification_strategies" value="1 against all"/>
    <parameter key="random_code_multiplicator" value="2.0"/>
    <parameter key="use_local_random_seed" value="false"/>
    <parameter key="local_random_seed" value="1992"/>
    <process expanded="true">
    <operator activated="false" class="support_vector_machine" compatibility="8.1.000" expanded="true" height="124" name="SVM (2)" width="90" x="313" y="187">
    <parameter key="kernel_type" value="dot"/>
    <parameter key="kernel_gamma" value="1.0"/>
    <parameter key="kernel_sigma1" value="1.0"/>
    <parameter key="kernel_sigma2" value="0.0"/>
    <parameter key="kernel_sigma3" value="2.0"/>
    <parameter key="kernel_shift" value="1.0"/>
    <parameter key="kernel_degree" value="2.0"/>
    <parameter key="kernel_a" value="1.0"/>
    <parameter key="kernel_b" value="0.0"/>
    <parameter key="kernel_cache" value="200"/>
    <parameter key="C" value="0.0"/>
    <parameter key="convergence_epsilon" value="0.001"/>
    <parameter key="max_iterations" value="100000"/>
    <parameter key="scale" value="true"/>
    <parameter key="calculate_weights" value="true"/>
    <parameter key="return_optimization_performance" value="true"/>
    <parameter key="L_pos" value="1.0"/>
    <parameter key="L_neg" value="1.0"/>
    <parameter key="epsilon" value="0.0"/>
    <parameter key="epsilon_plus" value="0.0"/>
    <parameter key="epsilon_minus" value="0.0"/>
    <parameter key="balance_cost" value="false"/>
    <parameter key="quadratic_loss_pos" value="false"/>
    <parameter key="quadratic_loss_neg" value="false"/>
    <parameter key="estimate_performance" value="false"/>
    </operator>
    <operator activated="true" class="naive_bayes" compatibility="8.1.000" expanded="true" height="82" name="Naive Bayes" width="90" x="313" y="85">
    <parameter key="laplace_correction" value="true"/>
    </operator>
    <connect from_port="training set" to_op="Naive Bayes" to_port="training set"/>
    <connect from_op="Naive Bayes" from_port="model" to_port="model"/>
    <portSpacing port="source_training set" spacing="0"/>
    <portSpacing port="sink_model" spacing="0"/>
    </process>
    </operator>
    <connect from_port="training set" to_op="Polynominal by Binominal Classification" to_port="training set"/>
    <connect from_op="Polynominal by Binominal Classification" from_port="model" to_port="model"/>
    <portSpacing port="source_training set" spacing="0"/>
    <portSpacing port="sink_model" spacing="0"/>
    <portSpacing port="sink_through 1" spacing="0"/>
    </process>
    <process expanded="true">
    <operator activated="true" class="apply_model" compatibility="8.1.000" expanded="true" height="82" name="Apply Model (3)" width="90" x="45" y="34">
    <list key="application_parameters"/>
    <parameter key="create_view" value="false"/>
    </operator>
    <operator activated="true" class="performance" compatibility="8.1.000" expanded="true" height="82" name="Performance (3)" width="90" x="179" y="34">
    <parameter key="use_example_weights" value="true"/>
    </operator>
    <connect from_port="model" to_op="Apply Model (3)" to_port="model"/>
    <connect from_port="test set" to_op="Apply Model (3)" to_port="unlabelled data"/>
    <connect from_op="Apply Model (3)" from_port="labelled data" to_op="Performance (3)" to_port="labelled data"/>
    <connect from_op="Performance (3)" from_port="performance" to_port="performance 1"/>
    <portSpacing port="source_model" spacing="0"/>
    <portSpacing port="source_test set" spacing="0"/>
    <portSpacing port="source_through 1" spacing="0"/>
    <portSpacing port="sink_test set results" spacing="0"/>
    <portSpacing port="sink_performance 1" spacing="0"/>
    <portSpacing port="sink_performance 2" spacing="0"/>
    </process>
    <description align="center" color="green" colored="true" width="126">Learning Model&lt;br&gt; (From Excel)</description>
    </operator>
    <operator activated="false" class="store" compatibility="8.1.000" expanded="true" height="68" name="Store (4)" width="90" x="514" y="238">
    <parameter key="repository_entry" value="Model"/>
    </operator>
    <operator activated="true" class="read_excel" compatibility="8.1.000" expanded="true" height="68" name="Read Excel (2)" width="90" x="112" y="748">
    <parameter key="excel_file" value="C:\Users\Nicsons\Documents\Rapidminer\02_Apply_Model\Source\180225.xlsx"/>
    <parameter key="sheet_selection" value="sheet number"/>
    <parameter key="sheet_number" value="1"/>
    <parameter key="imported_cell_range" value="A1:B200"/>
    <parameter key="encoding" value="SYSTEM"/>
    <parameter key="first_row_as_names" value="false"/>
    <list key="annotations">
    <parameter key="0" value="Name"/>
    </list>
    <parameter key="date_format" value=""/>
    <parameter key="time_zone" value="SYSTEM"/>
    <parameter key="locale" value="English (United States)"/>
    <parameter key="read_all_values_as_polynominal" value="false"/>
    <list key="data_set_meta_data_information">
    <parameter key="0" value="Verbatim.true.text.attribute"/>
    <parameter key="1" value="Class.true.attribute_value.label"/>
    </list>
    <parameter key="read_not_matching_values_as_missings" value="true"/>
    <parameter key="datamanagement" value="double_array"/>
    <parameter key="data_management" value="auto"/>
    </operator>
    <operator activated="true" class="retrieve" compatibility="8.1.000" expanded="true" height="68" name="Retrieve (3)" width="90" x="45" y="646">
    <parameter key="repository_entry" value="Wordlist"/>
    </operator>
    <operator activated="true" class="text:process_document_from_data" compatibility="8.1.000" expanded="true" height="82" name="Process Documents from Data (2)" width="90" x="246" y="697">
    <parameter key="create_word_vector" value="true"/>
    <parameter key="vector_creation" value="Term Frequency"/>
    <parameter key="add_meta_information" value="true"/>
    <parameter key="keep_text" value="true"/>
    <parameter key="prune_method" value="percentual"/>
    <parameter key="prune_below_percent" value="5.0"/>
    <parameter key="prune_above_percent" value="95.0"/>
    <parameter key="prune_below_rank" value="0.05"/>
    <parameter key="prune_above_rank" value="0.95"/>
    <parameter key="datamanagement" value="double_sparse_array"/>
    <parameter key="data_management" value="auto"/>
    <parameter key="select_attributes_and_weights" value="false"/>
    <list key="specify_weights">
    <parameter key="jkjk" value="1.0"/>
    </list>
    <process expanded="true">
    <operator activated="true" class="text:tokenize" compatibility="8.1.000" expanded="true" height="68" name="Tokenize (5)" width="90" x="45" y="34">
    <parameter key="mode" value="non letters"/>
    <parameter key="characters" value=".:"/>
    <parameter key="language" value="English"/>
    <parameter key="max_token_length" value="3"/>
    </operator>
    <operator activated="true" class="text:transform_cases" compatibility="8.1.000" expanded="true" height="68" name="Transform Cases (5)" width="90" x="45" y="136">
    <parameter key="transform_to" value="lower case"/>
    </operator>
    <operator activated="true" class="text:filter_stopwords_german" compatibility="8.1.000" expanded="true" height="68" name="Filter Stopwords (6)" width="90" x="45" y="238">
    <parameter key="stop_word_list" value="Standard"/>
    </operator>
    <operator activated="true" class="text:stem_porter" compatibility="8.1.000" expanded="true" height="68" name="Stem (4)" width="90" x="45" y="340"/>
    <operator activated="true" class="text:generate_n_grams_terms" compatibility="8.1.000" expanded="true" height="68" name="Generate n-Grams (5)" width="90" x="45" y="442">
    <parameter key="max_length" value="3"/>
    </operator>
    <operator activated="true" class="text:filter_by_length" compatibility="8.1.000" expanded="true" height="68" name="Filter Tokens (2)" width="90" x="246" y="442">
    <parameter key="min_chars" value="2"/>
    <parameter key="max_chars" value="999"/>
    </operator>
    <connect from_port="document" to_op="Tokenize (5)" to_port="document"/>
    <connect from_op="Tokenize (5)" from_port="document" to_op="Transform Cases (5)" to_port="document"/>
    <connect from_op="Transform Cases (5)" from_port="document" to_op="Filter Stopwords (6)" to_port="document"/>
    <connect from_op="Filter Stopwords (6)" from_port="document" to_op="Stem (4)" to_port="document"/>
    <connect from_op="Stem (4)" from_port="document" to_op="Generate n-Grams (5)" to_port="document"/>
    <connect from_op="Generate n-Grams (5)" from_port="document" to_op="Filter Tokens (2)" to_port="document"/>
    <connect from_op="Filter Tokens (2)" from_port="document" to_port="document 1"/>
    <portSpacing port="source_document" spacing="0"/>
    <portSpacing port="sink_document 1" spacing="0"/>
    <portSpacing port="sink_document 2" spacing="0"/>
    </process>
    </operator>
    <operator activated="true" class="apply_model" compatibility="8.1.000" expanded="true" height="82" name="Apply Model (2)" width="90" x="514" y="646">
    <list key="application_parameters"/>
    <parameter key="create_view" value="false"/>
    <description align="center" color="green" colored="true" width="126">Apply Model&lt;br&gt;(Real Data)</description>
    </operator>
    <connect from_op="Process Documents from Files" from_port="example set" to_op="Cross Validation" to_port="example set"/>
    <connect from_op="Process Documents from Files" from_port="word list" to_op="Store" to_port="input"/>
    <connect from_op="Cross Validation" from_port="model" to_op="Store (2)" to_port="input"/>
    <connect from_op="Retrieve (2)" from_port="output" to_op="Apply Model (2)" to_port="model"/>
    <connect from_op="Read Excel" from_port="output" to_op="Process Documents from Data" to_port="example set"/>
    <connect from_op="Process Documents from Data" from_port="example set" to_op="Cross Validation (2)" to_port="example set"/>
    <connect from_op="Process Documents from Data" from_port="word list" to_op="Store (3)" to_port="input"/>
    <connect from_op="Cross Validation (2)" from_port="model" to_op="Store (4)" to_port="input"/>
    <connect from_op="Read Excel (2)" from_port="output" to_op="Process Documents from Data (2)" to_port="example set"/>
    <connect from_op="Retrieve (3)" from_port="output" to_op="Process Documents from Data (2)" to_port="word list"/>
    <connect from_op="Process Documents from Data (2)" from_port="example set" to_op="Apply Model (2)" to_port="unlabelled data"/>
    <connect from_op="Apply Model (2)" from_port="labelled data" to_port="result 1"/>
    <portSpacing port="source_input 1" spacing="0"/>
    <portSpacing port="sink_result 1" spacing="0"/>
    <portSpacing port="sink_result 2" spacing="0"/>
    </process>
    </operator>
    </process>
  • sgenzersgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @Nicson - yes that XML works and it all looks fine. To improve your model (particularly with TF-IDF), you really should do some feature selection. There are several tutorials on how to do this. And of course optimizing your model (e.g. using Optimize Parameters) and trying different models may work well. You could even try using Auto Model on the example set AFTER you've done the Process Documents from Data operator.
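    For intuition about what the TF-IDF vectors being discussed here contain: a term is weighted by how often it appears in one document and how rare it is across all documents. A minimal plain-Python sketch of the idea (RapidMiner's exact normalization may differ from this textbook form):

```python
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists. Returns one {term: tf-idf score} dict per document."""
    n_docs = len(docs)
    doc_freq = Counter()                 # in how many documents each term occurs
    for doc in docs:
        doc_freq.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({term: (count / len(doc)) * math.log(n_docs / doc_freq[term])
                        for term, count in tf.items()})
    return vectors
```

    A term that occurs in every document gets weight 0, which is one reason pruning and feature selection pay off before training.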

     

    Scott

     

     

     

  • NicsonNicson Member Posts: 18 Learner III

    After some time has passed, I would like to report back again. @sgenzer

     

    In the meantime I have read through a lot of things and understood how important a good data model is.
    That is why, as already mentioned, I have considered and carried out the following: I processed a variety of test documents and extracted a word list with the most important terms and n-grams. However, I have noticed that there are a lot of similar terms, and I asked myself whether I can group them again within RapidMiner. I'm sure it is possible, but my attempts have all failed so far.
    I just want to check the terms in the list (without reference to the documents) for similarities and cluster them. What is the easiest way to do this?
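    To illustrate what such a similarity-based grouping of the word list could look like, here is a toy greedy grouping on string similarity using only Python's standard library (the 0.7 threshold is an arbitrary assumption, not a RapidMiner setting):

```python
from difflib import SequenceMatcher

def group_terms(terms, threshold=0.7):
    """Greedily assign each term to the first group whose representative
    is similar enough; otherwise the term starts its own group."""
    groups = []  # list of (representative, members)
    for term in terms:
        for rep, members in groups:
            if SequenceMatcher(None, term.lower(), rep.lower()).ratio() >= threshold:
                members.append(term)
                break
        else:
            groups.append((term, [term]))
    return groups
```

    With this sketch, `group_terms(["error", "error_xy", "login"])` puts "error" and "error_xy" into one group and "login" into its own.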

     

    The next point would be the actual classification. The "basic framework" is already there (if this works at all with this setup?). I have reworked my concept a little. I would like to check new documents in such a way that the occurrence of a word/n-gram (either an exact match or a match to a certain degree of similarity) is indicated in a new column, for example by True or False. The problem is that I only have words/n-grams that speak for a certain category, not against it.

     

     

    Thank you very much

  • sgenzersgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @Nicson glad you're making progress. So for grouping similar terms, I generally use Replace Tokens (Dictionary) and choose one token to represent each grouping. I'm not sure I understand your second question very well. The TF-IDF will give you a value; if you want to convert this to true/false, you can simply create a threshold and convert.
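    The thresholding described here is a one-liner; a minimal sketch in plain Python (the 0.1 cut-off is a placeholder you would tune yourself):

```python
def flag_terms(tfidf_row, threshold=0.1):
    """Convert one document's numeric TF-IDF scores into True/False indicators."""
    return {term: score >= threshold for term, score in tfidf_row.items()}
```

    For example, `flag_terms({"error": 0.42, "delivery": 0.03})` flags only "error".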


    Scott

     

  • NicsonNicson Member Posts: 18 Learner III

    @sgenzer

    I might have expressed myself poorly. The method you mentioned is certainly of interest for future projects, though. What I meant was: I extracted a list of frequent terms and n-grams, which are to be classified manually for machine learning. Now I wanted to "cluster" this list first. Terms such as "error" or "error_xy" should be grouped together so that I can manually assign them to a certain label in one step.

    Then I would like to check each new document for consistency or similarity of terms from the list.

     

    As an example:

    The Error list contains: (Error, Error_code, Error_xy,...)

    A new document has the following text: "I'm getting an error."

    In this case, the document gets the label "error" because there is a direct match.
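    This matching rule can be sketched in plain Python (the dictionary contents are just the example terms from above; fuzzy matches against similar, non-identical terms would need a similarity measure on top):

```python
import re

LABEL_DICTIONARY = {
    "error": {"error", "error_code", "error_xy"},
    # further labels and their term lists go here
}

def label_document(text, dictionary=LABEL_DICTIONARY):
    """Return every label whose term list overlaps the document's tokens."""
    tokens = set(re.findall(r"[a-z_]+", text.lower()))
    return sorted(label for label, terms in dictionary.items() if tokens & terms)
```

    Here `label_document("I'm getting an error.")` returns `["error"]`, exactly the direct-match case described above.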


    You know what I mean?

     

     

  • MartinLiebigMartinLiebig Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, RapidMiner Certified Expert, University Professor Posts: 3,533 RM Data Scientist

    Dear @Nicson,

    for your info - we've just added a new operator called Extract Topics (LDA) to the Operator Toolbox. It is able to automatically detect topics for documents and returns the n most important words per topic. The difference to Scott's clustering approach is that a document can be assigned to more than one topic.
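    To give a feel for what LDA does, here is a toy collapsed-Gibbs LDA in plain Python — a didactic sketch, not the operator's implementation (the topic count, iteration count, and alpha/beta priors are arbitrary assumptions):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics=2, iters=200, alpha=0.1, beta=0.01, seed=0):
    """docs: list of token lists. Returns (per-doc topic counts, top words per topic)."""
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    doc_topic = [[0] * n_topics for _ in docs]                 # tokens per topic in each doc
    topic_word = [defaultdict(int) for _ in range(n_topics)]   # word counts per topic
    topic_total = [0] * n_topics
    assign = []                                                # current topic of every token
    for di, doc in enumerate(docs):                            # random initialization
        zs = []
        for w in doc:
            t = rng.randrange(n_topics)
            zs.append(t)
            doc_topic[di][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1
        assign.append(zs)
    for _ in range(iters):
        for di, doc in enumerate(docs):
            for wi, w in enumerate(doc):
                t = assign[di][wi]                             # remove token, then resample
                doc_topic[di][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
                weights = [(doc_topic[di][k] + alpha) * (topic_word[k][w] + beta)
                           / (topic_total[k] + vocab_size * beta)
                           for k in range(n_topics)]
                r = rng.random() * sum(weights); acc = 0.0
                for k, wt in enumerate(weights):
                    acc += wt
                    if r <= acc:
                        t = k
                        break
                assign[di][wi] = t
                doc_topic[di][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1
    top_words = [sorted(tw, key=tw.get, reverse=True)[:3] for tw in topic_word]
    return doc_topic, top_words
```

    Because each document keeps a count per topic, a single document can belong to several topics at once — the property highlighted above.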

    Tell me if you need this ASAP. We are finishing other operators at the moment, so it could take a few days before this hits the Marketplace.

     

    Best,

    Martin

    - Sr. Director Data Solutions, Altair RapidMiner -
    Dortmund, Germany
  • NicsonNicson Member Posts: 18 Learner III

    Thank you @mschmitz for this information. I am not in a hurry with my project at the moment, so I can wait until then. Maybe @sgenzer has a further idea that I could try in the meantime.

  • sgenzersgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    nope. If I were you @Nicson, I would follow @mschmitz's lead. :)

     

    Scott

     

  • NicsonNicson Member Posts: 18 Learner III

    I will definitely do that @sgenzer :)


    In the meantime I have managed to process my training documents and create valid clusters. In the book "Predictive Analytics and Data Mining" I became aware of another clustering example and was able to implement it successfully.

    I have now saved the output of the clustering operator (k-Medoids) as an Excel file and would like to rename the cluster labels to my specific classes. Afterwards I would like to train a model on this labeled data and apply that model to unknown data.

     

    How do I integrate the classified data into the process? Do I have to run it through text processing again, or how should I proceed?

     

    Thank you for your efforts.

  • sgenzersgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @Nicson can you please share your XML and data set?

     

     

  • NicsonNicson Member Posts: 18 Learner III

    Here's the XML code @sgenzer:

    <?xml version="1.0" encoding="UTF-8"?><process version="8.1.001">
    <context>
    <input/>
    <output/>
    <macros/>
    </context>
    <operator activated="true" class="process" compatibility="8.1.001" expanded="true" name="Process">
    <parameter key="logverbosity" value="init"/>
    <parameter key="random_seed" value="2001"/>
    <parameter key="send_mail" value="never"/>
    <parameter key="notification_email" value=""/>
    <parameter key="process_duration_for_mail" value="30"/>
    <parameter key="encoding" value="SYSTEM"/>
    <process expanded="true">
    <operator activated="true" class="read_excel" compatibility="8.1.000" expanded="true" height="68" name="Read Excel" width="90" x="45" y="34">
    <parameter key="sheet_selection" value="sheet number"/>
    <parameter key="sheet_number" value="1"/>
    <parameter key="imported_cell_range" value="A1:A657"/>
    <parameter key="encoding" value="SYSTEM"/>
    <parameter key="first_row_as_names" value="false"/>
    <list key="annotations">
    <parameter key="0" value="Name"/>
    </list>
    <parameter key="date_format" value=""/>
    <parameter key="time_zone" value="SYSTEM"/>
    <parameter key="locale" value="English (United States)"/>
    <parameter key="read_all_values_as_polynominal" value="false"/>
    <list key="data_set_meta_data_information">
    <parameter key="0" value="Content.true.text.attribute"/>
    </list>
    <parameter key="read_not_matching_values_as_missings" value="true"/>
    <parameter key="datamanagement" value="double_array"/>
    <parameter key="data_management" value="auto"/>
    </operator>
    <operator activated="true" breakpoints="after" class="text:process_document_from_data" compatibility="8.1.000" expanded="true" height="82" name="Process Documents from Data" width="90" x="179" y="34">
    <parameter key="create_word_vector" value="true"/>
    <parameter key="vector_creation" value="TF-IDF"/>
    <parameter key="add_meta_information" value="true"/>
    <parameter key="keep_text" value="true"/>
    <parameter key="prune_method" value="absolute"/>
    <parameter key="prune_below_percent" value="10.0"/>
    <parameter key="prune_above_percent" value="100.0"/>
    <parameter key="prune_below_absolute" value="10"/>
    <parameter key="prune_above_absolute" value="9999"/>
    <parameter key="prune_below_rank" value="0.05"/>
    <parameter key="prune_above_rank" value="0.95"/>
    <parameter key="datamanagement" value="double_sparse_array"/>
    <parameter key="data_management" value="auto"/>
    <parameter key="select_attributes_and_weights" value="false"/>
    <list key="specify_weights"/>
    <process expanded="true">
    <operator activated="true" class="text:tokenize" compatibility="8.1.000" expanded="true" height="68" name="Tokenize (2)" width="90" x="45" y="34">
    <parameter key="mode" value="non letters"/>
    <parameter key="characters" value=".: "/>
    <parameter key="language" value="German"/>
    <parameter key="max_token_length" value="3"/>
    </operator>
    <operator activated="true" class="text:tokenize" compatibility="8.1.000" expanded="true" height="68" name="Tokenize" width="90" x="45" y="136">
    <parameter key="mode" value="linguistic sentences"/>
    <parameter key="characters" value=".:"/>
    <parameter key="language" value="German"/>
    <parameter key="max_token_length" value="3"/>
    </operator>
    <operator activated="true" class="text:filter_stopwords_german" compatibility="8.1.000" expanded="true" height="68" name="Filter Stopwords (German)" width="90" x="45" y="238">
    <parameter key="stop_word_list" value="Standard"/>
    </operator>
    <operator activated="true" class="text:filter_by_length" compatibility="8.1.000" expanded="true" height="68" name="Filter Tokens (2)" width="90" x="45" y="340">
    <parameter key="min_chars" value="3"/>
    <parameter key="max_chars" value="999"/>
    </operator>
    <operator activated="true" class="text:stem_snowball" compatibility="8.1.000" expanded="true" height="68" name="Stem (Snowball)" width="90" x="179" y="136">
    <parameter key="language" value="German"/>
    </operator>
    <operator activated="true" class="text:generate_n_grams_terms" compatibility="8.1.000" expanded="true" height="68" name="Generate n-Grams (Terms)" width="90" x="179" y="238">
    <parameter key="max_length" value="3"/>
    </operator>
    <operator activated="true" class="text:transform_cases" compatibility="8.1.000" expanded="true" height="68" name="Transform Cases" width="90" x="179" y="340">
    <parameter key="transform_to" value="lower case"/>
    </operator>
    <connect from_port="document" to_op="Tokenize (2)" to_port="document"/>
    <connect from_op="Tokenize (2)" from_port="document" to_op="Tokenize" to_port="document"/>
    <connect from_op="Tokenize" from_port="document" to_op="Filter Stopwords (German)" to_port="document"/>
    <connect from_op="Filter Stopwords (German)" from_port="document" to_op="Filter Tokens (2)" to_port="document"/>
    <connect from_op="Filter Tokens (2)" from_port="document" to_op="Stem (Snowball)" to_port="document"/>
    <connect from_op="Stem (Snowball)" from_port="document" to_op="Generate n-Grams (Terms)" to_port="document"/>
    <connect from_op="Generate n-Grams (Terms)" from_port="document" to_op="Transform Cases" to_port="document"/>
    <connect from_op="Transform Cases" from_port="document" to_port="document 1"/>
    <portSpacing port="source_document" spacing="0"/>
    <portSpacing port="sink_document 1" spacing="0"/>
    <portSpacing port="sink_document 2" spacing="0"/>
    </process>
    </operator>
    <operator activated="true" class="k_medoids" compatibility="8.1.001" expanded="true" height="82" name="Clustering (2)" width="90" x="380" y="85">
    <parameter key="add_cluster_attribute" value="true"/>
    <parameter key="add_as_label" value="true"/>
    <parameter key="remove_unlabeled" value="true"/>
    <parameter key="k" value="10"/>
    <parameter key="max_runs" value="10"/>
    <parameter key="max_optimization_steps" value="100"/>
    <parameter key="use_local_random_seed" value="false"/>
    <parameter key="local_random_seed" value="1992"/>
    <parameter key="measure_types" value="NumericalMeasures"/>
    <parameter key="mixed_measure" value="MixedEuclideanDistance"/>
    <parameter key="nominal_measure" value="NominalDistance"/>
    <parameter key="numerical_measure" value="EuclideanDistance"/>
    <parameter key="divergence" value="GeneralizedIDivergence"/>
    <parameter key="kernel_type" value="radial"/>
    <parameter key="kernel_gamma" value="1.0"/>
    <parameter key="kernel_sigma1" value="1.0"/>
    <parameter key="kernel_sigma2" value="0.0"/>
    <parameter key="kernel_sigma3" value="2.0"/>
    <parameter key="kernel_degree" value="3.0"/>
    <parameter key="kernel_shift" value="1.0"/>
    <parameter key="kernel_a" value="1.0"/>
    <parameter key="kernel_b" value="0.0"/>
    </operator>
    <operator activated="true" class="write_excel" compatibility="8.1.001" expanded="true" height="82" name="Write Excel" width="90" x="447" y="187">
    <parameter key="file_format" value="xlsx"/>
    <parameter key="encoding" value="SYSTEM"/>
    <parameter key="sheet_name" value="RapidMiner Data"/>
    <parameter key="date_format" value="yyyy-MM-dd HH:mm:ss"/>
    <parameter key="number_format" value="#.0"/>
    </operator>
    <connect from_op="Read Excel" from_port="output" to_op="Process Documents from Data" to_port="example set"/>
    <connect from_op="Process Documents from Data" from_port="example set" to_op="Clustering (2)" to_port="example set"/>
    <connect from_op="Process Documents from Data" from_port="word list" to_port="result 2"/>
    <connect from_op="Clustering (2)" from_port="cluster model" to_port="result 1"/>
    <connect from_op="Clustering (2)" from_port="clustered set" to_op="Write Excel" to_port="input"/>
    <connect from_op="Write Excel" from_port="through" to_port="result 3"/>
    <portSpacing port="source_input 1" spacing="0"/>
    <portSpacing port="sink_result 1" spacing="0"/>
    <portSpacing port="sink_result 2" spacing="0"/>
    <portSpacing port="sink_result 3" spacing="0"/>
    <portSpacing port="sink_result 4" spacing="0"/>
    </process>
    </operator>
    </process>
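For intuition, the core of what the Process Documents from Data operator computes in the process above — tokenizing on non-letters, lower-casing, dropping stopwords and short tokens, then TF-IDF weighting — can be sketched in plain Python. The tiny stopword list and sample texts below are illustrative assumptions, not RapidMiner's actual lists:

```python
import math
import re
from collections import Counter

# Tiny illustrative sample -- RapidMiner's German stopword list is much larger.
GERMAN_STOPWORDS = {"und", "der", "die", "das", "ist", "nicht", "ein", "mit"}

def preprocess(text, min_chars=3):
    """Tokenize on non-letters, lower-case, drop stopwords and short tokens
    (mirrors Tokenize -> Filter Stopwords -> Filter Tokens -> Transform Cases)."""
    tokens = [t.lower() for t in re.split(r"[^A-Za-zÄÖÜäöüß]+", text) if t]
    return [t for t in tokens if len(t) >= min_chars and t not in GERMAN_STOPWORDS]

def tfidf(docs):
    """Compute TF-IDF vectors (term frequency times inverse document
    frequency) for a list of token lists, as sparse dicts."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vec = {term: (count / (len(doc) or 1)) * math.log(n / df[term])
               for term, count in tf.items()}
        vectors.append(vec)
    return vectors
```

Terms occurring in every document get a weight of zero, which is one reason pruning (as in the process above) helps before clustering.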

    Unfortunately, I can't publish the data set because I used internal data from my university.

  • sgenzersgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @Nicson - ok, no problem. I cannot test your process without data, but I am attaching your process with some additional notes and operators so you can get the gist of where to go. You do not need to run Process Documents again.

     

    <?xml version="1.0" encoding="UTF-8"?><process version="8.1.001">
    <context>
    <input/>
    <output/>
    <macros/>
    </context>
    <operator activated="true" class="process" compatibility="8.1.001" expanded="true" name="Process">
    <process expanded="true">
    <operator activated="true" class="read_excel" compatibility="8.1.000" expanded="true" height="68" name="Read Excel" width="90" x="45" y="34">
    <parameter key="imported_cell_range" value="A1:A657"/>
    <parameter key="first_row_as_names" value="false"/>
    <list key="annotations">
    <parameter key="0" value="Name"/>
    </list>
    <list key="data_set_meta_data_information">
    <parameter key="0" value="Content.true.text.attribute"/>
    </list>
    </operator>
    <operator activated="true" breakpoints="after" class="text:process_document_from_data" compatibility="7.5.000" expanded="true" height="82" name="Process Documents from Data" width="90" x="179" y="34">
    <parameter key="keep_text" value="true"/>
    <parameter key="prune_method" value="absolute"/>
    <parameter key="prune_below_percent" value="10.0"/>
    <parameter key="prune_above_percent" value="100.0"/>
    <parameter key="prune_below_absolute" value="10"/>
    <parameter key="prune_above_absolute" value="9999"/>
    <list key="specify_weights"/>
    <process expanded="true">
    <operator activated="true" class="text:tokenize" compatibility="7.5.000" expanded="true" height="68" name="Tokenize (2)" width="90" x="45" y="34">
    <parameter key="characters" value=".: "/>
    <parameter key="language" value="German"/>
    </operator>
    <operator activated="true" class="text:tokenize" compatibility="7.5.000" expanded="true" height="68" name="Tokenize" width="90" x="45" y="136">
    <parameter key="mode" value="linguistic sentences"/>
    <parameter key="language" value="German"/>
    </operator>
    <operator activated="true" class="text:filter_stopwords_german" compatibility="7.5.000" expanded="true" height="68" name="Filter Stopwords (German)" width="90" x="45" y="238"/>
    <operator activated="true" class="text:filter_by_length" compatibility="7.5.000" expanded="true" height="68" name="Filter Tokens (2)" width="90" x="45" y="340">
    <parameter key="min_chars" value="3"/>
    <parameter key="max_chars" value="999"/>
    </operator>
    <operator activated="true" class="text:stem_snowball" compatibility="7.5.000" expanded="true" height="68" name="Stem (Snowball)" width="90" x="179" y="136">
    <parameter key="language" value="German"/>
    </operator>
    <operator activated="true" class="text:generate_n_grams_terms" compatibility="7.5.000" expanded="true" height="68" name="Generate n-Grams (Terms)" width="90" x="179" y="238">
    <parameter key="max_length" value="3"/>
    </operator>
    <operator activated="true" class="text:transform_cases" compatibility="7.5.000" expanded="true" height="68" name="Transform Cases" width="90" x="179" y="340"/>
    <connect from_port="document" to_op="Tokenize (2)" to_port="document"/>
    <connect from_op="Tokenize (2)" from_port="document" to_op="Tokenize" to_port="document"/>
    <connect from_op="Tokenize" from_port="document" to_op="Filter Stopwords (German)" to_port="document"/>
    <connect from_op="Filter Stopwords (German)" from_port="document" to_op="Filter Tokens (2)" to_port="document"/>
    <connect from_op="Filter Tokens (2)" from_port="document" to_op="Stem (Snowball)" to_port="document"/>
    <connect from_op="Stem (Snowball)" from_port="document" to_op="Generate n-Grams (Terms)" to_port="document"/>
    <connect from_op="Generate n-Grams (Terms)" from_port="document" to_op="Transform Cases" to_port="document"/>
    <connect from_op="Transform Cases" from_port="document" to_port="document 1"/>
    <portSpacing port="source_document" spacing="0"/>
    <portSpacing port="sink_document 1" spacing="0"/>
    <portSpacing port="sink_document 2" spacing="0"/>
    </process>
    </operator>
    <operator activated="true" class="k_medoids" compatibility="8.1.001" expanded="true" height="82" name="Clustering (2)" width="90" x="313" y="136">
    <parameter key="add_as_label" value="true"/>
    <parameter key="remove_unlabeled" value="true"/>
    <parameter key="k" value="10"/>
    <parameter key="measure_types" value="NumericalMeasures"/>
    </operator>
    <operator activated="true" class="set_role" compatibility="8.1.001" expanded="true" height="82" name="Set Role" width="90" x="447" y="187">
    <parameter key="attribute_name" value="label"/>
    <parameter key="target_role" value="label"/>
    <list key="set_additional_roles"/>
    </operator>
    <operator activated="true" class="concurrency:cross_validation" compatibility="8.1.001" expanded="true" height="145" name="Cross Validation" width="90" x="581" y="187">
    <process expanded="true">
    <portSpacing port="source_training set" spacing="0"/>
    <portSpacing port="sink_model" spacing="0"/>
    <portSpacing port="sink_through 1" spacing="0"/>
    <description align="center" color="yellow" colored="false" height="105" resized="false" width="180" x="172" y="32">PUT YOUR MODEL HERE</description>
    </process>
    <process expanded="true">
    <operator activated="true" class="apply_model" compatibility="8.1.001" expanded="true" height="82" name="Apply Model" width="90" x="45" y="34">
    <list key="application_parameters"/>
    </operator>
    <connect from_port="model" to_op="Apply Model" to_port="model"/>
    <connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
    <portSpacing port="source_model" spacing="0"/>
    <portSpacing port="source_test set" spacing="0"/>
    <portSpacing port="source_through 1" spacing="0"/>
    <portSpacing port="sink_test set results" spacing="0"/>
    <portSpacing port="sink_performance 1" spacing="0"/>
    <portSpacing port="sink_performance 2" spacing="0"/>
    <description align="center" color="yellow" colored="false" height="105" resized="false" width="180" x="218" y="36">PUT YOUR PERFORMANCE OPERATOR HERE</description>
    </process>
    </operator>
    <connect from_op="Read Excel" from_port="output" to_op="Process Documents from Data" to_port="example set"/>
    <connect from_op="Process Documents from Data" from_port="example set" to_op="Clustering (2)" to_port="example set"/>
    <connect from_op="Process Documents from Data" from_port="word list" to_port="result 2"/>
    <connect from_op="Clustering (2)" from_port="cluster model" to_port="result 1"/>
    <connect from_op="Clustering (2)" from_port="clustered set" to_op="Set Role" to_port="example set input"/>
    <connect from_op="Set Role" from_port="example set output" to_op="Cross Validation" to_port="example set"/>
    <connect from_op="Cross Validation" from_port="example set" to_port="result 3"/>
    <portSpacing port="source_input 1" spacing="0"/>
    <portSpacing port="sink_result 1" spacing="0"/>
    <portSpacing port="sink_result 2" spacing="0"/>
    <portSpacing port="sink_result 3" spacing="0"/>
    <portSpacing port="sink_result 4" spacing="0"/>
    <description align="center" color="yellow" colored="false" height="65" resized="true" width="471" x="249" y="440">AS A GENERAL RULE, YOU SHOULD APPLY SOME FEATURE SELECTION TO THE OUTPUT OF TF-IDF, AS IT PRODUCES A LOT OF SIMILAR VECTORS</description>
    </process>
    </operator>
    </process>
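As a side note on the Cross Validation operator in the process above: it shuffles the examples and partitions them into k folds, training on k-1 folds and testing on the held-out one. A minimal sketch of the fold-splitting step (the k and seed values here are arbitrary assumptions):

```python
import random

def kfold_indices(n_examples, k=10, seed=42):
    """Shuffle example indices and partition them into k equal-sized folds,
    as the Cross Validation operator does before training and testing."""
    idx = list(range(n_examples))
    random.Random(seed).shuffle(idx)
    # Take every k-th index starting at offset i -> k disjoint folds.
    return [idx[i::k] for i in range(k)]
```

Each fold serves once as the test set while the remaining folds train whatever model you place inside the operator.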

    Scott

  • NicsonNicson Member Posts: 18 Learner III

    Thank you @sgenzer

    What do you mean by feature selection? I can't really follow that. (This is probably due to my lack of knowledge.)

  • sgenzersgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    hi @Nicson - feature selection is not a single operator; it's a data science technique where you reduce the dimensionality of your data set in order to improve your model.

     

    https://en.wikipedia.org/wiki/Feature_selection

     

    There are numerous operators in RapidMiner to help you do this:

     

    [Screenshots: RapidMiner feature selection operators]
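To make the idea concrete, here is a minimal filter-style selection sketch in plain Python: dropping near-constant TF-IDF columns by variance. The threshold value is an arbitrary assumption; in practice you would use RapidMiner's own operators (such as Remove Useless Attributes or the attribute-weighting operators) inside the process:

```python
def select_by_variance(rows, min_variance=1e-4):
    """Keep only columns whose variance exceeds a threshold -- a simple
    filter-style feature selection. Returns the reduced rows and the
    indices of the kept columns."""
    n = len(rows)
    n_cols = len(rows[0])
    keep = []
    for j in range(n_cols):
        col = [r[j] for r in rows]
        mean = sum(col) / n
        var = sum((x - mean) ** 2 for x in col) / n
        if var > min_variance:
            keep.append(j)             # column carries information
    return [[r[j] for j in keep] for r in rows], keep
```

Wrapper methods (e.g. forward selection with a model in the loop) are more powerful but far more expensive than a filter like this.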

     

    Scott

     
