How can I crawl more than one web page?
pix123
Member Posts: 27 Contributor II
Hi there, I am looking to collect text data from a movie's reviews. There are several pages of reviews and I would like to collect the first 10. I have set up a very basic web crawler because I want the data as .txt files for text pre-processing and mining, rather than re-crawling each time. However, I only seem to pick up the first page of reviews. Please can you take a look and advise?
<?xml version="1.0" encoding="UTF-8"?><process version="9.0.003">
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="6.0.002" expanded="true" name="Process">
<process expanded="true">
<operator activated="true" class="web:crawl_web" compatibility="9.0.000" expanded="true" height="68" name="Crawl Web" width="90" x="112" y="75">
<parameter key="url" value="https://www.rottentomatoes.com/m/chef_2014/reviews/"/>
<list key="crawling_rules">
<parameter key="store_with_matching_url" value=".*chef_2014.*"/>
<parameter key="follow_link_with_matching_url" value=".*chef_2014.*"/>
</list>
<parameter key="output_dir" value="C:\rottentomatoes reviews &amp; Clustering\Rapidminer Output"/>
<parameter key="max_pages" value="10"/>
<parameter key="max_depth" value="4"/>
<parameter key="max_page_size" value="1000"/>
<parameter key="user_agent" value="test"/>
</operator>
<connect from_op="Crawl Web" from_port="Example Set" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
</process>
</operator>
</process>
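One likely reason only the first page is stored: Rotten Tomatoes serves later review pages through a `?page=N` query parameter (and, on newer versions of the site, loads them via JavaScript), so there may be no plain links for the crawler's rules to follow. A workaround is to build the page URLs yourself and retrieve them directly. A minimal sketch in Python, assuming the `?page=N` pattern still applies:

```python
# Hypothetical workaround: build the paginated review URLs explicitly
# instead of relying on Crawl Web to discover them through links.
def review_page_urls(base_url: str, pages: int) -> list[str]:
    # Rotten Tomatoes has historically paginated reviews with a ?page=N
    # query parameter; verify this against the live site before relying on it.
    return [f"{base_url}?page={n}" for n in range(1, pages + 1)]

urls = review_page_urls("https://www.rottentomatoes.com/m/chef_2014/reviews/", 10)
```

The resulting list can be saved as a text file and fed to the Get Pages operator, one URL per row.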
Comments
Here is a way of doing it that uses Get Pages inside a Loop, and that works just fine.
Lindon Ventures
Data Science Consulting from Certified RapidMiner Experts
Scott
FYI, @sgenzer I did confirm with Helge that this is a bug with the Crawl Web operator. It looks like it is related to https pages (which is a shame since that is like 90% of the web these days).
It is taking me ages because of my current travel plans (guess what? more delays!), but I have plans to release it at some point in January.
- Exception: java.lang.NoClassDefFoundError
Please ignore - solved the problem, unrelated to new RM version
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="6.0.002" expanded="true" name="Process">
<parameter key="logverbosity" value="init"/>
<parameter key="random_seed" value="2001"/>
<parameter key="send_mail" value="never"/>
<parameter key="notification_email" value=""/>
<parameter key="process_duration_for_mail" value="30"/>
<parameter key="encoding" value="SYSTEM"/>
<process expanded="true">
<operator activated="true" class="concurrency:loop" compatibility="9.2.001" expanded="true" height="82" name="Loop" width="90" x="112" y="136">
<parameter key="number_of_iterations" value="2"/>
<parameter key="iteration_macro" value="iteration"/>
<parameter key="reuse_results" value="false"/>
<parameter key="enable_parallel_execution" value="false"/>
<process expanded="true">
<operator activated="true" class="read_csv" compatibility="9.2.001" expanded="true" height="68" name="Read CSV" width="90" x="112" y="187">
<parameter key="csv_file" value="C:\Users\Funk\Desktop\pages_rotten.txt"/>
<parameter key="column_separators" value=";"/>
<parameter key="trim_lines" value="false"/>
<parameter key="use_quotes" value="true"/>
<parameter key="quotes_character" value="&quot;"/>
<parameter key="escape_character" value="\"/>
<parameter key="skip_comments" value="true"/>
<parameter key="comment_characters" value="#"/>
<parameter key="starting_row" value="1"/>
<parameter key="parse_numbers" value="true"/>
<parameter key="decimal_character" value="."/>
<parameter key="grouped_digits" value="false"/>
<parameter key="grouping_character" value=","/>
<parameter key="infinity_representation" value=""/>
<parameter key="date_format" value=""/>
<parameter key="first_row_as_names" value="true"/>
<list key="annotations"/>
<parameter key="time_zone" value="SYSTEM"/>
<parameter key="locale" value="English (United States)"/>
<parameter key="encoding" value="windows-1252"/>
<parameter key="read_all_values_as_polynominal" value="false"/>
<list key="data_set_meta_data_information">
<parameter key="0" value="LINKS.true.polynominal.attribute"/>
</list>
<parameter key="read_not_matching_values_as_missings" value="false"/>
<parameter key="datamanagement" value="double_array"/>
<parameter key="data_management" value="auto"/>
</operator>
<operator activated="true" class="web:retrieve_webpages" compatibility="9.0.000" expanded="true" height="68" name="Get Pages" width="90" x="246" y="187">
<parameter key="link_attribute" value="LINKS"/>
<parameter key="random_user_agent" value="false"/>
<parameter key="connection_timeout" value="10000"/>
<parameter key="read_timeout" value="10000"/>
<parameter key="follow_redirects" value="true"/>
<parameter key="accept_cookies" value="none"/>
<parameter key="cookie_scope" value="global"/>
<parameter key="request_method" value="GET"/>
<parameter key="delay" value="none"/>
<parameter key="delay_amount" value="1000"/>
<parameter key="min_delay_amount" value="0"/>
<parameter key="max_delay_amount" value="1000"/>
</operator>
<operator activated="true" class="text:data_to_documents" compatibility="8.1.000" expanded="true" height="68" name="Data to Documents" width="90" x="380" y="187">
<parameter key="select_attributes_and_weights" value="false"/>
<list key="specify_weights"/>
</operator>
<connect from_op="Read CSV" from_port="output" to_op="Get Pages" to_port="Example Set"/>
<connect from_op="Get Pages" from_port="Example Set" to_op="Data to Documents" to_port="example set"/>
<connect from_op="Data to Documents" from_port="documents" to_port="output 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_output 1" spacing="0"/>
<portSpacing port="sink_output 2" spacing="0"/>
</process>
</operator>
<connect from_op="Loop" from_port="output 1" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
</process>
</operator>
</process>
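For anyone reproducing this process: the pages_rotten.txt file read by the Read CSV operator above needs a header row matching the `LINKS` attribute, followed by one URL per line. A hypothetical example (the exact URLs are an assumption):

```
LINKS
https://www.rottentomatoes.com/m/chef_2014/reviews/?page=1
https://www.rottentomatoes.com/m/chef_2014/reviews/?page=2
https://www.rottentomatoes.com/m/chef_2014/reviews/?page=3
```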
Scott
Does anyone know how to iterate via the [Get Pages] operator correctly?
I was wondering how to apply the same approach to a job post site like Indeed, as I'm trying to follow along with one of the Academy lessons (regarding text analytics). My scenario would be to crawl a job post site for a job title, say "Data Scientist". Since the Crawl Web operator in RapidMiner has issues, I thought maybe you could step in and help out. Much appreciated, thanks.
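The same URL-list trick sketched above should carry over: most job boards paginate search results with query parameters, so you can generate the result-page URLs up front and feed them to Get Pages. A hedged Python sketch (Indeed's public search has historically used `q` and `start` parameters, but verify against the live site, and check its terms of use before crawling):

```python
from urllib.parse import urlencode

# Hypothetical sketch: build paginated search URLs for a job board.
# The q/start parameter names are an assumption based on Indeed's
# historical URL scheme; adjust them for the site you are targeting.
def job_search_urls(base: str, query: str, pages: int, per_page: int = 10) -> list[str]:
    return [f"{base}?{urlencode({'q': query, 'start': n * per_page})}"
            for n in range(pages)]

urls = job_search_urls("https://www.indeed.com/jobs", "Data Scientist", 3)
```

Save the list as a text file with a header row and point the Read CSV / Get Pages loop at it, exactly as in the process posted above.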