
"Getting CTA DB error message with web crawler - help!"

michael_crowdes Member Posts: 2 Contributor I
edited June 2019 in Help

I'm trying to use the Crawl Web operator in Studio version 7.6.001. As soon as the crawler starts, the log shows "Not able to connect to the CTA DB", and after a while the operation just times out. I can't figure out what's going on. It seems to be related to these specific operators, because I can use others without a problem. Anyone have any ideas?

Answers

  • sgenzer Administrator, Moderator, Employee-RapidMiner, RapidMiner Certified Analyst, Community Manager, Member, University Professor, PM Moderator Posts: 2,959 Community Manager

    Hello @michael_crowdes - welcome to the RapidMiner Community. Could you post your XML in this thread so we can see your process? Please use the </> tool.

    Thanks.

    Scott

  • michael_crowdes Member Posts: 2 Contributor I
    <?xml version="1.0" encoding="UTF-8"?><process version="7.6.001">
      <context>
        <input/>
        <output/>
        <macros/>
      </context>
      <operator activated="true" class="process" compatibility="7.6.001" expanded="true" name="Process">
        <process expanded="true">
          <operator activated="true" class="web:crawl_web_modern" compatibility="7.3.000" expanded="true" height="68" name="Crawl Web" width="90" x="246" y="187">
            <parameter key="url" value="http://parker.com"/>
            <list key="crawling_rules">
              <parameter key="store_with_matching_url" value="*parker.com*"/>
            </list>
            <parameter key="max_crawl_depth" value="3"/>
            <parameter key="retrieve_as_html" value="true"/>
            <parameter key="write_pages_to_disk" value="true"/>
            <parameter key="include_binary_content" value="true"/>
            <parameter key="output_dir" value="C:\Users\533537\Desktop\webPages"/>
            <parameter key="ignore_robot_exclusion" value="true"/>
          </operator>
          <connect from_op="Crawl Web" from_port="example set" to_port="result 1"/>
          <portSpacing port="source_input 1" spacing="0"/>
          <portSpacing port="sink_result 1" spacing="0"/>
          <portSpacing port="sink_result 2" spacing="0"/>
        </process>
      </operator>
    </process>
  • FBT Member Posts: 106 Unicorn

    Hi Michael,

    It seems that you have an error in the regex field of your crawling rules: "*parker.com*" is not a valid regular expression. What exactly was your intention with that expression - to capture everything that contains "parker.com", no matter what comes before or after it? If so, try this expression (without the quotation marks): ".*parker.com.*"
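
    If I understand it correctly, these rules are evaluated as Java regular expressions (Studio itself is Java-based), so you can sanity-check a pattern outside of RapidMiner before putting it into the operator. Here is a minimal sketch - the class name and the sample URLs are just made up for illustration:

    import java.util.regex.Pattern;
    import java.util.regex.PatternSyntaxException;

    public class CrawlRuleRegexDemo {
        public static void main(String[] args) {
            // The rule from the posted process: in regex syntax '*' is a
            // quantifier, so a leading '*' has nothing to repeat and the
            // pattern is rejected outright.
            try {
                Pattern.compile("*parker.com*");
            } catch (PatternSyntaxException e) {
                System.out.println("Invalid pattern: " + e.getMessage());
            }

            // ".*" means "any character, any number of times", so this rule
            // matches every URL containing "parker.com". (Strictly speaking,
            // the unescaped '.' also matches any single character; use
            // "parker\\.com" if you only want to match the literal dot.)
            Pattern rule = Pattern.compile(".*parker.com.*");
            System.out.println(rule.matcher("http://parker.com/products").matches()); // true
            System.out.println(rule.matcher("http://example.org/page").matches());    // false
        }
    }

    The same pattern then goes into your store_with_matching_url rule, just without the Java string escaping.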
