
Web Crawl Memory Management

Datadude Member Posts: 9 Contributor II
I tried to crawl a fairly large site recently, and despite giving my RapidMiner process 1024M of heap memory, the app eventually crashed with a memory exception.  I was using the Crawl Web operator.  I think what I probably need to do is store the URLs to follow in a database or in a file rather than trying to hold them all in memory.  Does the Process Documents from Web operator enable that functionality?  It seems to have pretty much the same link-following capability as the Crawl Web operator.  Can someone confirm?

Answers

  • MariusHelf RapidMiner Certified Expert, Member Posts: 1,869 Unicorn
    Hi,

    1024M of memory is not that much - maybe you should increase that value. Furthermore, did you enable "add pages as attributes"? In that case, the contents of every page are kept in memory. For large crawling projects that can easily consume all your memory, even on large machines. Instead, disable that parameter and use "write pages into files".

    Best regards,
    Marius
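    To illustrate the idea behind the question (RapidMiner's operators handle this internally, so the names below are purely illustrative and not part of any RapidMiner API): the general technique of keeping the URL frontier on disk instead of in heap memory can be sketched with a small SQLite-backed queue. A minimal sketch, assuming deduplication by URL and a simple pending/done flag:

    ```python
    # Sketch of a disk-backed crawl frontier: URLs to visit live in an
    # SQLite table rather than in an in-memory set/queue, so memory use
    # stays flat regardless of how many links are discovered.
    import sqlite3


    class DiskFrontier:
        def __init__(self, path=":memory:"):  # pass a file path for real use
            self.db = sqlite3.connect(path)
            self.db.execute(
                "CREATE TABLE IF NOT EXISTS frontier ("
                "url TEXT PRIMARY KEY, done INTEGER DEFAULT 0)"
            )

        def add(self, url):
            # INSERT OR IGNORE deduplicates already-seen URLs on disk
            self.db.execute(
                "INSERT OR IGNORE INTO frontier (url) VALUES (?)", (url,)
            )
            self.db.commit()

        def next(self):
            # Fetch one pending URL and mark it done; None when exhausted
            row = self.db.execute(
                "SELECT url FROM frontier WHERE done = 0 LIMIT 1"
            ).fetchone()
            if row is None:
                return None
            self.db.execute(
                "UPDATE frontier SET done = 1 WHERE url = ?", (row[0],)
            )
            self.db.commit()
            return row[0]


    frontier = DiskFrontier()
    frontier.add("http://example.com/")
    frontier.add("http://example.com/")       # duplicate, ignored
    frontier.add("http://example.com/about")
    ```

    The same principle applies to page contents: writing each fetched page straight to a file (as Marius suggests via "write pages into files") rather than accumulating them in the result set keeps the heap footprint bounded.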