Best practice for allocated memory settings
bojana_trisic
Member Posts: 3 Learner III
in Help
Hello,
We have a medium license for RapidMiner Server (64 GB). What would you recommend for the Server Memory and Job Container memory settings?
I am having a problem with an out-of-memory error, even though I am dealing with relatively non-demanding processes. The process sometimes finishes and sometimes breaks on the same data set.
I suspect there is a problem with the job container, as it has been killed forcefully. After that happened, the job ran fine.
Thank you in advance,
B.
Best Answer
MarcoBarradas Administrator, Employee-RapidMiner, RapidMiner Certified Analyst, Member Posts: 272 Unicorn
@bojana_trisic my current configuration lets the server take up to 11 GB of RAM, and I'm serving 4 web services; 2 of them are recommender systems, and I handle up to 3k calls a day on them.
Just be aware that depending on how you set up your solution, you could create a bottleneck.
If your DB server is the same as the RM Server and all your agents are on the same computer, you might push the server processors to the maximum if your Studio users are working at the same time with a mix of intensive jobs, ETLs, and model trainings. So just be sure you leave some headroom for the OS and anything that you don't control on your server.
And remember that you can also run agents on remote servers, so you might have another server with free RAM that could host your job agents.
Answers
After that, remember that scheduled tasks and processes sent to the server are handled by the queues and their agents.
And each agent takes up to the maximum amount of memory you configured. So if you have 8 agents with 8 GB each, when the server assigns them jobs you'll be using all the memory of your license.
RapidMiner Server memory is used for the apps and web services (again, I let it use up to 12 GB, since most of my work is done through agents).
And you should take into consideration that RM Server works with a backend database, so that DB also demands some RAM depending on the complexity of your queries.
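As a purely illustrative split (the numbers below are made up for the example, not a recommendation), if the RM Server, its database, and one agent all share a single 64 GB machine, a budget could look like this:

    OS and everything else            ~4-6 GB headroom
    Backend database                  ~2-4 GB
    RapidMiner Server (apps/services) ~8-12 GB
    Job containers                    the remainder, e.g. 4 containers x 10 GB

The point is only that the per-container limit times the number of containers that can run at once, plus the Server and DB allocations, should stay under the physical RAM (and the license limit) with some headroom.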
I hope this sheds some light on what to look for.
Let me know if you need any help with the configuration. You could also ask for help from enterprise support; they have helped me a lot during my journey with RapidMiner.
Thank you for your prompt reply.
The server is used exclusively for RM, and we do not have web services or apps currently. 1 container, 1 agent.
How can I check the amount of memory used by a container?
What would you recommend for the Server Memory and Job Container memory settings?
Thank you,
Bojana
You can define the amount of memory under the following path, assuming you are using the default agent:
\RM_HOME\job-agent-home\config
In the configuration file there you'll define the amount of memory each job container can use and also the number of containers your queue can handle.
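As a rough sketch of what that file contains (the exact file and property names can differ between versions, so treat these as assumptions and double-check the docs linked further down):

    # job agent configuration (default agent) -- names are illustrative
    # queue this agent serves
    jobagent.queue.name = DEFAULT
    # number of job containers this agent may run in parallel
    jobagent.container.count = 1
    # memory limit per job container, in MB (e.g. 8 GB)
    jobagent.container.memory-limit = 8192

With 1 agent and 1 container, as in your setup, the per-container limit is effectively the most a single job can use.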
If you go to RM Server, under the execution menu you'll find all the tasks that were executed on your server, and by clicking on the details you can see the amount of memory each job took to execute the process you created.
I can't recommend a perfect setup because that depends on what you are going to do.
For example, I have 4 queues on my server (a quick worst-case tally follows the list):
- Crawler: 10 job containers, and each can take up to 4 GB of RAM, since my crawling is not that intensive
- BIG JOBS: 2 containers of 10 GB for complex queries, transformations, and loops (loops eat a lot of RAM)
- ETL JOBS: 3 containers up to 8 GB; basically I sync DBs with them
- Intensive JOBS: 2 containers up to 20 GB
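As a quick sanity check for a plan like this (just arithmetic on the numbers above, not a sizing rule), you can add up the worst case:

    Crawler:        10 containers x  4 GB = 40 GB
    BIG JOBS:        2 containers x 10 GB = 20 GB
    ETL JOBS:        3 containers x  8 GB = 24 GB
    Intensive JOBS:  2 containers x 20 GB = 40 GB
    Worst case total                      = 124 GB

That total only matters if every container were running a memory-hungry job at the same time, but it is the number to compare against your physical RAM and license limit when deciding how many containers each queue gets.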
If you want to go deeper on the settings and configuration of the job agents, you could go to https://docs.rapidminer.com/latest/legacy/configure/jobs/
If you think I could continue helping you, just @ me and I'll answer as soon as I have some free time.
This gives me some idea of how to create a setup that suits our needs.
I have just one more question: how much memory do you assign to the Server itself (not the containers)?
Regards,
Bojana