
Increasing Allocated Memory of AI Hub via RapidMiner Deployment Administration

Melihck Member Posts: 8 Learner I
Hi nice people of RM;

After hours and days I finally connected my RM Studio to a VM in Azure with 64 GB RAM and an 8-core CPU, and I'm excited to run more complex tasks.

But an awkward issue stands just one step ahead of me: editing the .env file via RapidMiner Deployment Administration. The default values look like this: SERVER_MAX_MEMORY=2048M and JOBAGENT_CONTAINER_MEMORYLIMIT=8192

I am trying to edit the .env file to increase the memory limit, but each time I get an "Invalid Configuration" error. I am trying to type:
SERVER_MAX_MEMORY=8192M
JOBAGENT_CONTAINER_MEMORYLIMIT=16384

Even if I just delete a "#" sign from the .env file, or copy everything in it and paste it back unchanged, the same error still occurs.

thanks in advance, your name will be in the history of science, I promise :)

Answers

  • kayman Member Posts: 662 Unicorn
    edited May 2021
    Your server max memory really doesn't need more than the default 2048M, as this is just to run the 'container' logic. In reality you'll consume only about half of it.

    The configuration depends on your license, not on the amount of memory you can stuff in your server.

    So assume you have a license allowing you to run 64 GB in total, and you want to be able to run 4 processes in parallel (job agents). Your .env file would then allocate 2 GB for the server and ((64 GB total − 2 GB already taken) / 4 agents), so about 15.5 GB per agent.

    Any combination that exceeds your license total will show the cruel configuration error. So stick to 2048M for the server, and just increase the agent allocation (based on the total number of agents) until you hit the error.
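    Kayman's arithmetic can be sanity-checked with a quick sketch (the function name and the plain-MB unit are my own assumptions, not part of AI Hub; the unit is inferred from the 8192 default of JOBAGENT_CONTAINER_MEMORYLIMIT):

```python
def per_agent_memory_mb(license_total_mb, server_mb, agents):
    """Memory (in MB) each Job Container can get without exceeding the license."""
    return (license_total_mb - server_mb) // agents

# 64 GB license, 2048M reserved for the server, 4 parallel job agents:
print(per_agent_memory_mb(64 * 1024, 2048, 4))  # -> 15872, i.e. about 15.5 GB per agent
```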

    You could also create different queues: your default may have 4 agents, another queue may have 2 agents so its memory per agent can be twice as high, and another one can for instance have just one agent that gets the full 62 GB.

    As long as you run just one queue at a time, this stays within the license logic.
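    To illustrate the queue idea in .env terms (a hedged sketch using the variable names from above; the values are illustrative, only one variant would be active at a time, and how per-queue settings are actually wired up depends on your deployment):

```
# default queue: 4 agents, ~15.5 GB (15872 MB) each
JOBAGENT_CONTAINER_COUNT=4
JOBAGENT_CONTAINER_MEMORYLIMIT=15872

# or: 2 agents with twice the memory each
# JOBAGENT_CONTAINER_COUNT=2
# JOBAGENT_CONTAINER_MEMORYLIMIT=31744

# or: a single agent with the full remaining 62 GB (63488 MB)
# JOBAGENT_CONTAINER_COUNT=1
# JOBAGENT_CONTAINER_MEMORYLIMIT=63488
```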



  • Melihck Member Posts: 8 Learner I
    Hi kayman, thanks for the detailed explanation.

    Yes, my license allows me to have 64 GB. And I want to run only one process :)

    I already tried many ways to edit the .env file, even reducing JOBAGENT_CONTAINER_MEMORYLIMIT, but I cannot change anything in there.
    The .env file is somehow protected, I guess.

    Sharan_Gadi described a workaround here:
    https://community.rapidminer.com/discussion/56411/how-can-i-change-the-maximum-memory-setting-for-rm-server-on-azure

    But it would be much better if you know of an easier way to do that.
  • kayman Member Posts: 662 Unicorn
    Also check your agent configuration files (home - config).
    I'm not sure about the correct path, but one of the files contains the settings of your default queue. It's this file that specifies how many agents it will run (change that to 1 in your scenario) and sets the allocated memory.
  • aschaferdiek Employee-RapidMiner, Member Posts: 76 RM Engineering
    edited June 2021
    Hi there. It's not correct that the SERVER_MAX_MEMORY property is just responsible for spawning containers. This property determines how much memory is available for all execution directly within AI Hub (the first type of execution), which is primarily Web Services. In addition, it determines the memory for the entire web server which is started on AI Hub startup.

    The second type is the RTS application, which is able to provide similar functionality to Web Services. Please see our documentation: https://docs.rapidminer.com/latest/scoring-agent/.

    The third, and probably the execution type you would like to increase memory for, is "batch jobs": processes you schedule or execute directly from the drop-down menu in AI Hub/Studio. They are handled in a distributed fashion by Job Agents. Job Agents themselves are only responsible for managing/spawning Job Containers (and for this only allocate about 128 to 256 MB of RAM). Job Containers do the actual work and run RapidMiner processes. You already found the correct property: JOBAGENT_CONTAINER_MEMORYLIMIT sets the maximum allocated memory of each Job Container spawned by the Job Agent (JOBAGENT_CONTAINER_COUNT determines how many; the default is 1).

    When in a Docker environment, you should probably use environment variables to set those properties. In a native install, you can directly change the agent.properties file @kayman mentioned. For the bundled Job Agent, it resides in rapidminer-server-home/job-agent-home/home/agent.properties; otherwise it's located wherever you extracted the downloaded Job Agent.
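    For the Docker case, a docker-compose override might look like the following (a hedged sketch: the service name rm-job-agent is hypothetical and must match your own compose file; the variable names are the ones discussed in this thread):

```
services:
  rm-job-agent:   # hypothetical service name
    environment:
      - JOBAGENT_CONTAINER_COUNT=1
      - JOBAGENT_CONTAINER_MEMORYLIMIT=16384
```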

