Optimize Selection (FORWARDS): what is it that fills up the memory?
I'm running a forward selection with a linear regression: 300/300 (T/F) cases with 800 attributes, and a 3-fold X-Validation (Parallel) running on 8 threads.
Within a couple hundred tryouts with just one attribute, memory gets maxed out at about 10 GB and execution time increases exponentially; maybe these pictures help to understand:
I was wondering why this is happening, as there seems to be some kind of garbage data inescapably and exponentially filling my memory.
I thought maybe the X-Val subsets were left in memory and that was the problem, but it wasn't: no X-Val and still the same issue.
I tried a Free Memory operator after the X-Validation (inside Optimize Selection), but it seems to make things worse.
I can't see how this memory-consumption behaviour could be normal... there's something wrong, right? There must be something I can do about this.
Thanks a lot for your insight.
Regards.
Answers
I wonder what happens at around 4750 validations...
Can you please copy your process setup (XML) here?
Regards,
Marco
Some metadata:
800 attributes, binominal classification, 300/300 examples. This is the forward selection via the Optimize Selection operator; it's just the default setup, except that show stop dialog and user result individual selection are enabled. Everything else is the same as before.
On the other hand, I'm running an evolutionary selection with roughly 400 attributes, and it's having no memory problems at all: TOTAL is stuck at 700 MB, while the forward selection overloaded it at 11 GB. Also, time vs. validation count is linear.
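For reference, here is a minimal sketch of the forward setup as process XML. This is a reconstruction from the description, not the original process: operator class names, parameter keys, port wiring, version number, and the stand-in data set are assumptions, and a plain X-Validation stands in for the parallel one.
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.3.015">
  <context>
    <input/>
    <output/>
    <macros/>
  </context>
  <operator activated="true" class="process" compatibility="5.3.015" expanded="true" name="Process">
    <process expanded="true">
      <!-- stand-in source: any 300/300 binominal set with 800 regular attributes -->
      <operator activated="true" class="retrieve" compatibility="5.3.015" expanded="true" name="Retrieve Data">
        <parameter key="repository_entry" value="//Samples/data/Sonar"/>
      </operator>
      <!-- forward direction, stop dialog and user result individual selection enabled -->
      <operator activated="true" class="optimize_selection" compatibility="5.3.015" expanded="true" name="Optimize Selection">
        <parameter key="selection_direction" value="forward"/>
        <parameter key="show_stop_dialog" value="true"/>
        <parameter key="user_result_individual_selection" value="true"/>
        <process expanded="true">
          <!-- 3 folds; plain X-Validation standing in for the parallel variant -->
          <operator activated="true" class="x_validation" compatibility="5.3.015" expanded="true" name="Validation">
            <parameter key="number_of_validations" value="3"/>
            <process expanded="true">
              <operator activated="true" class="linear_regression" compatibility="5.3.015" expanded="true" name="Linear Regression"/>
              <connect from_port="training" to_op="Linear Regression" to_port="training set"/>
              <connect from_op="Linear Regression" from_port="model" to_port="model"/>
            </process>
            <process expanded="true">
              <operator activated="true" class="apply_model" compatibility="5.3.015" expanded="true" name="Apply Model"/>
              <operator activated="true" class="performance" compatibility="5.3.015" expanded="true" name="Performance"/>
              <connect from_port="model" to_op="Apply Model" to_port="model"/>
              <connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
              <connect from_op="Apply Model" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
              <connect from_op="Performance" from_port="performance" to_port="averagable 1"/>
            </process>
          </operator>
          <connect from_port="example set" to_op="Validation" to_port="training"/>
          <connect from_op="Validation" from_port="averagable 1" to_port="performance"/>
        </process>
      </operator>
      <connect from_op="Retrieve Data" from_port="output" to_op="Optimize Selection" to_port="example set in"/>
      <connect from_op="Optimize Selection" from_port="example set out" to_port="result 1"/>
    </process>
  </operator>
</process>
For the evolutionary comparison run, the outer operator would presumably be Optimize Selection (Evolutionary) (class optimize_selection_evolutionary, an assumption) wrapped around the same validation subprocess.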
Thanks!!
I have created an issue in our internal tracker. Unfortunately there is not much else I can do at this point :-\
Regards,
Marco
Here is the process I used, which doesn't seem to cause memory problems:
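The attached XML is not preserved in the thread; as a purely hypothetical stand-in, it would be a default forward Optimize Selection around the same 3-fold validation as sketched earlier, with every specific below an assumption rather than the actual process:
<!-- Hypothetical reconstruction of the missing attachment; class names,
     parameters, and wiring are assumptions, not the actual process. -->
<operator activated="true" class="optimize_selection" compatibility="5.3.015" expanded="true" name="Optimize Selection">
  <parameter key="selection_direction" value="forward"/>
  <process expanded="true">
    <!-- same 3-fold X-Validation with Linear Regression as in the sketch above -->
  </process>
</operator>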