Process Log
Hi
I have a process set up such that I am using a SlidingWindowValidation to validate a GridParameterOptimization. I would like to capture various performance statistics using BinominalClassificationPerformance, averaged over each run through the full sliding window, i.e. the average for each parameter set. Using ProcessLog, I can only seem to capture either the last window of the last parameter set, or every window of every parameter set.
Is there any way to collect the average performance? Going a bit further, is it possible to collect the best and worst performing window within a parameter set?
Thanks for your help.
Brent
Answers
hmm, I am not sure if I totally got the point. Could you please post the XML process setup here (from the XML tab) so I can see what you intend to do and check if and how this is possible.
Thanks and cheers,
Ingo
Please see the XML below. To clarify, I have a training set with 1140 examples, a training window of 240, and a test window of 60. Hence the total testing size would be 1140 - 240 = 900 data points, and there would be 15 iterations of the window (900 / 60). I would like to inspect the TP, FP, TN and FN counts over the full 900 data points in the process log for each parameter set (apologies if I caused confusion by saying "average" in the post below).
The process below only delivers what appears to be the last window in the process log: the total of TP, FP, TN and FN comes to 60. If I move the ProcessLog operator directly under the classification performance operator, I get process log data for each of the 15 * 5 parameter-set windows. The totals over these are ultimately what I am after, but logging every window slows things down a bit.
I hope this makes it clear. Thanks for your help.
By the way, how do I copy the xml into a window like I see in other posts?
Brent
Still not sure if I got you right (sorry), but are you looking for the logged cumulated performance values (cumulated over all 900 data points, i.e. only one value for the whole set) for each parameter combination? Hence, when optimizing only C, the result should look like:
precision           recall              f_measure           SVM-C  SVM-Gamma
0.9455537425537426  0.9610703843618139  0.9525370970632324  0.0    1.0
0.9428607226107226  0.9531385952706045  0.94727680367819    5.0    1.0
0.9324453056797544  0.940922229744281   0.935745777848474   10.0   1.0
0.9340506858209258  0.9291517779738292  0.9303849062070936  15.0   1.0
0.9266490907172111  0.9194995150116083  0.9216341147207373  20.0   1.0
I only tried it with RM 4.2, but the following setup produced this. Please note, however, that you would have to repeat the process for other criteria if you want to log more than three.
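For reference, a hypothetical sketch of such a setup (the original XML attachment did not survive this thread; all operator names, parameter keys, and ProcessLog value references below are RapidMiner 4.x-style assumptions and may need adjusting to your actual process):

```xml
<operator name="Root" class="Process">
  <operator name="ExampleSource" class="ExampleSource">
    <parameter key="attributes" value="data.aml"/>
  </operator>
  <operator name="GridParameterOptimization" class="GridParameterOptimization">
    <list key="parameters">
      <parameter key="LibSVMLearner.C" value="0,5,10,15,20"/>
    </list>
    <operator name="SlidingWindowValidation" class="SlidingWindowValidation">
      <parameter key="training_window_width" value="240"/>
      <parameter key="test_window_width" value="60"/>
      <operator name="LibSVMLearner" class="LibSVMLearner"/>
      <operator name="ApplierChain" class="OperatorChain">
        <operator name="ModelApplier" class="ModelApplier"/>
        <operator name="Performance" class="BinominalClassificationPerformance">
          <parameter key="precision" value="true"/>
          <parameter key="recall" value="true"/>
          <parameter key="f_measure" value="true"/>
        </operator>
      </operator>
    </operator>
    <!-- ProcessLog sits inside the optimization but AFTER the validation,
         and its columns reference the validation's generic performance
         values, so one row is written per parameter combination. -->
    <operator name="ProcessLog" class="ProcessLog">
      <list key="log">
        <parameter key="precision" value="operator.SlidingWindowValidation.value.performance1"/>
        <parameter key="recall" value="operator.SlidingWindowValidation.value.performance2"/>
        <parameter key="f_measure" value="operator.SlidingWindowValidation.value.performance3"/>
        <parameter key="SVM-C" value="operator.LibSVMLearner.parameter.C"/>
      </list>
    </operator>
  </operator>
</operator>
```

The essential design point is the placement of ProcessLog and the source of its logged values, not the learner or the concrete parameter grid.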
Another side note: as long as you have not performed some inner windowing or introduced a time lag into your data source, you might consider embedding the windowing inside the validation.
There should be an icon in the message editor with a "#" symbol on it. Pressing it will insert the tags {code} and {/code} (please note that the real tags have to be written with [ and ] instead of { and }). Just put your XML code in between.
Cheers,
Ingo
Thanks for the quick response. Regarding your question ("are you looking for the logged cumulated performance values, cumulated over all 900 data points, i.e. only one value for the whole set, for each parameter combination?"): yes, this is correct. I have run the process below on 4.2 and get the same precision and recall numbers (the f_measure is different???). I then added TP, FP, TN and FN to the process log, expecting that they would total 900 for each parameter set. Instead, the total is 60, which leads me to believe that this relates only to the last sliding window (I confirmed this by logging every window and matching up the last window of each parameter set). This also makes me suspicious that the precision, recall, etc. are calculated on only the last window instead of the full 900.
Let me know if I'm wrong here.
Thanks
Brent
But you did notice the slight difference in the setups, yes? I admit they are sooo subtle...
I was not logging the results from the BinominalClassificationPerformance operator but from the SlidingWindowValidation (via the generic performance names "performance1", "performance2", and "performance3"). The sliding window validation reports the total; the performance operator reports only the last calculated value (what else should it report?). Please adapt your setup accordingly and you should get the total results as in my example above.
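The whole difference boils down to which operator the ProcessLog columns point at. A hedged sketch of the two variants (the operator names and the generic "performance1" value reference are assumptions in the RapidMiner 4.x style, not copied from the actual processes in this thread):

```xml
<list key="log">
  <!-- Variant A: references the performance operator itself;
       this logs only the LAST window's result (totals of 60). -->
  <parameter key="precision" value="operator.Performance.value.precision"/>

  <!-- Variant B: references the validation's generic performance value;
       this logs the result cumulated over ALL windows (totals of 900). -->
  <parameter key="precision" value="operator.SlidingWindowValidation.value.performance1"/>
</list>
```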
Cheers,
Ingo
I think I understand now - the performance operator reports only the last window calculated.
Thanks
Brent