"[Solved] Performance measurement with trend direction"
Dear all,
I am looking for a certain kind of performance measurement. "Relative error", for example, gives an idea of the degree to which the prediction fits the label. But in my particular case I also want to know whether the prediction over- or underestimates the label.
A workaround might be to use something like "average(prediction) - average(label)" in addition to the relative error. But of course it would be better to have this in one operator.
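To make the workaround concrete, here is a minimal sketch in Python of the two numbers I have in mind (just the arithmetic, not a RapidMiner operator; label and prediction are assumed to be plain numeric arrays):

import numpy as np

def relative_error(label, prediction):
    # Average relative error: magnitude of the fit, blind to the sign.
    return np.mean(np.abs(prediction - label) / np.abs(label))

def signed_bias(label, prediction):
    # average(prediction) - average(label):
    # > 0 means overestimation, < 0 means underestimation.
    return np.mean(prediction) - np.mean(label)

label = np.array([10.0, 12.0, 8.0, 11.0])
prediction = np.array([11.0, 12.5, 8.5, 11.5])

print(relative_error(label, prediction))  # how far off, regardless of direction
print(signed_bias(label, prediction))     # positive here: the model overestimates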
Please let me know your ideas...
Kind regards
Sachs
Answers
It seems that I can have multiple performance criteria if I use the attached setup. But in this case I have a general question of understanding:
1) Which of the two performance operators is used to train the model? (Or can it be more than one?)
I thought that validation worked like this: use the performance to adapt the SVM > apply the model > evaluate the performance > back to the first step.
2) Is there a way to log the standard deviation of the performance measure which is shown in the result view as well?
Kind regards
Sachs
The training of the model is completely independent of the chosen performance measure - the algorithm (in your case, the SVM) always uses the same method to create the model, and the performance operators are only used to estimate the quality of the result. A detailed description of Cross Validation can be found here: http://en.wikipedia.org/wiki/Cross-validation_(statistics)#K-fold_cross-validation
Furthermore, you are currently logging the performance of each iteration of the X-Validation. Usually, you do not want to do that, but are only interested in the performance of the entire X-Validation. For that, you have to place the Log operator outside of the X-Validation. Then you can log the final performance by logging the "performance" value of the Validation operator. The standard deviation is available as the "deviation" value of the same operator.
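To illustrate that the training is independent of the performance measure, here is a minimal cross-validation sketch in Python with scikit-learn (an analogy, not the RapidMiner operators themselves): the learner is trained the same way in every fold, the measure is only computed on the hold-out data, and the mean and standard deviation over the folds correspond to the "performance" and "deviation" values of the Validation operator.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVR

# Small synthetic regression problem, just for the sketch.
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = 3.0 + X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

fold_errors = []
for train_idx, test_idx in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    model = SVR().fit(X[train_idx], y[train_idx])          # training: always the same
    pred = model.predict(X[test_idx])
    rel_err = np.mean(np.abs(pred - y[test_idx]) / np.abs(y[test_idx]))  # evaluation only
    fold_errors.append(rel_err)

print("performance:", np.mean(fold_errors))  # what you would log outside the X-Validation
print("deviation:  ", np.std(fold_errors))   # standard deviation over the folds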
You can easily create custom performance measures: you can perform arbitrary operations on the output of Apply Model, e.g. with Aggregate and Generate Attributes, and then use the Extract Performance operator to provide a value of the resulting example set as a performance value.
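As a rough sketch of that idea in Python (with pandas standing in for the example set; the column names are only assumptions): derive a new attribute from label and prediction, aggregate it, and use the aggregated number as the performance value.

import pandas as pd

def custom_performance(scored: pd.DataFrame) -> float:
    # Signed relative error as a custom performance value:
    # positive -> predictions overestimate the label on average.
    scored = scored.assign(rel_error=(scored["prediction"] - scored["label"]) / scored["label"])  # Generate Attributes
    return scored["rel_error"].mean()                                                             # Aggregate + Extract Performance

scored_set = pd.DataFrame({
    "label":      [10.0, 12.0, 8.0, 11.0],
    "prediction": [11.0, 12.5, 8.5, 11.5],
})
print(custom_performance(scored_set))  # positive value -> overestimation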
Best regards,
Marius
Hi Marius,
Thank you for all! This helps a lot!
Kind regards
Sachs
PS: Here is my humble contribution to this topic. I set up a sample process as described above which does an individual performance calculation. For anyone who might be in need of it...
I was just fooling around when I came across this:
To my understanding it should be possible to extract the results of both performance operators. However, I always get just the same value twice...
Best regards
Sachs
- Connect the per output of the first performance operator to the per input of the second performance operator
- Connect the second per output to the first ave output of the Validation
- Log performance and performance2 instead of performance and performance1 (the idea behind this chaining is sketched in code below)
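In case it helps to see why both values become available, here is a hypothetical analogy in Python (not RapidMiner's API): each performance step receives the performance vector of the previous one and adds its own criterion, so the second output carries both and either can be logged by name.

from typing import Dict, Optional

def performance_step(name: str, value: float,
                     previous: Optional[Dict[str, float]] = None) -> Dict[str, float]:
    # Add one criterion to the incoming performance vector (the "per" port).
    vector = dict(previous) if previous else {}
    vector[name] = value
    return vector

# First operator computes e.g. the relative error, the second the bias;
# the second output is what gets connected to the Validation's ave port.
first = performance_step("relative_error", 0.062)
second = performance_step("bias", 0.625, previous=first)

print(second["relative_error"], second["bias"])  # both criteria are now available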
Best regards,
Marius
Though I don't understand the underlying logic, it works pretty well the way you described it.
Thank you!
Cheers
Sachs
I got the part about passing a value into another operator. What puzzles me is that the performance and performance1 values are the same, and that a performance value delivered to the avg2 port cannot be logged from either performance operator (1 or 2).
Cheers
Sachs