"Interpretation/Meaning of Performance Measure"
choose_username
Member Posts: 33 Contributor II
Hello,
I have built a workflow that is supposed to classify examples with a decision tree. I also used a Performance measure operator. The result lists accuracy, precision, and recall.
What do these measures mean? Is there a difference between accuracy and precision, and what is the meaning of recall?
greetings
user
Answers
Those measures are all taken from a confusion matrix (http://en.wikipedia.org/wiki/Confusion_matrix). Using a = true positives, b = false positives, c = false negatives, and d = true negatives:
The ratio of correctly classified examples to the number of all examples is called accuracy and is calculated as (a+d)/(a+b+c+d).
The ratio of true positives to all examples predicted as positive is called precision and is calculated as a/(a+b).
The ratio of true positives to all actually positive examples is called recall and is calculated as a/(a+c).
Accuracy is also available in the case of more than two classes, while precision and recall are only defined for two-class problems (you can, however, always calculate a per-class precision and recall).
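A minimal sketch of these formulas in Python (illustrative only, not RapidMiner code; the counts a, b, c, d follow the definitions above and the example numbers are just placeholders):

    # Confusion-matrix counts for a two-class problem:
    # a = true positives, b = false positives,
    # c = false negatives, d = true negatives.

    def accuracy(a, b, c, d):
        # Correctly classified examples over all examples.
        return (a + d) / (a + b + c + d)

    def precision(a, b):
        # True positives over all examples predicted as positive.
        return a / (a + b)

    def recall(a, c):
        # True positives over all actually positive examples.
        return a / (a + c)

    # Hypothetical example: 40 TP, 10 FP, 5 FN, 45 TN
    print(accuracy(40, 10, 5, 45))   # 0.85
    print(precision(40, 10))         # 0.8
    print(recall(40, 5))             # ~0.889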
Cheers,
Ingo
Greetings
user
If you have more than two classes and are looking for a single-number evaluation measure to optimize, there is not much left besides accuracy and kappa (and a few others).
Cheers,
Ingo
Thanks for the information. I will keep that in mind when working on it.
greetings
User
I always think of it as a normalized accuracy score:
Kappa = 0: all predictions are the majority class.
Kappa = 1: all predictions are correct.
Kappa = -1: all predictions are wrong.
If you only know that the accuracy is 99%, you don't really know much, because you might have a dataset with 9,900 negative and only 100 positive examples. In that case you would only be interested in systems with an accuracy greater than 99%.
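As a minimal sketch (assuming the standard definition of Cohen's kappa, kappa = (observed accuracy - expected chance accuracy) / (1 - expected chance accuracy); this is not the RapidMiner source), the 9,900/100 scenario looks like this:

    def cohens_kappa(a, b, c, d):
        # a = true positives, b = false positives,
        # c = false negatives, d = true negatives.
        n = a + b + c + d
        observed = (a + d) / n  # plain accuracy
        # Chance agreement, estimated from the row/column marginals.
        expected = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)
        return (observed - expected) / (1 - expected)

    # Always predicting the majority (negative) class on a dataset with
    # 9,900 negatives and 100 positives: accuracy is 99%, but kappa is 0
    # because that agreement is exactly what chance alone would give.
    print(cohens_kappa(a=0, b=0, c=100, d=9900))   # 0.0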
Here it is (fresh from the source code):
Cheers,
Ingo