How to combine several Data Mining-Algorithms and get one final result
Hi all,
I have two datasets with a label that contains two different values (Fraud=yes or Fraud=no).
I noticed that one algorithm is better at detecting fraud ("yes") and the other at detecting non-fraud ("no"). So I want to know how to combine both result sets / algorithms to get a better result. I will aggregate the results into one confusion matrix. Which operator can I use for this task?
Many thanks for your help!
Regards,
Johnson
Answers
Please take a look at this process. It might help you to solve your problem. Best regards
Helge
I want to combine the outputs (confusion matrices) of two processes using the naive Bayes combiner method and generate a resulting confusion matrix.
Please help me.
It is not obvious to me what you want to do. A naive Bayes runs on individual observations; the confusion matrix, however, is aggregated over all observations. Furthermore, the confusion matrix uses information about the label, so this could not be applied to unlabeled data. If you really need that, you can use Performance to Data to get the information into an example set.
Are you sure you don't want to use a naive Bayes on the confidences produced by two different learners? That is possible. A straightforward way of simply averaging the confidences would be to use the Vote operator. If you need to run a naive Bayes on the confidences, you need to use a split inside your cross validation. If needed, I can provide an example process.
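For readers outside RapidMiner, the confidence-averaging idea behind the Vote operator can be sketched in Python with scikit-learn's soft voting. This is only an illustrative analog, not the RapidMiner process itself; the synthetic dataset and the choice of naive Bayes plus a decision tree as the two learners are assumptions standing in for Johnson's two fraud models.

```python
# Hypothetical sketch: combine two classifiers by averaging their class
# confidences (soft voting), then produce one aggregated confusion matrix.
# This mirrors the idea of RapidMiner's Vote operator, not its exact behavior.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Toy binary "fraud" dataset standing in for the real data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Two learners, each presumably stronger on one of the two classes.
clf_a = GaussianNB()
clf_b = DecisionTreeClassifier(max_depth=5, random_state=0)

# voting="soft" averages the predicted class probabilities (confidences)
# of both models before picking the final class.
ensemble = VotingClassifier(
    estimators=[("nb", clf_a), ("tree", clf_b)],
    voting="soft",
)
ensemble.fit(X_train, y_train)

# One aggregated confusion matrix for the combined model.
print(confusion_matrix(y_test, ensemble.predict(X_test)))
```

If one model should count for more than the other, `VotingClassifier` also accepts a `weights` parameter to bias the averaged confidences.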
Cheers,
Martin
Dortmund, Germany