How do I get a higher confidence of predicting true?
Hello,
I have a dataset of 4,000 rows of customers who bought an insurance policy, and I am trying to find the best 1,000 potential buyers in another dataset based on that first dataset. I used optimization with cross-validation and Naive Bayes inside, and it correctly predicted 112 potential buyers; however, I know there are still more. I have tried many different things, but I end up either getting the same potential buyers or fewer, as my confidence for true goes way down. Is there a specific operator, or something to change in the optimization process, that might give me better confidence or higher sensitivity for true when predicting this?
Thanks
Answers
The prediction is just an additional attribute created by Apply Model. If you want the 1000 most likely buyers, just sort by the confidence(True) attribute descending and filter the example range 1 to 1000. Many of these will have a prediction of False, but still a higher likelihood than the other 3000.
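To make the idea concrete, here is a minimal sketch in Python (not RapidMiner itself) of ranking scored prospects by the positive-class probability and keeping the top 1,000. The DataFrame and the "confidence_true" column name are hypothetical stand-ins for the confidence(True) attribute that Apply Model produces:

```python
import numpy as np
import pandas as pd

# Hypothetical scored prospects: 4000 rows with a probability of buying.
rng = np.random.default_rng(0)
scored = pd.DataFrame({
    "customer_id": np.arange(4000),
    "confidence_true": rng.random(4000),  # stand-in for confidence(True)
})

# Sort descending by confidence and keep the 1000 most likely buyers,
# regardless of whether the hard prediction would be True or False.
top_1000 = scored.sort_values("confidence_true", ascending=False).head(1000)
print(top_1000.head())
```

In RapidMiner terms this corresponds to Sort (descending on confidence(True)) followed by Filter Example Range (1 to 1000).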
Otherwise, the Apply Threshold operator belongs *after* applying the model to the test set, if I understand your process correctly.
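As a rough illustration of what applying a threshold after scoring looks like (again a sketch, not the RapidMiner operator itself), with illustrative confidence values and an assumed threshold; lowering the threshold below 0.5 trades precision for higher sensitivity on the True class:

```python
import numpy as np

confidences = np.array([0.82, 0.41, 0.27, 0.65, 0.12])  # hypothetical confidence(True) scores
threshold = 0.3  # assumed value; choose it on validation data, not the test set

# Re-label predictions using the custom threshold instead of the default 0.5.
predictions = np.where(confidences >= threshold, "True", "False")
print(list(zip(confidences, predictions)))
```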
Regards,
Balázs