KNN prediction performance
Hello All,
I am new here and I need your help. I have a training dataset and a test dataset to be classified via kNN classification: Training ----> kNN ---> Apply Model, and Test ---> Apply Model. I do not want to use cross-validation or split validation, but can someone tell me how to measure the performance of kNN regarding the prediction on my test dataset? I don't know how to connect the output of Apply Model as an input to Performance (Classification) to find out how good the prediction of my classification is.
Thanks
Answers
Hi,
RapidMiner contains some very useful tutorials and explanations for beginners. It's a bit like the tutorials you have in games nowadays that explain what to do. I would recommend checking them out: click on New Process, select "Learn", and then start from the beginning to understand the principal workings and meanings of ports, parameters, colors, etc.
Chapter 3, "Model, Scoring and Validation", will match your question exactly.
Greetings
Sebastian
Thanks Sebastian,
I've checked some of them, and I got results when using kNN as a subprocess of Cross Validation or Split Validation, but I have no idea how to evaluate the score of the prediction on the data shown in the attached pic.
But anyway, thanks for your follow-up.
Hi,
the per port of Cross Validation is already delivering the performance.
Otherwise, you can simply connect the lab port of Apply Model (2) with the lab port of Performance (2) and look at the result. A rough equivalent of that flow is sketched below.
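If it helps to see the same idea outside RapidMiner, here is a minimal scikit-learn sketch of Retrieve ---> k-NN ---> Apply Model ---> Performance. The file names, the label column name ("class"), and k=5 are placeholders for illustration, not taken from your process.

```python
import pandas as pd
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, classification_report

# Retrieve: load the training and test data (placeholder file names).
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Separate the features from the label column ("class" is a placeholder).
X_train, y_train = train.drop(columns=["class"]), train["class"]
X_test, y_test = test.drop(columns=["class"]), test["class"]

# k-NN operator: train the model on the training set (k=5 as an example).
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

# Apply Model: score the test set with the trained model.
y_pred = model.predict(X_test)

# Performance (Classification): compare predictions against the true labels.
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```

Here accuracy_score plays the role of the Performance operator: it compares the predicted labels with the true labels of the test set, which is exactly what flows through the lab port.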
~Martin
Dortmund, Germany
Hi,
Thanks for your reply. You're right, cross-validation is already giving us the performance of the classification, but it is not the performance of predicting the test set, which is directly connected to Apply Model (2). I want to know how good my prediction is with regard to my classification approach. I wanted to connect the output of Apply Model (2) into Performance directly, but I got a couple of errors (it says you don't have a label, or you need to set a criterion, but I don't know how!).
If you want to know how well your classification performs when predicting a new test data set (Retrieve 08), how do you measure it?
Thanks in advance
Ahhh, in order to apply Performance, the incoming data set needs to have the label tagged. So you simply need to use a Set Role operator and set the role of your label variable to label.
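In other words, the evaluator needs the true class alongside the prediction. As a hypothetical sketch (the file name and the column names "class" and "prediction" are placeholders), once the scored test set carries both columns, the comparison is straightforward:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Scored test data: the original "class" column (what Set Role marks as the
# label) plus the "prediction" column that Apply Model adds.
scored = pd.read_csv("scored_test.csv")

# Without the true label column there is nothing to compare against,
# which is exactly the "you don't have a label" error.
print(accuracy_score(scored["class"], scored["prediction"]))
```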
~Martin
Dortmund, Germany