How to filter data with two models applied in sequence
inceptorfull
Member Posts: 44 Contributor II
Hi all, I am trying to apply a neural network to a credit default problem. I applied an NN and it got me 85% accuracy.
Now I want to filter those results further using kNN, to get higher accuracy or cases closer to the predicted one, as extra confirmation of the prediction.
So how do I do that in RapidMiner?
NN --> kNN? And
what if I want to do it in the reverse order, so kNN assigns the nearest neighbours first and then the NN predicts from those neighbours? Is that possible?
Answers
Best,
Martin
Dortmund, Germany
OR
let the NN learn first, then filter the NN results by their neighbourhood, so the output of the NN becomes the input for kNN to get a new prediction or the closest neighbours, reassigning the bad applicants in the default problem back to good (if they really are good applicants). I sketch what I mean below.
It would be great if you could give me an overall idea of how to implement both to get better accuracy.
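To make the idea concrete outside RapidMiner, here is a rough sketch of the cascade I mean, in scikit-learn terms (the synthetic data and the 0.75 confidence threshold are just illustrative assumptions):

```python
# NN first, then let kNN re-decide the cases the NN is unsure about.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: the neural net predicts and reports its confidence.
nn = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                   random_state=0).fit(X_train, y_train)
nn_pred = nn.predict(X_test)
nn_conf = nn.predict_proba(X_test).max(axis=1)

# Step 2: "filter by the neighbourhood" -- low-confidence predictions
# are replaced by a kNN vote over the training examples.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
uncertain = nn_conf < 0.75  # illustrative threshold
final_pred = nn_pred.copy()
if uncertain.any():
    final_pred[uncertain] = knn.predict(X_test[uncertain])

print("NN alone:", (nn_pred == y_test).mean())
print("NN + kNN:", (final_pred == y_test).mean())
```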
Thanks a lot for your support.
Have a look at the attached process. I built this a while ago for a forum user. It runs a learner per cluster calculated with k-means, which is pretty close to what you want to do.
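For anyone reading without the attachment, the gist of that process in scikit-learn terms might look like this (the synthetic data and the decision tree as the per-cluster learner are assumptions, not what the actual process uses):

```python
# Cluster with k-means, then train one learner per cluster and
# route each new example to the model of its own cluster.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_train)

# One model per cluster, trained only on that cluster's examples.
models = {}
for c in range(kmeans.n_clusters):
    mask = kmeans.labels_ == c
    models[c] = DecisionTreeClassifier(random_state=0).fit(X_train[mask],
                                                           y_train[mask])

# At apply time, each test example goes to its cluster's model.
test_clusters = kmeans.predict(X_test)
pred = np.array([models[c].predict(x.reshape(1, -1))[0]
                 for c, x in zip(test_clusters, X_test)])
print("Per-cluster accuracy:", (pred == y_test).mean())
```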
Best,
Martin
Dortmund, Germany
Do you think applying a neural network and then kNN would give higher accuracy? If so, how do I arrange the two models so that both enhance the results?
And yes, Meta-Learning can help. I am not sure whether your concrete idea will help, but I would give it a try.
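For reference, this is what such a stacking setup looks like in scikit-learn terms (illustrative only, not the RapidMiner Stacking operator; the logistic-regression meta-learner is an assumption):

```python
# kNN and NN as base learners, a meta-learner stacked on top.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import StackingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("nn", MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                             random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # the "stacking model learner"
)
stack.fit(X_train, y_train)
print("Stacked accuracy:", stack.score(X_test, y_test))
```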
Dortmund, Germany
Stacking seems like it may work, but when I use both kNN and NN as base learners and the NN as the stacking model learner, it stops and gives me the following error: "binomial attribute is not supported".
~Martin
Dortmund, Germany
I tried it, and it gives me another error in Apply Model: "the input exampleset does not match the training exampleset, missing attribute: base prediction0=Good".
I tried making the stacking model learner a Decision Tree, and the accuracy actually increased, but I am not sure whether that is right or wrong, since I used NN and kNN as the base learners. I wish I could try it with the NN.
The trick is that you need to combine the models (the nominal-to-numerical preprocessing and the NN) using Group Models.
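The scikit-learn analogue of Group Models is a Pipeline: the nominal-to-numerical step and the NN travel together, so the same preprocessing is re-applied whenever the model is applied to new data (the column names below are made up):

```python
# Preprocessing and learner grouped into one deployable model.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.neural_network import MLPClassifier

df = pd.DataFrame({
    "housing":  ["own", "rent", "own", "free", "rent", "own"],
    "duration": [12, 24, 36, 6, 48, 18],
    "label":    ["good", "bad", "good", "good", "bad", "good"],
})

model = Pipeline([
    # Nominal to Numerical: one-hot encode the nominal column.
    ("prep", ColumnTransformer(
        [("nominal", OneHotEncoder(handle_unknown="ignore"), ["housing"])],
        remainder="passthrough")),
    # The grouped learner itself.
    ("nn", MLPClassifier(hidden_layer_sizes=(5,), max_iter=500,
                         random_state=0)),
])
model.fit(df[["housing", "duration"]], df["label"])
print(model.predict(df[["housing", "duration"]]))
```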
~Martin
Dortmund, Germany