When does a grid search end?
Hello everyone,
I have a quick question and I think you guys can help me out:
When I run a grid search for the optimal parameters C (11 different values) and gamma (10 different values) for my SVM, when does the search stop?
Does it stop the first time it hits 100% accuracy with an "optimal" pair and then disregard the remaining untested pairs (assuming some pair does reach 100% accuracy)?
I'm also asking this against the background of time-intensive calculations.
Thanx a lot in advance,
Sasch
Answers
No, the grid search will test ALL combinations. You might be surprised, but I have NEVER seen 100% accuracy on a real-world data set when validated correctly. If you get 100% accuracy, you should double-check your process setup rather than look for a way to abort the calculation...
Greetings,
Sebastian
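To illustrate the point: a plain grid search is exhaustive by construction and has no early-stopping logic. Here is a minimal Python sketch (not RapidMiner code; `evaluate`, `C_values`, and `gamma_values` are hypothetical stand-ins for your cross-validated SVM run and your 11x10 parameter grid):

```python
from itertools import product

# Hypothetical parameter grid: 11 C values x 10 gamma values = 110 combinations
C_values = [2 ** k for k in range(-5, 6)]        # 11 values
gamma_values = [2 ** k for k in range(-7, 3)]    # 10 values

def evaluate(C, gamma):
    """Stand-in for one cross-validated SVM run; returns a dummy accuracy.
    Deliberately returns a 'perfect' score for the very first pair."""
    return 1.0 if (C, gamma) == (C_values[0], gamma_values[0]) else 0.5

best_score, best_params, runs = -1.0, None, 0
for C, gamma in product(C_values, gamma_values):
    score = evaluate(C, gamma)
    runs += 1
    if score > best_score:        # remember the best pair, but keep going
        best_score, best_params = score, (C, gamma)

# Even though a "perfect" score appears on the very first pair,
# all 110 combinations are still evaluated -- there is no early exit.
print(runs)        # 110
print(best_score)  # 1.0
```

So the total runtime is always (number of C values) x (number of gamma values) x (cost of one cross-validation), regardless of what scores show up along the way.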
Ok, from the start:
I have 54 example sets of EEG data (all filtered and baseline-corrected), each containing ~180 attributes + 1 label.
I have 3 different labels (= classes), i.e. 18 example sets per class. The labels are numbers but are "transformed" into text with the import wizard.
So when I now import the whole dataset and run a grid search for the best SVM parameters with cross-validation and so on, I get 100% accuracy (?).
Can that really be? Or is it in the nature of the SVM that it can find an "optimal" 100% solution when the number of examples is much smaller than the number of attributes (=> "curse of dimensionality")?
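On the curse-of-dimensionality point: with ~180 attributes and only 54 examples, the training data is almost always perfectly separable, so 100% *training* accuracy is expected and tells you nothing; 100% *cross-validated* accuracy on held-out data more often signals a setup problem such as label leakage. A quick numpy illustration of the first effect (not an SVM, just a linear fit to completely random labels; the dimensions match the post):

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_attributes = 54, 180   # fewer rows than columns, as in the post
X = rng.standard_normal((n_examples, n_attributes))
y = rng.integers(0, 3, size=n_examples).astype(float)  # 3 random "classes"

# With more attributes than examples, the linear system X @ w = y is
# underdetermined, so an exact fit exists even for random labels.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
train_error = np.max(np.abs(X @ w - y))
print(train_error)   # essentially zero: a perfect fit on the training data
```

A properly nested cross-validation (parameter optimization inside each fold, nothing label-derived among the attributes) should bring such a model back down to roughly chance accuracy on random data, which is why Sebastian suggests checking the process setup.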
Perhaps my process setup is also wrong. I'd appreciate it if you would be so kind as to check it. The code is below:
Sorry, I have to cut it into 2 pieces. Hope you don't mind.
So here's part 1:
Sasch