"Support Vector Clustering"
vijaypshah
Hi,
I am trying to cluster 20,000 samples using support vector machines. It takes around 48 hours to get the clustering result. How can I optimize this process to get good results within an acceptable time limit (say 15-20 minutes)?
Regards,
Vijay
Answers
You have at least two options:
- Reduce the maximum number of iterations from 100,000 to a smaller value, let's say 1,000. This of course might affect the quality of the output.
- Increase the size of the kernel_cache (for 20,000 examples you would need about 3 GB of memory for full kernel matrix caching). Try larger values and increase the amount of memory available to RapidMiner if necessary / possible. This should lead to a great speed-up without losing quality.
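A quick back-of-the-envelope check of that 3 GB figure (a minimal sketch; the assumption of one 8-byte double per kernel matrix entry is mine, not something stated in the thread):

# Rough size of a full kernel (Gram) matrix for n examples,
# assuming one 8-byte double per entry (assumption).
n = 20000
bytes_per_entry = 8
kernel_matrix_bytes = n * n * bytes_per_entry
print(f"{kernel_matrix_bytes / 1024**3:.1f} GiB")  # prints ~3.0 GiB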
Cheers,
Ingo
Is it possible to split the data into smaller chunks, cluster each chunk, and then combine the final clustering results?
Regards,
Vijay
Besides increasing the total memory available to RapidMiner, you probably also have to increase the memory defined by the kernel_cache parameter. However, since memory prices are rather low at the moment, increasing the total amount of memory is probably the simplest option if you have a 64-bit system anyway.

In principle, yes. You could, for example, use the cross-validation operator (with a dummy learner) for sampling, placing an ExampleSetWriter with the macro option %{a} in the filename to build k disjoint parts of your data. Then apply the clustering to each part individually and merge the results with the ExampleSetMerge operator. It might, however, be necessary to remap the cluster labels appropriately beforehand.
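To make the split-cluster-merge idea concrete, here is a rough Python sketch of the same workflow outside RapidMiner. It is only a conceptual stand-in under assumptions of mine: scikit-learn's KMeans plays the role of the clusterer, a simple array split replaces the cross-validation/ExampleSetWriter trick, and the label remapping just matches each chunk's cluster centers to those of the first chunk.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

rng = np.random.default_rng(0)
X = rng.normal(size=(20000, 10))      # toy data; replace with your example set
n_chunks, n_clusters = 5, 3

chunks = np.array_split(X, n_chunks)  # k disjoint parts of the data
reference_centers = None
merged_labels = []

for chunk in chunks:
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(chunk)
    if reference_centers is None:
        # The first chunk defines the reference labelling.
        reference_centers = km.cluster_centers_
        merged_labels.append(km.labels_)
    else:
        # Remap this chunk's labels by assigning each of its cluster centers
        # to the nearest reference center (simple, not globally optimal).
        mapping = pairwise_distances_argmin(km.cluster_centers_, reference_centers)
        merged_labels.append(mapping[km.labels_])

labels = np.concatenate(merged_labels)  # one cluster label per original example
print(labels.shape)                     # (20000,)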
Cheers,
Ingo
Thanks for the valuable input.
Regards,
Vijay
I will post the links to those papers if I come across them again.
Yes, we would appreciate that. If our schedule allows, we will certainly have a look at these approaches.
Regards,
Tobias
Working with RM 4.2...
I have a problem with support vector clustering, actually with these three operators:
- Support Vector Clustering (here it is!): the output is not recognised as either a flat cluster model or a hierarchical one
- KernelKMeans: same problem; moreover, the "neural" kernel is not among the choices for "kernel type"
- FlattenClusterModel: when "performance?" is true, checking the experiment's syntax does not recognise the "performance vector" produced
I wanted to use one of these operators in an experiment containing the following code:

<operator name="analyse" class="OperatorChain" expanded="yes">
    <operator name="EvolutionaryParameterOptimization" class="EvolutionaryParameterOptimization" expanded="yes">
        <list key="parameters">
            <parameter key="KernelKMeans.kernel_degree" value="[0.0;2.147483647E9]"/>
            <parameter key="KernelKMeans.k" value="[2.0;2.147483647E9]"/>
        </list>
        <operator name="KernelKMeans" class="KernelKMeans">
            <parameter key="add_cluster_attribute" value="false"/>
            <parameter key="kernel_type" value="KernelPolynomial"/>
        </operator>
        <operator name="ItemDistributionEvaluator" class="ItemDistributionEvaluator">
            <parameter key="keep_flat_cluster_model" value="false"/>
            <parameter key="measure" value="SumOfSquares"/>
        </operator>
    </operator>
</operator>
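For what it is worth, here is a rough Python sketch of the kind of parameter search this process is meant to perform. It is only a conceptual stand-in under several assumptions of mine: a plain grid search over k instead of EvolutionaryParameterOptimization, scikit-learn's KMeans instead of KernelKMeans, and the silhouette score instead of the item-distribution SumOfSquares measure used in the XML.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))        # toy data; replace with your example set

best_k, best_score = None, -1.0
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)   # higher is better
    if score > best_score:
        best_k, best_score = k, score

print(f"best k = {best_k}, silhouette = {best_score:.3f}")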
Can you reproduce these behaviours?
Cheers,
Jean-Charles.