Hello,
using the "Optimize Weights (Evolutionary)" I have found optimal weights and exported them to a file. Its structure is:
In "Read Weights" documentation is referenced "AttributeWeightsApplier", but I cannot find such an operator in RM 5.0. I have also found "Scale by weight" but I am not sure if the behaviour is the same as is in "Optimize Weights (Evolutionary)" applied.
Any help would be really appreciated,
Radone
using the "Optimize Weights (Evolutionary)" I have found optimal weights and exported them to a file. Its structure is:
I can load these weights using:
<?xml version="1.0" encoding="windows-1250"?>
<attributeweights version="5.0beta">
<weight name="Attrib_1" value="0.4805528186159622"/>
<weight name="Attrib_2" value="0.257652703956798"/>
...
</attributeweights>
How can I apply these weights to a learner to get the same results as got from "Optimize Weights (Evolutionary)" learning process?
<operator activated="true" class="read_weights" expanded="true" height="60" name="Read Weights" width="90" x="112" y="390">
<parameter key="attribute_weights_file" value="weight.wgt"/>
</operator>
In "Read Weights" documentation is referenced "AttributeWeightsApplier", but I cannot find such an operator in RM 5.0. I have also found "Scale by weight" but I am not sure if the behaviour is the same as is in "Optimize Weights (Evolutionary)" applied.
Any help would be really appreciated,
Radone
Answers
Unfortunately we didn't find the time to adapt the documentation. *sigh*
But yes, "Scale by weight" is exactly what the optimization does. However, this only makes a difference for learners which are sensitive to the scale of the numerical variables. For example, LinearRegression would only adapt its coefficients and deliver the same result, while the NearestNeighbour learner would act differently because of the changed distances.
Greetings,
Sebastian
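To make this concrete, here is a minimal sketch of the idea (plain NumPy/scikit-learn with made-up data and weights, not RapidMiner's implementation): multiplying each numeric attribute by its weight leaves a linear regression's predictions unchanged, while a nearest-neighbour learner reacts to the changed distances.

# Sketch only: what scaling attributes by weights amounts to, and why it only
# affects scale-sensitive learners. Data and weights here are invented.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                       # two numeric attributes
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

weights = np.array([1.0, 0.5])                      # e.g. weights read from the .wgt file
X_scaled = X * weights                              # multiply each column by its weight
X_new = rng.normal(size=(5, 2))

# Linear regression simply absorbs the scaling into its coefficients:
lr_plain  = LinearRegression().fit(X, y).predict(X_new)
lr_scaled = LinearRegression().fit(X_scaled, y).predict(X_new * weights)
print(np.allclose(lr_plain, lr_scaled))             # True: identical predictions

# k-NN relies on Euclidean distances, which change when columns are rescaled:
knn_plain  = KNeighborsRegressor(5).fit(X, y).predict(X_new)
knn_scaled = KNeighborsRegressor(5).fit(X_scaled, y).predict(X_new * weights)
print(np.allclose(knn_plain, knn_scaled))           # generally False: different neighbours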
The result from "Optimize Weights (Evolutionary)" gave me a 47.10% success rate. When I repeat the experiment with the weighted input and the exact same XValidation and training model (see below), I get only 39.03% +/- 0.27% (which is even worse than without weighting: 43.16% +/- 1.96%). Therefore I suppose there is something wrong with it.
"Optimize Weights (Evolutionary)" input example set (z-transform normalized) are:
1: [-3.258...2.452]; mean =-0.000
2: [-2.217...2.425]; mean =-0.000
...
"Optimize Weights (Evolutionary)" output example set are:
1: [-9.775...7.358]; mean =0.000
2: [-3.385...3.704]; mean =0.000
...
If I compute correctly (dividing each attribute's output range by its input range), the weights should be:
1: 3.000
2: 1.5268
but the "Optimize Weights (Evolutionary)".weights returned are:
1: 1.0
2: 0.5075538111704915
...
Do I understand something incorrectly?
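(A quick numeric check of the figures above, as a sketch only: it takes the expected weights as the ratios of the quoted output ranges to the quoted input ranges, and assumes the returned weights are simply rescaled so that the largest one equals 1, which is not confirmed in the thread.)

# Check of the figures quoted in the post above.
in_range  = {"Attrib_1": 2.452 - (-3.258), "Attrib_2": 2.425 - (-2.217)}
out_range = {"Attrib_1": 7.358 - (-9.775), "Attrib_2": 3.704 - (-3.385)}

ratio = {a: out_range[a] / in_range[a] for a in in_range}
print(ratio)      # {'Attrib_1': ~3.000, 'Attrib_2': ~1.527}

rescaled = {a: r / max(ratio.values()) for a, r in ratio.items()}
print(rescaled)   # {'Attrib_1': 1.0, 'Attrib_2': ~0.509} -- close to the reported
                  # 0.5076; the small gap may just come from the ranges being
                  # quoted to three decimals.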
Thank you for your time,
Radone
Also, I think that because of the generations and population members of the evolutionary step, you end up generating more folds of cross-validation by the time the weights are optimized than if you just ran it through with a single set of weights. When you run a different process that just applies the weights, you're essentially at a different seed in the random number generator when creating the folds, so you are testing on different cross-sections of the data than the ones the weights were optimized on. It's expected that the reported performance would differ on an arbitrary data set compared to the one the weights were optimized on.

Or maybe I just talked myself into believing something that is totally inapplicable. :-) Maybe somebody smarter can back me up, or correct me.
Keith
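A small generic sketch of Keith's point (scikit-learn with synthetic data, not the process from this thread): the same model on the same data can report a noticeably different cross-validated accuracy when only the random seed used to build the folds changes.

# Sketch: cross-validation estimates shift with the fold seed.
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = KNeighborsClassifier(5)

for seed in (1, 2, 3):
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores = cross_val_score(model, X, y, cv=cv)
    print(seed, round(scores.mean(), 4))   # mean accuracy varies from seed to seed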
Of course the XValidation depends on the random generation of the folds. Since the random number sequence differs between runs, the results might differ slightly. But this is quite a big difference, at least for large data sets. What is the standard deviation of the results? It usually gives a good impression of how reliable the estimate is.
In general, the difference should vanish as the data set grows.
If the data is scaled differently after applying the weights, many learning algorithms might behave differently. You are using a linear SVM, and although it returns a linear hyperplane which could be found in exactly the same relative (!) way in differently scaled training data, I suspect that parameters like C have a different influence depending on the scale.
So I will take a look into this matter as soon as I can, but currently I'm learning to swim...
Greetings,
Sebastian
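To make the point about C tangible, a small generic sketch (scikit-learn's LinearSVC with made-up attribute weights, not the SVM used inside RapidMiner): with the same fixed C, training on weighted and unweighted attributes can yield a different cross-validated accuracy.

# Sketch: the effective strength of C depends on the scale of the attributes.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
weights = np.array([1.0, 0.5, 0.25, 0.1])     # made-up attribute weights

for name, data in (("unscaled", X), ("scaled by weights", X * weights)):
    acc = cross_val_score(LinearSVC(C=1.0, max_iter=10000), data, y, cv=10).mean()
    print(name, round(acc, 4))                # typically differs even though C is the same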