Using an information gain split to center numerical attributes at 0 for linear models?
Consider scenarios where attributes have a clear correlation with the label (binominal). Couldn't it be beneficial to center them at 0 based on where an information gain split would fall, instead of, for example, centering on the mean in units of standard deviation or simply scaling from 0 to 1?
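To make the idea concrete, here is a minimal sketch in plain Python (not a RapidMiner process; the function and column names are placeholders I made up): for each numeric attribute it finds the threshold with maximal information gain against a binary 0/1 label and shifts the attribute so that threshold lands at 0. It is not optimized, just an illustration of the transformation being proposed.

```python
import numpy as np
import pandas as pd


def entropy(y):
    """Shannon entropy of a binary 0/1 label vector."""
    p = np.bincount(y, minlength=2) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()


def best_split_threshold(x, y):
    """Threshold on x that maximizes information gain for the label y."""
    order = np.argsort(x)
    x_sorted, y_sorted = x[order], y[order]
    base = entropy(y_sorted)
    best_gain, best_thr = -np.inf, x_sorted[0]
    # Candidate thresholds: midpoints between consecutive distinct values.
    for i in range(1, len(x_sorted)):
        if x_sorted[i] == x_sorted[i - 1]:
            continue
        left, right = y_sorted[:i], y_sorted[i:]
        gain = base - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y_sorted)
        if gain > best_gain:
            best_gain = gain
            best_thr = (x_sorted[i] + x_sorted[i - 1]) / 2
    return best_thr


def center_at_split(df, label_col="label"):
    """Shift each numeric attribute so its best information-gain split sits at 0."""
    # Assumes the label column is already encoded as 0/1 integers.
    y = df[label_col].to_numpy()
    out = df.copy()
    for col in df.columns:
        if col == label_col:
            continue
        thr = best_split_threshold(df[col].to_numpy(dtype=float), y)
        out[col] = df[col] - thr
    return out
```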
It would seem the classes would become more separable, which might be useful when accuracy is needed, though it probably wouldn't change much for AUC. Furthermore, I have no idea how this scales beyond a simple linear model.
Did anybody try this? I don't think there's a way to do it in RapidMiner right now, is there?
Just a thought, maybe someone with more knowledge can easily twist it around and make something of it.
Answers
Perhaps attribute weighting would also be interesting (search for the "Weight by..." operators).
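For a rough idea of what such weighting produces outside RapidMiner, here is a small sketch assuming the same DataFrame layout as above; it uses scikit-learn's mutual information estimate as a stand-in for an information-gain-style attribute weight, which is not the exact computation the "Weight by..." operators perform.

```python
import pandas as pd
from sklearn.feature_selection import mutual_info_classif


def weight_attributes(df, label_col="label"):
    """Return a Series of per-attribute weights against a binary label, largest first."""
    X = df.drop(columns=[label_col])
    y = df[label_col]
    weights = mutual_info_classif(X, y, random_state=0)
    return pd.Series(weights, index=X.columns).sort_values(ascending=False)
```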