
Reduce number of dimensions

ui3o Member Posts: 9 Contributor II
edited November 2018 in Help
Dear all,

Here's what keeps me busy as a newbie to data mining: my example set contains ~70k examples of process parameters from 5 machines and a quality label (0/1). Each machine parameter has 3-5 variations (3-5 attributes per parameter and machine). Most, but not all, variations referring to the same machine parameter correlate strongly, and I was thinking of eliminating "all but one" in such a case, in order to do logistic regression.

When playing around with weights from the correlation matrix, Weight by Correlation, Select by Weights, etc., I was only able to eliminate all attributes that correlate with others, but I want to keep one.

My goal is obviously to identify the attributes that influence the label most.

Maybe I am methodologically totally wrong here, so I appreciate any help.

Thx


ui3o

Answers

  • radone RapidMiner Certified Expert, Member Posts: 74 Guru
    Hello,
    For your purpose, the operator "Remove Correlated Attributes" could do the job. A possible pitfall is memory consumption; a few months ago I used the operator "Select by Random" to reduce the number of attributes.
  • ui3o Member Posts: 9 Contributor II
    Hi,

    I already tested "Remove Correlated Attributes", but it removes all correlated attributes, not "all but one". Nice tip with "Select by Random". Do you iterate over the set of selected attributes? Can you post a sample XML?

    Thx for your input.

    Greetings,


    ui3o
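The "all but one" behaviour the thread is after can be sketched outside RapidMiner. Below is a minimal illustration in Python with pandas: within each group of strongly correlated columns, the first column encountered is kept as the representative and the rest are dropped. The function name and the threshold value are hypothetical choices for this sketch, not a RapidMiner API.

```python
import numpy as np
import pandas as pd

def drop_correlated_keep_one(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop columns so that, within each group of strongly correlated
    columns, exactly one representative survives."""
    corr = df.corr().abs()
    # Look only at the strict upper triangle so each pair is checked once.
    # A column is dropped if it correlates strongly with an earlier column;
    # the earlier column acts as the surviving representative.
    mask = np.triu(np.ones(corr.shape, dtype=bool), k=1)
    upper = corr.where(mask)
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return df.drop(columns=to_drop)

# Example: 'b' is a perfect multiple of 'a', so only one of the two is kept.
df = pd.DataFrame({"a": [1, 2, 3, 4], "b": [2, 4, 6, 8], "c": [1, 0, 1, 0]})
reduced = drop_correlated_keep_one(df)
print(list(reduced.columns))  # → ['a', 'c']
```

The same pairwise logic is what a "remove correlated attributes, keep one" step needs to express inside RapidMiner, whichever operators are used to implement it.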