"Neural Net and Sigmoid Function"
chaosbringer
Member Posts: 21 Contributor II
Hi,
I am training a neural net. My goal is to predict a function whose values lie between 0 and 1.
The label attribute of my example set therefore also has values between 0 and 1.
Now, the neural net in RapidMiner uses a sigmoid function with values between -1 and 1, so the input data must be normalized.
Fortunately, RapidMiner does that for me, too :-D
But when I apply the resulting model, it predicts values less than 0 for some examples. How can I prevent the net from returning values less than 0?
Obviously I could set any value less than 0 to 0 (actually, I am not sure how to accomplish this in RapidMiner). But would it not be reasonable to include this modification in the learning algorithm itself?
Thank you very much
Answers
So one of the simplest solutions would be to use the Numerical to Polynominal operator to convert only your output/label attribute, prior to the Neural Net operator. After conversion, 0 and 1 will be treated as (non-quantitative) class labels rather than numbers, and the neural net will have to stick to these as predictions, so you no longer get negative values.
If, less likely, instead of the set {0, 1} you actually meant the interval [0, 1] as the value range of your function, that is a different matter. In that case you would need to replace the predicted negative values with 0 (and values above 1 with 1) somewhere in your process, outside the neural network operator in any case.
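Outside RapidMiner, that replacement step is just clipping to the unit interval; a minimal sketch in Python (assuming the predictions are available as a plain list of numbers):

```python
# Clip predicted values to the interval [0, 1]:
# anything below 0 becomes 0, anything above 1 becomes 1.
def clip_to_unit_interval(predictions):
    return [min(max(p, 0.0), 1.0) for p in predictions]

predictions = [-0.12, 0.37, 1.05, 0.99]
print(clip_to_unit_interval(predictions))  # [0.0, 0.37, 1.0, 0.99]
```

Inside a RapidMiner process, the equivalent would be a post-processing operator applied to the prediction attribute after Apply Model, rather than a change to the learner itself.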
As a remark on theory versus practice: in theory, the final values of a neural net's output node are rescaled to the value range of the numerical label, so one should not get out-of-bounds values like the ones you apparently got in practice.
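That rescaling is just a linear map from the activation range onto the label range; a small illustration under my assumptions (this is a sketch of the idea, not RapidMiner's actual implementation):

```python
def rescale(value, src_min, src_max, dst_min, dst_max):
    # Linearly map value from [src_min, src_max] onto [dst_min, dst_max].
    t = (value - src_min) / (src_max - src_min)
    return dst_min + t * (dst_max - dst_min)

# Map a tanh-style activation in [-1, 1] onto a label range of [0, 1]:
print(rescale(-1.0, -1.0, 1.0, 0.0, 1.0))  # 0.0
print(rescale(0.5, -1.0, 1.0, 0.0, 1.0))   # 0.75
```

Note that this only guarantees in-range predictions for activations that actually stay within [-1, 1]; if the training-time label range is narrower than the range seen at apply time, out-of-bounds predictions can still occur.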
Dan