
Normalization / Auto Model / Deep Learning

francc Member Posts: 3 Learner I
edited March 2021 in Help
Hello, I'm a beginner using RapidMiner. I normalized my data in two different ways, but when I use Auto Model I get the same results.

I'm using deep learning, and I know that this method (in RapidMiner) applies standardization by default when automodeling. I'm trying to do a regression, and no matter how I introduce the dataset to train my model (without normalization, with z-transformation, or with range transformation), I always get the same results.
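This outcome is actually expected if the learner standardizes internally. A minimal NumPy sketch of the idea, assuming the internal preprocessing is an ordinary z-transformation (the helper names `standardize` and `range_transform` are illustrative, not RapidMiner operators): any affine rescaling applied beforehand, such as a min-max range transformation, is undone by the z-transformation, so the model sees identical inputs either way.

```python
import numpy as np

# Toy feature column on a large scale.
x = np.array([10.0, 20.0, 30.0, 40.0, 50.0])

def standardize(v):
    """z-transformation: subtract the mean, divide by the standard deviation."""
    return (v - v.mean()) / v.std()

def range_transform(v, lo=0.0, hi=1.0):
    """Min-max scaling into [lo, hi] -- an affine map a*v + b with a > 0."""
    return lo + (v - v.min()) * (hi - lo) / (v.max() - v.min())

# Standardizing the raw data...
z_raw = standardize(x)
# ...matches standardizing pre-normalized data, because
# standardize(a*x + b) = (a*x + b - (a*mean + b)) / (a*std) = (x - mean) / std.
z_norm = standardize(range_transform(x))

print(np.allclose(z_raw, z_norm))  # True
```

So if the algorithm standardizes every input column itself, training on raw, z-transformed, or range-transformed data should indeed produce the same model.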

So, I want to know why this happens; any guidance will be helpful. Thanks!
(Maybe it is some deep learning feature I'm overlooking.)

(By the way, I love the platform.)




Answers

  • MarcoBarradas Administrator, Employee-RapidMiner, RapidMiner Certified Analyst, Member Posts: 272 Unicorn
    Hi @francc, thanks for the kind words about the platform.
    Do you get the same result when you apply the model, or when you are training it?
    If the problem occurs when you apply the model, please check whether you are using the Group Model operator and whether you connected the models in the order in which they were applied to the data. If you are using it inside a Cross Validation, that model is passed to the validation side and will apply the same transformations to the data it receives.


  • francc Member Posts: 3 Learner I
    Hi Marco, I get the same results when I train the model. I'm not grouping the models; it is just a simple model.
    I tried three times and compared the results, and I got the same error and correlations whether I normalized the data or not.
  • francc Member Posts: 3 Learner I
    Marco, I looked you up on LinkedIn and saw that you speak Spanish; maybe I am not expressing myself well in English.

    The thing is, I have read several studies which say that normalizing the data before training the model (especially with neural networks) helps put all the variables on the same scale, so the model does not give more weight to variables with large values, for example. So I wanted to try training the model without normalization and then with normalization, but it turns out it gives me exactly the same model (same error, same correlation, and therefore the same predictions).

    Regards and thanks