"[SOLVED] Prediction trend accuracy?"
xiaobo_sxb
Member Posts: 17 Contributor II
I did a time series prediction and applied the "Forecasting Performance" operator. To fine-tune the parameters, I used "Optimize Parameters (Grid)" to compare the performance, together with a "Log" operator to record all the parameters and the resulting performance. Strangely, the prediction_trend_accuracy is different from what I got originally for the same set of parameters. I created a sample process for this problem: the prediction_trend_accuracy shown in the parameter grid result is different from the one recorded in the "Log" window. Can anybody tell me where I'm going wrong?
By the way, sometimes the prediction_trend_accuracy comes out as "unknown". Can anybody explain what "unknown" means here?
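For reference, prediction_trend_accuracy is, roughly, the fraction of forecasts whose predicted direction of change (up or down relative to the last known value) matches the actual direction of change. A minimal Python sketch of that idea (my reading of the metric, not the Series extension's actual code):

# Rough sketch of the idea behind prediction_trend_accuracy -- an assumption
# about the metric's intent, not the Series extension's implementation.
def prediction_trend_accuracy(last_known, labels, predictions):
    # last_known[i]:   last value the model saw for example i
    # labels[i]:       the true future value
    # predictions[i]:  the forecast for that future value
    hits, total = 0, 0
    for base, actual, pred in zip(last_known, labels, predictions):
        actual_trend = actual - base      # did the series really go up or down?
        predicted_trend = pred - base     # which direction did the forecast say?
        total += 1
        if actual_trend * predicted_trend > 0:   # same direction -> correct trend
            hits += 1
    return hits / total if total > 0 else float("nan")   # NaN is displayed as "unknown"

# Two of these three forecasts point in the right direction -> 0.667
print(prediction_trend_accuracy([10, 12, 11], [12, 11, 13], [11, 13, 14]))

In a sketch like this the result becomes NaN when there is nothing to compare, and RapidMiner displays NaN performance values as "unknown"; I would guess the "unknown" cases come from test windows where no valid trend comparison can be made.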
Thank you!
Steven
Here is the sample process:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<process version="5.1.014">
<context>
<input/>
<output/>
<macros/>
</context>
<operator activated="true" class="process" compatibility="5.1.014" expanded="true" name="Process">
<process expanded="true" height="431" width="614">
<operator activated="true" class="generate_data" compatibility="5.1.014" expanded="true" height="60" name="Generate Data" width="90" x="25" y="163">
<parameter key="number_examples" value="50"/>
</operator>
<operator activated="true" class="series:windowing" compatibility="5.1.002" expanded="true" height="76" name="Windowing" width="90" x="246" y="165">
<parameter key="horizon" value="1"/>
<parameter key="window_size" value="3"/>
<parameter key="create_label" value="true"/>
<parameter key="label_attribute" value="label"/>
</operator>
<operator activated="true" class="optimize_parameters_grid" compatibility="5.1.014" expanded="true" height="94" name="Optimize Parameters (Grid)" width="90" x="447" y="165">
<list key="parameters">
<parameter key="W-MultilayerPerceptron.L" value="[0.1;1;3;linear]"/>
<parameter key="W-MultilayerPerceptron.M" value="[0.1;1;3;linear]"/>
</list>
<parameter key="parallelize_optimization_process" value="true"/>
<process expanded="true" height="428" width="678">
<operator activated="true" class="series:sliding_window_validation" compatibility="5.1.002" expanded="true" height="112" name="Validation" width="90" x="179" y="165">
<parameter key="training_window_width" value="5"/>
<parameter key="training_window_step_size" value="1"/>
<parameter key="test_window_width" value="5"/>
<parameter key="average_performances_only" value="false"/>
<parameter key="parallelize_training" value="true"/>
<parameter key="parallelize_testing" value="true"/>
<process expanded="true" height="428" width="323">
<operator activated="true" class="weka:W-MultilayerPerceptron" compatibility="5.1.001" expanded="true" height="76" name="W-MultilayerPerceptron" width="90" x="116" y="30">
<parameter key="L" value="1.0"/>
<parameter key="M" value="1.0"/>
</operator>
<connect from_port="training" to_op="W-MultilayerPerceptron" to_port="training set"/>
<connect from_op="W-MultilayerPerceptron" from_port="model" to_port="model"/>
<portSpacing port="source_training" spacing="0"/>
<portSpacing port="sink_model" spacing="0"/>
<portSpacing port="sink_through 1" spacing="0"/>
</process>
<process expanded="true" height="428" width="323">
<operator activated="true" class="apply_model" compatibility="5.1.014" expanded="true" height="76" name="Apply Model" width="90" x="45" y="30">
<list key="application_parameters"/>
</operator>
<operator activated="true" class="series:forecasting_performance" compatibility="5.1.002" expanded="true" height="76" name="Performance" width="90" x="112" y="165">
<parameter key="horizon" value="1"/>
</operator>
<connect from_port="model" to_op="Apply Model" to_port="model"/>
<connect from_port="test set" to_op="Apply Model" to_port="unlabelled data"/>
<connect from_op="Apply Model" from_port="labelled data" to_op="Performance" to_port="labelled data"/>
<connect from_op="Performance" from_port="performance" to_port="averagable 1"/>
<portSpacing port="source_model" spacing="0"/>
<portSpacing port="source_test set" spacing="0"/>
<portSpacing port="source_through 1" spacing="0"/>
<portSpacing port="sink_averagable 1" spacing="0"/>
<portSpacing port="sink_averagable 2" spacing="0"/>
</process>
</operator>
<operator activated="true" class="log" compatibility="5.1.014" expanded="true" height="76" name="Log" width="90" x="425" y="179">
<list key="log">
<parameter key="Performance" value="operator.Performance.value.prediction_trend_accuracy"/>
<parameter key="L" value="operator.W-MultilayerPerceptron.parameter.L"/>
<parameter key="M" value="operator.W-MultilayerPerceptron.parameter.M"/>
</list>
</operator>
<connect from_port="input 1" to_op="Validation" to_port="training"/>
<connect from_op="Validation" from_port="averagable 1" to_op="Log" to_port="through 1"/>
<connect from_op="Log" from_port="through 1" to_port="performance"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="source_input 2" spacing="0"/>
<portSpacing port="sink_performance" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
</process>
</operator>
<connect from_op="Generate Data" from_port="output" to_op="Windowing" to_port="example set input"/>
<connect from_op="Windowing" from_port="example set output" to_op="Optimize Parameters (Grid)" to_port="input 1"/>
<connect from_op="Optimize Parameters (Grid)" from_port="parameter" to_port="result 1"/>
<portSpacing port="source_input 1" spacing="0"/>
<portSpacing port="sink_result 1" spacing="0"/>
<portSpacing port="sink_result 2" spacing="0"/>
</process>
</operator>
</process>
Answers
Best Regards,
Marius
Thank you, you explained what kind of performance is generated at each step. That made things clearer for me, as I'm a newbie in this area.
But my problem is this: I log the performance after the validation, so I should get the average performance. Since all of the nested processes sit inside the "Optimize Parameters (Grid)" operator, I should get the average performance for each iteration (i.e., for each parameter combination). The best parameter set is then chosen on the ParameterSet result screen, and I found that the performance shown there differs from the performance logged by the Log operator. For example, I got this ParameterSet result:
Parameter set:
Performance:
PerformanceVector [
-----prediction_trend_accuracy: 0.836 +/- 0.184 (mikro: 0.836)
]
W-MultilayerPerceptron.L = 0.1
W-MultilayerPerceptron.M = 0.1
But in the log there is no row with "prediction_trend_accuracy: 0.836 +/- 0.184 (mikro: 0.836)".
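As far as I can tell, that gap matches how the Log operator picks up values: the entry operator.Performance.value.prediction_trend_accuracy is read once per grid iteration and returns whatever the inner Performance operator last computed, i.e. the value from the final validation window, while the ParameterSet reports the performance averaged over all windows (the "+/- 0.184" line). A toy illustration with made-up per-window values (chosen only so their mean comes out at 0.836):

# Made-up per-window trend accuracies -- purely illustrative numbers.
window_scores = [0.90, 1.00, 0.60, 0.80, 0.88]

average = sum(window_scores) / len(window_scores)   # what the ParameterSet reports
last_window = window_scores[-1]                     # what a log entry pointing at the
                                                    # inner Performance operator picks up

print(f"average over all windows:   {average:.3f}")      # 0.836
print(f"value from the last window: {last_window:.3f}")  # 0.880 -- a different number

If the sliding-window Validation exposes its averaged result as a loggable value the way the standard X-Validation does (I am assuming operator.Validation.value.performance is available in this version), pointing the Log entry at that value should record the same averaged number that the ParameterSet shows.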
Cheers, Marius
That fixed the problem exactly. Thank you so much.
Best Regards
Steven