Answers
Given the description, I say it is a two-sided test with H0: mu1 = mu2 and H1: mu1 != mu2.
greetings
Steffen
As far as I can see, the method TTestSignificanceTestOperator#getProbability(PerformanceCriterion pc1, PerformanceCriterion pc2) calculates the p-value of the test. The test itself is "performed" in TTestSignificanceTestOperator#TTestSignificanceTestResult#toString(), i.e. there: the comparison pvalue < alpha is no clear indication of a left-sided test...
This is the formula used:
http://en.wikipedia.org/wiki/Student's_t-test#Unequal_sample_sizes.2C_equal_variance => a two-sided test
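For reference, here is a minimal sketch of that formula (pooled variance under the equal-variance assumption, two-sided p-value). The class and method names are my own illustration, not the actual RapidMiner implementation, and I am assuming Apache Commons Math for the t distribution:

```java
import org.apache.commons.math3.distribution.TDistribution;

public class PooledTTest {
    // Two-sample t-test with pooled variance (equal-variance assumption),
    // following the Wikipedia formula linked above.
    public static double twoSidedPValue(double mean1, double var1, int n1,
                                        double mean2, double var2, int n2) {
        // pooled variance: s_p^2 = ((n1-1)*s1^2 + (n2-1)*s2^2) / (n1+n2-2)
        double pooledVar = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2);
        // t statistic: (x1bar - x2bar) / (s_p * sqrt(1/n1 + 1/n2))
        double t = (mean1 - mean2) / Math.sqrt(pooledVar * (1.0 / n1 + 1.0 / n2));
        int df = n1 + n2 - 2;
        // two-sided p-value: probability mass in both tails
        TDistribution dist = new TDistribution(df);
        return 2.0 * dist.cumulativeProbability(-Math.abs(t));
    }
}
```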
BUT looking at this formula raises two further questions:
First: I have read somewhere that the equal-variances assumption is not a problem as long as the two sample sizes are equal. If that does not hold, however, nothing can be guaranteed about the true alpha error. What do you think about this?
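For what it's worth, the usual remedy in that situation is Welch's t-test, which drops the pooled variance and approximates the degrees of freedom via Welch-Satterthwaite (it is described in the same Wikipedia article). A sketch, again with illustrative names and assuming Apache Commons Math:

```java
import org.apache.commons.math3.distribution.TDistribution;

public class WelchTTest {
    // Welch's t-test: no equal-variance assumption; the degrees of freedom
    // come from the Welch-Satterthwaite approximation.
    public static double twoSidedPValue(double mean1, double var1, int n1,
                                        double mean2, double var2, int n2) {
        double se1 = var1 / n1;
        double se2 = var2 / n2;
        double t = (mean1 - mean2) / Math.sqrt(se1 + se2);
        // Welch-Satterthwaite degrees of freedom (generally non-integer)
        double df = Math.pow(se1 + se2, 2)
                / (se1 * se1 / (n1 - 1) + se2 * se2 / (n2 - 1));
        TDistribution dist = new TDistribution(df);
        return 2.0 * dist.cumulativeProbability(-Math.abs(t));
    }
}
```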
Second: I thought that in the case of a two-sided test the alpha parameter must be divided by 2. Or is this already accounted for by the test statistic?
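As far as I know, if the reported p-value is already two-sided (p = 2 * P(T >= |t|)), then comparing p < alpha directly is correct: the factor of 2 is folded into the p-value. Dividing alpha by 2 is only needed when comparing the raw t statistic against a one-tail critical value. A quick check of the equivalence, with illustrative values:

```java
import org.apache.commons.math3.distribution.TDistribution;

public class AlphaCheck {
    public static void main(String[] args) {
        double t = 2.3;      // example t statistic (illustrative)
        int df = 18;         // example degrees of freedom (illustrative)
        double alpha = 0.05;

        TDistribution dist = new TDistribution(df);
        // two-sided p-value already covers both tails
        double p = 2.0 * dist.cumulativeProbability(-Math.abs(t));
        // the equivalent critical-value test puts alpha/2 in each tail
        double tCrit = dist.inverseCumulativeProbability(1.0 - alpha / 2.0);

        // both criteria agree: reject H0 iff p < alpha iff |t| > tCrit
        System.out.println("p < alpha:    " + (p < alpha));
        System.out.println("|t| > tCrit:  " + (Math.abs(t) > tCrit));
    }
}
```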
greetings
Steffen
PS: I prefer argumentation like this, with class names and line numbers.
I guess everything is clear now.
greetings
Steffen