Hello,

I have 2 columns and multiple rows of data in Excel. Each column represents an algorithm, and the values in the rows are the results of these algorithms with different parameters. I want to run a statistical significance test on these two algorithms in Excel. Can anyone suggest a function?

As a result, it would be nice to be able to state something like "Algorithm A performs 8% better than Algorithm B with .9 probability (or a 95% confidence interval)".

The Wikipedia article explains exactly what I need: http://en.wikipedia.org/wiki/Statistical_significance

It seems like a very easy task, but I failed to find a suitable built-in statistical function.

Any advice on a built-in Excel function or function snippets is appreciated.

Thanks.

Edit:

After tharkun's comments, I realized I should clarify some points: the results are simply real numbers between 1 and 100 (they are percentage values). Each row represents a different parameter, so the values in a row are the two algorithms' results for that parameter. The results do not depend on each other. When I take the average of all values for Algorithm A and Algorithm B, I see that the mean of Algorithm A's results is 10% higher than Algorithm B's. But I don't know whether this difference is statistically significant. In other words, maybe for one parameter Algorithm A scored 100% higher than Algorithm B while Algorithm B scored higher on all the rest, and the 10% difference in averages comes from that single result. And I want to do this calculation using just Excel.

+1  A: 

Thanks for the clarification. In that case you want to do an independent-samples t-test, meaning you want to compare the means of two independent data sets.

Excel has a function, TTEST(array1, array2, tails, type), which is what you need.

For your example you should probably use two tails (tails = 2) and type 2 (two-sample, equal variance), e.g. =TTEST(A1:A30, B1:B30, 2, 2).

The formula outputs a probability known as the alpha error probability (the p-value). This is the probability of the error you would make by concluding the two data sets are different when they actually are not. The lower the alpha error probability, the higher the chance your sets really are different.

You should only accept that the two data sets differ if the value is lower than 0.01 (1%), or for critical outcomes even 0.001 or lower. You should also know that the t-test needs at least around 30 values per data set to be reliable enough, and that the type 2 test assumes the two data sets have equal variances. If equal variances cannot be assumed, you should use the type 3 test instead.
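For checking the spreadsheet result outside Excel, here is a minimal Python sketch of the statistic behind the type 2 (two-sample, equal-variance) test described above; the sample data is made up for illustration. Excel's TTEST additionally converts this statistic into the two-tailed p-value using the t distribution with the degrees of freedom shown.

```python
import math
import statistics

def pooled_t_statistic(a, b):
    """Two-sample t statistic with pooled variance (Excel's TTEST type 2)."""
    na, nb = len(a), len(b)
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    # Pooled sample variance: assumes both groups share one true variance.
    pooled_var = ((na - 1) * statistics.variance(a) +
                  (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    df = na + nb - 2  # degrees of freedom used to look up the p-value
    return (mean_a - mean_b) / se, df

# Made-up example: two "algorithms" scored on five parameters each.
algo_a = [1, 2, 3, 4, 5]
algo_b = [3, 4, 5, 6, 7]
t, df = pooled_t_statistic(algo_a, algo_b)
print(t, df)  # t = -2.0 with 8 degrees of freedom
```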

http://depts.alverno.edu/nsmt/stats.htm

tharkun
Thanks for the answer. I have tried TTEST and got a pretty small p-value (8.13177E-06). As far as I can tell, this value tells me that the values of one column are significantly different from the values of the other one. But it does not tell me whether one is better. Am I right?
someone
no, it doesn't. what would "better" be, in your case?
tharkun
If the values under one column are higher than the values under the other one. So I should be able to say "Algorithm A has 10% higher values than Algorithm B with .9 probability".
someone
but that's easy. just calculate the mean of each column. the t-test compares the two means, so for the column with the higher mean you can say that its values are significantly higher than the other's.
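In Excel the two means are just =AVERAGE(A1:A5) and =AVERAGE(B1:B5) over the two columns; as a sketch of the same reasoning outside Excel (the percentage scores below are made up for illustration):

```python
import statistics

# Made-up percentage scores for the two algorithms, one row per parameter.
algo_a = [80, 85, 90, 88, 92]
algo_b = [70, 75, 78, 74, 80]

mean_a = statistics.mean(algo_a)  # in Excel: =AVERAGE(A1:A5)
mean_b = statistics.mean(algo_b)  # in Excel: =AVERAGE(B1:B5)

# The t-test only says whether the means differ significantly;
# the direction of the difference comes from the means themselves.
better, worse = ("A", "B") if mean_a > mean_b else ("B", "A")
pct = abs(mean_a - mean_b) / min(mean_a, mean_b) * 100
print(f"Algorithm {better} scores {pct:.1f}% higher than Algorithm {worse}")
```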
tharkun