EFFECT SIZE EQUATIONS 

Paul D. Ellis,
Hong Kong Polytechnic University 

The standardized mean difference (d)  
To calculate the standardized mean difference between two groups, subtract the mean of one group from the mean of the other (M1 − M2) and divide the result by the standard deviation (SD) of the population from which the groups were sampled. If the population standard deviation is unknown, we can estimate it in a number of different ways. Three different methods for estimating the population standard deviation give rise to three of the better-known effect size indexes, as follows:
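\[ \text{Cohen's } d = \frac{M_1 - M_2}{SD_{\text{pooled}}} \qquad \text{Glass's } \Delta = \frac{M_1 - M_2}{SD_{\text{control}}} \qquad \text{Hedges' } g = \frac{M_1 - M_2}{SD^{*}_{\text{pooled}}} \]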
Choosing among these three equations requires an examination of the standard deviations of each group in our study. If they are roughly the same, it may be reasonable to assume they are estimating a common population standard deviation. In this case we can pool the two standard deviations to calculate a Cohen's d index of effect size. To calculate the pooled standard deviation (SDpooled) for two groups of sizes n1 and n2 with means M1 and M2, we could use the following equation from Cohen (1988, p.67):
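\[ SD_{\text{pooled}} = \sqrt{\frac{\sum (X_1 - M_1)^2 + \sum (X_2 - M_2)^2}{n_1 + n_2 - 2}} \]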
However, in practice the simpler equation from Cohen (1988, p.44) is often used instead:2
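\[ SD_{\text{pooled}} = \sqrt{\frac{SD_1^2 + SD_2^2}{2}} \]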
If the standard deviations of the two groups differ, then the homogeneity of variance assumption is violated and pooling the standard deviations is not appropriate. One solution is to insert the standard deviation of the control group into the equation and calculate Glass's delta (Glass et al. 1981, p.29). The logic is that the standard deviation of the control group is untainted by the effects of the treatment and will therefore more closely reflect the population standard deviation. The strength of this assumption is directly proportional to the size of the control group. The larger the control group, the more it is likely to resemble the population from which it was drawn. In the effect size calculator, group 1 is assumed to be the experimental group and group 2 is assumed to be the control group.  
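\[ \Delta = \frac{M_1 - M_2}{SD_{\text{control}}} \]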
Another approach, which is recommended if the groups are dissimilar in size, is to weight each group's standard deviation by its sample size (n). The pooling of weighted standard deviations is used in the calculation of Hedges' g.1 To calculate the weighted and pooled standard deviation (SD*pooled) we would use the following equation from Hedges (1981, p.110):  
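\[ SD^{*}_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)SD_1^2 + (n_2 - 1)SD_2^2}{n_1 + n_2 - 2}} \]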
Hedges' g was also developed to remove a small positive bias affecting the calculation of d (Hedges 1981). An unbiased version of d can be calculated using the following equation adapted from Hedges and Olkin (1985, p.81):  
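\[ d_{\text{unbiased}} = d \left( 1 - \frac{3}{4(n_1 + n_2) - 9} \right) \]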
In the effect size calculator, Hedges' g is this unbiased estimator.
To calculate a standardized mean difference from a t-statistic and the sample sizes, the following equation from Rosenthal and Rosnow (2008, p.385) is used:
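\[ d = \frac{t\,(n_1 + n_2)}{\sqrt{df}\,\sqrt{n_1 n_2}} \]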
If sample sizes are equal (n1 = n2), the previous equation reduces to:
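\[ d = \frac{2t}{\sqrt{df}} \]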
where df = N − 2 (Rosenthal 1984, pp.23, 357).
To calculate a standardized mean difference from the correlation coefficient r, the following equation from Friedman (1968, p.246) is used:  
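\[ d = \frac{2r}{\sqrt{1 - r^2}} \]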
The three indexes (Cohen's d, Glass's Δ, and Hedges' g) convey information about the size of an effect in terms of standard deviation units. A score of .50 means that the difference between the two groups is equivalent to one-half of a standard deviation, while a score of 1.0 means the difference is equal to one standard deviation. The bigger the score, the bigger the difference between the groups and the bigger the effect. One advantage of reporting effect sizes in standardized terms is that the results are scale-free, meaning they can be compared across studies. If two studies independently report effects of size d = .50, then their effects are identical in size.
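As a worked illustration, the three indexes can be computed in a few lines of code. This is a minimal Python sketch under the definitions above; the function names are mine and are not part of the effect size calculator described on this page:

```python
import math

def cohens_d(m1, m2, sd1, sd2):
    # Cohen's d with the simple pooled SD (Cohen 1988, p.44);
    # appropriate when the two groups are of equal size
    sd_pooled = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (m1 - m2) / sd_pooled

def glass_delta(m1, m2, sd_control):
    # Glass's delta: standardize by the control group's SD
    return (m1 - m2) / sd_control

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    # Weighted, pooled SD (Hedges 1981, p.110) followed by the
    # small-sample bias correction (Hedges and Olkin 1985, p.81)
    sd_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    return d * (1 - 3 / (4 * (n1 + n2) - 9))
```

With equal group sizes and equal SDs the three functions agree up to the bias correction: means of 110 and 100 with SDs of 10 give d = Δ = 1.0, while g shrinks slightly (to about 0.98 with n1 = n2 = 20).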
Measuring the strength of association (r)  
The correlation coefficient (r) quantifies the strength and direction of a relationship between two variables, say, X and Y. The variables may be either dichotomous or continuous. Correlations can range from −1 (indicating a perfectly negative linear relationship) to +1 (indicating a perfectly positive linear relationship), while a correlation of 0 indicates that there is no linear relationship between the variables.
The correlation coefficient is probably the best known measure of effect size, although many who use it may not be aware that it is an effect size index. Like Cohen's d, the correlation coefficient is a standardized metric. Any effect reported in the form of r or one of its derivatives can be compared with any other. Two of the more common measures of association, the point-biserial correlation and the phi coefficient, are discussed below.
The point-biserial correlation coefficient (rpb) can be calculated from d using the following equation from Rosenthal (1984, p.25):
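\[ r_{pb} = \frac{d}{\sqrt{d^2 + 4}} \]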
However, if the groups being compared are unequal in size, a better equation is provided by Aaron, Kromrey and Ferron (1998):  
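\[ r_{pb} = \frac{d}{\sqrt{d^2 + \dfrac{N^2 - 2N}{n_1 n_2}}} \qquad \text{where } N = n_1 + n_2 \]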
The phi coefficient (φ, also written rφ) can be calculated from a chi-square statistic with one degree of freedom as follows (Friedman 1968, p.246):
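\[ \varphi = \sqrt{\frac{\chi^2}{N}} \]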
Occasionally one might want to calculate the strength of association (r) using the standard normal deviate (z). The equation for this comes from Rosenthal (1984, p.25):  
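\[ r = \frac{z}{\sqrt{N}} \]

The conversions in this section can likewise be sketched in a few lines of Python (the function names are mine, chosen for illustration):

```python
import math

def r_from_d_equal(d):
    # Point-biserial r from d for equal-sized groups (Rosenthal 1984, p.25)
    return d / math.sqrt(d ** 2 + 4)

def r_from_d_unequal(d, n1, n2):
    # Adjustment for unequal group sizes (Aaron, Kromrey and Ferron 1998)
    n = n1 + n2
    return d / math.sqrt(d ** 2 + (n ** 2 - 2 * n) / (n1 * n2))

def phi_from_chi2(chi2, n):
    # Phi coefficient from a one-degree-of-freedom chi-square
    # (Friedman 1968, p.246)
    return math.sqrt(chi2 / n)

def r_from_z(z, n):
    # r from the standard normal deviate z (Rosenthal 1984, p.25)
    return z / math.sqrt(n)
```

Note that the two r-from-d functions coincide only in the limit of large, equal groups: with n1 = n2 = n the correction term (N² − 2N)/(n1 n2) equals 4 − 4/n, which approaches the 4 in the simpler formula as n grows.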
Notes:  
1 Beware the inconsistent terminology. What is labeled here as g was labeled by Hedges and Olkin as d, and vice versa. For these authors, writing in the early 1980s, g was the mainstream effect size index developed by Cohen and refined by Glass (hence g for Glass). Since then g has become synonymous with Hedges' equation (not Glass's), and the reason it is called Hedges' g and not Hedges' h is that it was originally named after Glass, even though it was developed by Larry Hedges. Confused?
2 At least one online calculator calculates d using this second equation. This will work fine when group sizes are equal but will generate inaccurate estimates when they are not. In contrast, the effect size calculator used here generates accurate estimates in both cases.  
References  
Aaron, B., J.D. Kromrey, and J. Ferron (1998), "Equating r-based and d-based effect size indices: Problems with a commonly recommended formula," Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED 433353).
Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Hillsdale: Lawrence Erlbaum.
Friedman, H. (1968), "Magnitude of experimental effect and a table for its rapid estimation," Psychological Bulletin, 70(4): 245–251.
Glass, G.V., B. McGaw, and M.L. Smith (1981), Meta-Analysis in Social Research. Beverly Hills: Sage.
Hedges, L.V. (1981), "Distribution theory for Glass's estimator of effect size and related estimators," Journal of Educational Statistics, 6(2): 106–128.
Hedges, L.V. and I. Olkin (1985), Statistical Methods for Meta-Analysis. London: Academic Press.
Rosenthal, R. (1984), Meta-Analytic Procedures for Social Research. Newbury Park: Sage.
Rosenthal, R. and R.L. Rosnow (2008), Essentials of Behavioral Research: Methods and Data Analysis, 3rd Edition. New York: McGraw-Hill.
Links  
Click here to go to Paul Ellis’s effect size website  
Effect size calculators  
The Result Whacker  
Thresholds for interpreting effect sizes  
The Essential Guide to Effect Sizes  
How to cite this page: Ellis, P.D. (2009), "Effect size equations," website: [insert domain name here] accessed on [insert access date here].
Last updated: 7 Sept 2009 
