help conovertest
------------------------------------------------------------------------------------------------------------------------------------
Title
conovertest -- Conover-Iman test of multiple comparisons using rank sums
Syntax
conovertest varname [if] [in] , by(groupvar) [ma(method) nokwallis nolabel wrap list rmc level(#) altp]
conovertest options Description
------------------------------------------------------------------------------------------------------------------------------
Main
by(groupvar) variable defining the k groups. Missing observations in groupvar are ignored.
ma(method) which method to adjust for multiple comparisons
nokwallis suppress Kruskal-Wallis test output
nolabel display data values, rather than data value labels
wrap do not break wide tables
list include results of the Conover-Iman test in a list format.
rmc report the row mean rank minus the column mean rank, rather than column minus row
level(#) set confidence level; default is level(95)
altp use alternative expression of p-values
------------------------------------------------------------------------------------------------------------------------------
Missing observations in varname are ignored.
Description
conovertest reports the results of the Conover-Iman test (Conover & Iman, 1979; Conover, 1999) for stochastic dominance among
multiple pairwise comparisons following rejection of a Kruskal-Wallis test for stochastic dominance among k groups (Kruskal
and Wallis, 1952) using kwallis. The Conover-Iman test is akin to Dunn's test (1964), but its statistic is derived from the
Kruskal-Wallis test statistic and is referred to the t distribution, rather than the z distribution used by Dunn's test
statistic; it can therefore provide much greater statistical power than Dunn's test. The interpretation of stochastic
dominance requires an assumption
that the CDF of one group does not cross the CDF of the other. conovertest performs m = k(k-1)/2 multiple pairwise
comparisons. The null hypothesis in each pairwise comparison is that the probability of observing a random value in the first
group that is larger than a random value in the second group equals one half; this null hypothesis corresponds to that of the
Wilcoxon-Mann-Whitney rank-sum test. Like the rank-sum test, if the data can be assumed to be continuous, and the
distributions are assumed identical except for a shift in centrality, the Conover-Iman test may be understood as a test for
median difference. In the syntax diagram above, varname refers to the variable recording the outcome, and groupvar refers to
the variable denoting the population. conovertest accounts for tied ranks. by() is required.
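As a cross-check of the definitions above, the unadjusted statistics can be sketched outside Stata. The following is a minimal Python illustration following the formulas in Conover (1999), using SciPy's tie-corrected Kruskal-Wallis statistic and N - k degrees of freedom; it is a sketch, not the conovertest implementation:

```python
import numpy as np
from scipy import stats

def conover_iman(*groups):
    """Unadjusted Conover-Iman t statistics and p = P(T >= |t|) per pair."""
    data = np.concatenate(groups)
    n = np.array([len(g) for g in groups])
    N, k = data.size, len(groups)
    ranks = stats.rankdata(data)                 # midranks account for ties
    mean_r = [r.mean() for r in np.split(ranks, np.cumsum(n)[:-1])]
    H = stats.kruskal(*groups).statistic         # tie-corrected KW statistic
    S2 = (np.sum(ranks ** 2) - N * (N + 1) ** 2 / 4) / (N - 1)  # rank variance
    res = {}
    for i in range(k):
        for j in range(i + 1, k):
            se = np.sqrt(S2 * (N - 1 - H) / (N - k) * (1 / n[i] + 1 / n[j]))
            t = (mean_r[i] - mean_r[j]) / se
            res[(i, j)] = (t, stats.t.sf(abs(t), N - k))  # one-sided p-value
    return res
```

With k = 3 groups this performs m = 3(3-1)/2 = 3 pairwise comparisons, one entry per pair.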
conovertest outputs both the t test statistic for each pairwise comparison (corresponding to the column mean rank minus the
row mean rank, unless the rmc option is used) and the p-value = P(T >= |t|) for each. Reject Ho based on p <= alpha/2 (in
combination with p-value ordering for stepwise ma options). If you prefer to work with p-values expressed as
p = P(|T| >= |t|), use the altp option, and reject Ho based on p <= alpha (in combination with p-value ordering for stepwise
ma options). These are exactly equivalent rejection decisions.
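The equivalence of the two rejection rules can be checked numerically; the p-value below is hypothetical, chosen only for illustration:

```python
# default expression: p = P(T >= |t|), rejected when p <= alpha/2
p, alpha = 0.0137, 0.05                      # hypothetical one-sided p-value
p_alt = min(2 * p, 1.0)                      # altp expression: p = P(|T| >= |t|)
assert (p <= alpha / 2) == (p_alt <= alpha)  # the two rules always agree
```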
Options
by(groupvar) is required. It specifies a variable that identifies the groups.
ma(method) specifies the method of adjustment used for multiple comparisons, and must take one of the following values:
none, bonferroni, sidak, holm, hs, hochberg, bh, or by. none is the default method assumed if the ma option is omitted.
These methods perform as follows:
none specifies no adjustment for multiple comparisons be made.
bonferroni specifies a "Bonferroni adjustment" where the family-wise error rate (FWER) is adjusted by multiplying the
p-values in each pairwise test by m (the total number of pairwise tests) as per Dunn (1961). conovertest will report a
maximum Bonferroni-adjusted p-value of 1. Those comparisons rejected with this method at the alpha level (two-sided test)
are underlined in the output table, and starred in the list using the list option.
sidak specifies a "Sidák adjustment" where the FWER is adjusted by replacing the p-value of each pairwise test with 1 - (1 -
p)^m as per Sidák (1967). conovertest will report a maximum Sidák-adjusted p-value of 1. Those comparisons rejected with
this method at the alpha level (two-sided test) are underlined in the output table, and starred in the list when using the
list option.
holm specifies a "Holm adjustment" where the FWER is adjusted sequentially by adjusting the p-values of each pairwise test
as ordered from smallest to largest with p(m+1-i), where i is the position in the ordering as per Holm (1979). conovertest
will report a maximum Holm-adjusted p-value of 1. Because in sequential tests the decision to reject the null hypothesis
depends both on the p-values and their ordering, those comparisons rejected with this method at the alpha level (two-sided
test) are underlined in the output table, and starred in the list when using the list option.
hs specifies a "Holm-Sidák adjustment" where the FWER is adjusted sequentially by adjusting the p-values of each pairwise
test as ordered from smallest to largest with 1 - (1 - p)^(m+1-i), where i is the position in the ordering (see Holm, 1979).
conovertest will report a maximum Holm-Sidák-adjusted p-value of 1. Because in sequential tests the decision to reject the
null hypothesis depends both on the p-values and their ordering, those comparisons rejected with this method at the alpha
level (two-sided test) are underlined in the output table, and starred in the list when using the list option.
hochberg specifies a "Hochberg adjustment" where the FWER is adjusted sequentially by adjusting the p-values of each
pairwise test as ordered from largest to smallest with p*i, where i is the position in the ordering as per Hochberg (1988).
conovertest will report a maximum Hochberg-adjusted p-value of 1. Because in sequential tests the decision to reject the
null hypothesis depends both on the p-values and their ordering, those comparisons rejected with this method at the alpha
level (two-sided test) are underlined in the output table, and starred in the list when using the list option.
bh specifies a "Benjamini-Hochberg adjustment" where the false discovery rate (FDR) is adjusted sequentially by adjusting
the p-values of each pairwise test as ordered from largest to smallest with p[m/(m+1-i)], where i is the position in the
ordering (see Benjamini & Hochberg, 1995). conovertest will report a maximum Benjamini-Hochberg-adjusted p-value of 1.
Such FDR-adjusted p-values are sometimes referred to as q-values in the literature. Because in sequential tests the decision
to reject the null hypothesis depends both on the p-values and their ordering, those comparisons rejected with this method
at the alpha level (two-sided test) are underlined in the output table, and starred in the list when using the list option.
by specifies a "Benjamini-Yekutieli adjustment" where the false discovery rate (FDR) is adjusted sequentially by adjusting
the p-values of each pairwise test as ordered from largest to smallest with p[m/(m+1-i)]C, where i is the position in the
ordering, and C = 1 + 1/2 + ... + 1/m (see Benjamini & Yekutieli, 2001). conovertest will report a maximum
Benjamini-Yekutieli-adjusted p-value of 1. Such FDR-adjusted p-values are sometimes referred to as q-values in the
literature. Because in sequential tests the decision to reject the null hypothesis depends both on the p-values and their
ordering, those comparisons rejected with this method at the alpha level (two-sided test) are underlined in the output
table, and starred in the list when using the list option.
nokwallis suppresses the display of the Kruskal-Wallis test table.
nolabel causes the actual data codes to be displayed rather than the value labels in the Conover-Iman test tables.
wrap requests that conovertest not break up wide tables to make them readable.
list requests that conovertest also provide output in list form, one pairwise test per line.
rmc requests that conovertest report the t statistic based on the mean rank of the row variable minus the mean rank of the column
variable. The default is to report the mean rank of the column variable minus the mean rank of the row variable. The
difference between these two is simply the sign of the t statistic.
level(#) specifies the confidence level, as a percentage; # = 100(1 - alpha). The default, level(95) (or as set by set
level), corresponds to alpha = 0.05.
altp directs conovertest to express p-values in an alternative format. The default is to express p = P(T >= |t|), and reject
Ho if p <= alpha/2. When the altp option is used, p-values are instead expressed as p = P(|T| >= |t|), and Ho is rejected if
p <= alpha. These two expressions give identical test results; use of altp is therefore merely a semantic choice.
Example
Setup
. webuse census
Test for equal median age across the four regions simultaneously
. kwallis medage, by(region)
Conover-Iman multiple-comparison test for stochastic dominance using a Bonferroni correction
. conovertest medage, by(region) ma(bonferroni) nokwallis
Saved results
conovertest saves the following in r():
Scalars
r(df) degrees of freedom for the Kruskal-Wallis test
r(chi2_adj) chi-squared adjusted for ties for the Kruskal-Wallis test
Matrices
r(Z) vector of Conover-Iman t test statistics
r(P) vector of adjusted p-values for Conover-Iman t test statistics, --OR--
r(altP) vector of adjusted p-values for Conover-Iman t test statistics when using the altp option
Author
Alexis Dinno
Portland State University
alexis.dinno@pdx.edu
Please contact me with any questions, bug reports or suggestions for improvement. Fixing bugs will be facilitated by sending
along:
(1) a copy of the data (de-labeled or anonymized is fine),
(2) a copy of the command used, and
(3) a copy of the exact output of the command.
Suggested citation
Dinno A. 2017. conovertest: Conover-Iman test of multiple comparisons using rank sums. Stata software package. URL:
https://alexisdinno.com/stata/conovertest.html
References
Benjamini, Y. and Hochberg, Y. 1995. Controlling the False Discovery Rate: A Practical and Powerful Approach to Multiple
Testing. Journal of the Royal Statistical Society. Series B (Methodological). 57: 289-300.
Benjamini, Y. and Yekutieli, D. 2001. The control of the false discovery rate in multiple testing under dependency. Annals of
Statistics. 29: 1165-1188.
Conover, W. J. and Iman, R. L. 1979. On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific
Laboratory.
Conover, W. J. 1999. Practical Nonparametric Statistics. Wiley, Hoboken, NJ, 3rd edition.
Dunn, O. J. 1961. Multiple comparisons among means. Journal of the American Statistical Association. 56: 52-64.
Dunn, O. J. 1964. Multiple comparisons using rank sums. Technometrics. 6: 241-252.
Hochberg, Y. 1988. A sharper Bonferroni procedure for multiple tests of significance. Biometrika. 75: 800-802.
Holm, S. 1979. A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics. 6: 65-70.
Kruskal, W. H. and Wallis, W. A. 1952. Use of ranks in one-criterion variance analysis. Journal of the American Statistical
Association. 47: 583-621.
Sidák, Z. 1967. Rectangular confidence regions for the means of multivariate normal distributions. Journal of the American
Statistical Association. 62: 626-633.
Also See
Help: kwallis, ranksum, dunntest