Abstract

Studies of diagnostic accuracy require more sophisticated methods for their meta-analysis than studies of therapeutic interventions. A number of different, and apparently divergent, methods for meta-analysis of diagnostic studies have been proposed, including two alternative approaches that are statistically rigorous and allow for between-study variability: the hierarchical summary receiver operating characteristic (ROC) model (Rutter and Gatsonis, 2001) and bivariate random-effects meta-analysis (van Houwelingen and others, 1993), (van Houwelingen and others, 2002), (Reitsma and others, 2005). We show that these two models are very closely related, and define the circumstances in which they are identical. We discuss the different forms of summary model output suggested by the two approaches, including summary ROC curves, summary points, confidence regions, and prediction regions.

1. INTRODUCTION

There is increasing interest in systematic reviews and meta-analyses of data from diagnostic accuracy studies (Deeks, 2001), (Deville and others, 2002), (Bossuyt and others, 2003), (Khan and others, 2003), (Whiting and others, 2004), (Tatsioni and others, 2005), (Gluud and Gluud, 2005). Typically, the data from each primary study are summarized as a 2×2 table, based on dichotomized test result against true disease status, from which familiar measures such as sensitivity and specificity can be derived.

Several statistical methods for meta-analysis of data from diagnostic test accuracy studies have been proposed (Moses and others, 1993), (Rutter and Gatsonis, 2001), (Dukic and Gatsonis, 2003), (Siadaty and Shu, 2004), (Reitsma and others, 2005). These methods reflect two important characteristics of such data. First, a negative correlation between sensitivity and specificity is expected because of the trade-off between these measures as the test threshold varies (Moses and others, 1993), (Deeks, 2001). Second, and in contrast to meta-analysis of data from randomized controlled trials, substantial between-study heterogeneity is to be expected and must be incorporated in the models (Lijmer and others, 2002). The inferential focus of these methods is also a matter of debate. Some authors propose estimating summary measures of sensitivity and specificity, or prediction regions within which we may expect the results of a future study to lie (Reitsma and others, 2005), while others suggest that in the presence of substantial heterogeneity, the results of meta-analyses should be presented as summary receiver operating characteristic (SROC) curves (Rutter and Gatsonis, 2001).

Littenberg and Moses (1993) (see also Moses and others, 1993) proposed a method of generating a SROC curve using simple linear regression that has been widely used. However, the assumptions of simple linear regression are not met, so the method is only approximate. There is also uncertainty as to the most appropriate weighting of the regression (Walter, 2002), (Rutter and Gatsonis, 2001).

Two statistically rigorous methods for the meta-analysis of data from diagnostic test accuracy studies have been proposed (Reitsma and others, 2005), (Rutter and Gatsonis, 2001) that overcome these problems but are necessarily more complex. In this paper, we review the characteristics of these methods. We show that although these have been discussed as alternative ways to analyze such data, they are equivalent in many circumstances and hence often lead to identical statistical inferences. Section 2 describes the bivariate model, while Section 3 describes the hierarchical summary receiver operating characteristic (HSROC) model. In Section 4, we explain the relationship between these two models. In Section 5, we discuss the different focus of inference and presentation of model estimates suggested by the two parameterizations. A worked example is presented in Section 6, and the implications of the work are discussed in Section 7.

2. THE BIVARIATE MODEL

The bivariate model is based on an approach to meta-analysis introduced by van Houwelingen and others (1993) (see also van Houwelingen and others, 2002). It has recently been applied to meta-analysis of diagnostic accuracy studies by Reitsma and others (2005).

Following Reitsma and others (2005), we define μAi as the logit-transformed sensitivity in study i, and μBi as the logit-transformed specificity. We use the letter μ where Reitsma and others (2005) used θ to avoid a clash of notation with the HSROC model defined in Section 3. The bivariate model is a random-effects model in which the logit transforms of the true sensitivity and true specificity in each study are assumed to have a bivariate normal distribution across studies, thereby allowing for the possibility of correlation between them (Reitsma and others, 2005):

$$\begin{pmatrix} \mu_{Ai} \\ \mu_{Bi} \end{pmatrix} \sim N\!\left( \begin{pmatrix} \mu_{A} \\ \mu_{B} \end{pmatrix}\!,\ \begin{pmatrix} \sigma_{A}^{2} & \sigma_{AB} \\ \sigma_{AB} & \sigma_{B}^{2} \end{pmatrix} \right). \tag{2.1}$$

Covariates that affect either sensitivity or specificity or both can be included in a natural way by replacing one or both of the means μA and μB by linear predictors in the covariates. For example, for a single covariate Z that may affect both sensitivity and specificity, we could replace μA by μA + νAZi and μB by μB + νBZi.

3. THE HSROC MODEL

The HSROC model (Rutter and Gatsonis, 2001) was motivated by a model for ordinal regression (McCullagh, 1980) that has been used to estimate a receiver operating characteristic (ROC) curve from a single study with data available for multiple thresholds (Tosteson and Begg, 1988). The model is formulated in terms of the probability πij that a patient in study i with disease status j has a positive test result, where j = 0 for a patient without the disease and j = 1 for a patient with the disease. In the usual terminology of diagnostic accuracy studies, πi1 is the true-positive rate or sensitivity in study i, while πi0 is the false-positive rate, equal to 1 − specificity. The HSROC model is defined by separate equations for within-study variation (Level I) and between-study variation (Level II). (The Bayesian formulation originally presented by Rutter and Gatsonis (2001) requires an additional third level specifying the priors for the model parameters.)

3.1 HSROC level I (within study) model

The Level I model for study i takes the form

$$\operatorname{logit}(\pi_{ij}) = (\theta_i + \alpha_i X_{ij}) \exp(-\beta X_{ij}), \tag{3.1}$$

where Xij is a dummy variable denoting the true disease status for a patient in study i with disease status j. Rutter and Gatsonis (2001) chose to code Xij = −1/2 for those without disease (j = 0) and Xij = +1/2 for those with disease (j = 1). Both θi and αi are allowed to vary between studies. Rutter and Gatsonis (2001) refer to the θi as “cutpoint parameters” or “positivity criteria,” as they model the trade-off between sensitivity and specificity in each study: true-positive rate (sensitivity) and false-positive rate (1 − specificity) both increase with increasing θi. The αi are “accuracy parameters,” as they measure the difference between true-positive and false-positive fractions in each study. When β = 0, the diagnostic odds ratio for each study does not depend on the cutpoint parameter θi, and αi is then the log of the diagnostic odds ratio. β is a “scale parameter” or “shape parameter” which models possible asymmetry in the ROC curve by allowing true-positive and false-positive fractions to increase at different rates as θi increases. When β ≠ 0, the diagnostic odds ratio varies with θi even if the accuracy parameter αi is held fixed. β is assumed to be constant across studies, although this assumption can be relaxed somewhat, for example to allow a different value of β in each of several groups of studies (Rutter and Gatsonis, 2001).
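To make the roles of θi, αi, and β concrete, the following minimal Python sketch (our own illustration, not the authors' code; the parameter values are arbitrary) evaluates the Level I model at the two values of Xij and shows how increasing θi raises both the true-positive and false-positive rates:

```python
import numpy as np

def hsroc_level1(theta_i, alpha_i, beta):
    """Sensitivity and specificity implied by the HSROC Level I model,
    logit(pi_ij) = (theta_i + alpha_i * X_ij) * exp(-beta * X_ij),
    with X_ij = +1/2 for diseased and -1/2 for non-diseased subjects."""
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    logit_tpr = (theta_i + 0.5 * alpha_i) * np.exp(-0.5 * beta)  # logit of pi_i1 (sensitivity)
    logit_fpr = (theta_i - 0.5 * alpha_i) * np.exp(+0.5 * beta)  # logit of pi_i0 (1 - specificity)
    return expit(logit_tpr), 1.0 - expit(logit_fpr)

# Illustrative values only: a more lenient threshold (larger theta) increases
# sensitivity and decreases specificity, tracing out an ROC curve.
for theta in (-1.0, 0.0, 1.0):
    sens, spec = hsroc_level1(theta, alpha_i=2.0, beta=0.0)
    print(f"theta={theta:+.1f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```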

3.2 HSROC level II (between study) model

Level II models the variation of the parameters θi and αi between studies. In the simplest case, θi and αi are assumed to have independent Normal distributions, with θi∼N(Θ,σθ2) and αi∼N(Λ,σα2). More generally, the means of the two distributions may be determined by linear functions of study-level covariates. For example, with a single covariate Z that affects both the cutpoint and accuracy parameters,

$$\theta_i \sim N(\Theta + \gamma Z_i,\ \sigma_\theta^2), \tag{3.2}$$
$$\alpha_i \sim N(\Lambda + \lambda Z_i,\ \sigma_\alpha^2), \tag{3.3}$$

where the coefficients γ and λ express the effect of the covariate Z on the cutpoint and accuracy parameters, respectively. This model may be extended to include more than one covariate, or to allow the covariates that affect the accuracy parameters to differ from those that affect the cutpoint parameters.

4. RELATIONS BETWEEN THE TWO MODELS

We now clarify the relationship between the bivariate and HSROC models. We shall start from the HSROC model. For brevity, first let b = exp(β/2). We can reexpress level I of the HSROC model by splitting (3.1) into separate equations for those with and without disease:

$$\operatorname{logit}(\pi_{i1}) = (\theta_i + \alpha_i/2)\, b^{-1}, \tag{4.1}$$
$$\operatorname{logit}(\pi_{i0}) = (\theta_i - \alpha_i/2)\, b. \tag{4.2}$$

The bivariate model is written in terms of μAi and μBi, the logit transforms of sensitivity and specificity in study i. In the notation introduced in Section 3, the sensitivity in study i is πi1 and the specificity is 1 − πi0, so

$$\mu_{Ai} = \operatorname{logit}(\pi_{i1}), \tag{4.3}$$
$$\mu_{Bi} = \operatorname{logit}(1 - \pi_{i0}) = -\operatorname{logit}(\pi_{i0}). \tag{4.4}$$

We can therefore relate the random variables that form the basis of the two models:

$$\mu_{Ai} = (\theta_i + \alpha_i/2)\, b^{-1}, \tag{4.5}$$
$$\mu_{Bi} = -(\theta_i - \alpha_i/2)\, b. \tag{4.6}$$

This pair of equations tells us that μAi and μBi are linear combinations of two random variables, θi and αi, which the HSROC model assumes to have independent normal distributions (conditional on any covariates). Any pair of linear combinations of independent normally distributed random variables has a bivariate normal distribution (see, e.g., Dudewicz and Mishra, 1988, p. 242). Therefore, the HSROC model implies that the joint distribution of μAi and μBi is bivariate normal. So the HSROC model is precisely equivalent to the bivariate model. We give explicit expressions for the relationships between their parameters in the subsections that follow.
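As a quick numerical illustration of this argument (a sketch with arbitrary parameter values, not taken from the paper), one can simulate independent normal cutpoint and accuracy parameters, map them to logit sensitivity and specificity via (4.5) and (4.6), and check that the induced covariance matrix, including its nonzero off-diagonal element, matches the closed form implied by the linear mapping:

```python
import numpy as np

rng = np.random.default_rng(1)
sd_theta, sd_alpha, beta = 0.4, 0.8, 0.5   # illustrative values only
b = np.exp(beta / 2)

theta = rng.normal(0.0, sd_theta, size=200_000)   # independent HSROC random effects
alpha = rng.normal(2.0, sd_alpha, size=200_000)
mu_A = (theta + alpha / 2) / b                    # (4.5): logit sensitivity
mu_B = -(theta - alpha / 2) * b                   # (4.6): logit specificity

empirical = np.cov(mu_A, mu_B)
implied = np.array([
    [(sd_theta**2 + sd_alpha**2 / 4) / b**2, sd_alpha**2 / 4 - sd_theta**2],
    [sd_alpha**2 / 4 - sd_theta**2, (sd_theta**2 + sd_alpha**2 / 4) * b**2],
])
print(np.round(empirical, 3))   # close to `implied`, up to Monte Carlo error
print(np.round(implied, 3))
```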

We can express the relationship more concisely using matrix notation. We may write (4.5) and (4.6) in a single matrix equation as

$$\begin{pmatrix} \mu_{Ai} \\ \mu_{Bi} \end{pmatrix} = \begin{pmatrix} b^{-1} & \tfrac{1}{2} b^{-1} \\ -b & \tfrac{1}{2} b \end{pmatrix} \begin{pmatrix} \theta_i \\ \alpha_i \end{pmatrix}. \tag{4.7}$$

Inverting this,

$$\begin{pmatrix} \theta_i \\ \alpha_i \end{pmatrix} = S \begin{pmatrix} \mu_{Ai} \\ \mu_{Bi} \end{pmatrix}, \qquad \text{where } S = \begin{pmatrix} \tfrac{1}{2} b & -\tfrac{1}{2} b^{-1} \\ b & b^{-1} \end{pmatrix}. \tag{4.8}$$

S is then the transformation matrix associated with the change from the bivariate model coordinates (logit-transformed sensitivity and specificity) to the HSROC model coordinates (cutpoint and accuracy parameters). Note that S is not orthogonal ($S^{-1} \neq S^{T}$). As illustrated in Section 6, it follows that when plotted in bivariate model space (logit-ROC space), the axes corresponding to the coordinates of the HSROC model are not perpendicular to each other.

4.1 Relation between parameters of models with no covariates

We can then express the relationship between the parameters of the two models without covariates in terms of the transformation matrix S by taking the expectation and variance of both sides of (4.8):

$$\begin{pmatrix} \Theta \\ \Lambda \end{pmatrix} = E\begin{pmatrix} \theta_i \\ \alpha_i \end{pmatrix} = S \begin{pmatrix} \mu_A \\ \mu_B \end{pmatrix}, \tag{4.9}$$
$$\begin{pmatrix} \sigma_\theta^2 & 0 \\ 0 & \sigma_\alpha^2 \end{pmatrix} = \operatorname{var}\begin{pmatrix} \theta_i \\ \alpha_i \end{pmatrix} = S \begin{pmatrix} \sigma_A^2 & \sigma_{AB} \\ \sigma_{AB} & \sigma_B^2 \end{pmatrix} S^{T}. \tag{4.10}$$

The assumption of the HSROC model that θi and αi are uncorrelated, i.e. that the off-diagonal elements above are zero, fixes the value of b and hence the transformation matrix S. So S is a non-orthogonal transformation that diagonalizes the variance–covariance matrix of the bivariate model. On expanding the right-hand side of (4.10), we find that the off-diagonal elements equal $\tfrac{1}{2}(b^{2}\sigma_A^{2} - b^{-2}\sigma_B^{2})$, which is zero if and only if $b^{2} = \sigma_B/\sigma_A$ or, equivalently,

$$\beta = \log(\sigma_B/\sigma_A). \tag{4.11}$$

Thus, the shape parameter (β) of the HSROC model is determined solely by the ratio of the variances of logit sensitivity and logit specificity in the bivariate model, and, perhaps surprisingly, is unrelated to their correlation. Equations (4.9) and (4.10) then allow us to relate the other parameters of the HSROC model to those of the bivariate model:

$$\Theta = \tfrac{1}{2}\left\{ (\sigma_B/\sigma_A)^{1/2}\,\mu_A - (\sigma_A/\sigma_B)^{1/2}\,\mu_B \right\}, \tag{4.12}$$
$$\Lambda = (\sigma_B/\sigma_A)^{1/2}\,\mu_A + (\sigma_A/\sigma_B)^{1/2}\,\mu_B, \tag{4.13}$$
$$\sigma_\theta^2 = \tfrac{1}{2}\left( \sigma_A \sigma_B - \sigma_{AB} \right), \tag{4.14}$$
$$\sigma_\alpha^2 = 2\left( \sigma_A \sigma_B + \sigma_{AB} \right). \tag{4.15}$$

We can also invert these equations to give the five parameters of the bivariate model in terms of those of the HSROC model:

$$\mu_A = (\Theta + \Lambda/2)\, b^{-1}, \tag{4.16}$$
$$\mu_B = -(\Theta - \Lambda/2)\, b, \tag{4.17}$$
$$\sigma_A^2 = \left( \sigma_\theta^2 + \tfrac{1}{4}\sigma_\alpha^2 \right) b^{-2}, \tag{4.18}$$
$$\sigma_B^2 = \left( \sigma_\theta^2 + \tfrac{1}{4}\sigma_\alpha^2 \right) b^{2}, \tag{4.19}$$
$$\sigma_{AB} = \tfrac{1}{4}\sigma_\alpha^2 - \sigma_\theta^2, \tag{4.20}$$

where b = exp(β/2), as defined above.
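The mappings in (4.11)–(4.20) are simple enough to code directly. The following Python sketch (our own, with hypothetical function names) implements both directions; applying one after the other returns the original parameters, which is a convenient check:

```python
import numpy as np

def hsroc_to_bivariate(Theta, Lambda, beta, var_theta, var_alpha):
    """Map HSROC parameters to bivariate-model parameters, following (4.16)-(4.20)."""
    b = np.exp(beta / 2)
    mu_A = (Theta + Lambda / 2) / b
    mu_B = -(Theta - Lambda / 2) * b
    var_A = (var_theta + var_alpha / 4) / b**2
    var_B = (var_theta + var_alpha / 4) * b**2
    cov_AB = var_alpha / 4 - var_theta
    return mu_A, mu_B, var_A, var_B, cov_AB

def bivariate_to_hsroc(mu_A, mu_B, var_A, var_B, cov_AB):
    """Map bivariate-model parameters to HSROC parameters, following (4.11)-(4.15)."""
    sd_A, sd_B = np.sqrt(var_A), np.sqrt(var_B)
    beta = np.log(sd_B / sd_A)                     # (4.11)
    b = np.exp(beta / 2)
    Theta = 0.5 * (b * mu_A - mu_B / b)            # (4.12)
    Lambda = b * mu_A + mu_B / b                   # (4.13)
    var_theta = 0.5 * (sd_A * sd_B - cov_AB)       # (4.14)
    var_alpha = 2.0 * (sd_A * sd_B + cov_AB)       # (4.15)
    return Theta, Lambda, beta, var_theta, var_alpha
```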

4.2 Relations between parameters of models with covariates

We now move on to examine the relationship between the models when covariates are included. If the bivariate model is extended to include a single covariate Z that affects both the sensitivity and specificity, (4.10) is unchanged, while (4.9) for the expectation of (4.8) becomes

$$E\begin{pmatrix} \theta_i \\ \alpha_i \end{pmatrix} = S \begin{pmatrix} \mu_A + \nu_A Z_i \\ \mu_B + \nu_B Z_i \end{pmatrix} = S \begin{pmatrix} \mu_A \\ \mu_B \end{pmatrix} + S \begin{pmatrix} \nu_A \\ \nu_B \end{pmatrix} Z_i. \tag{4.21}$$

This is of the form

$$E\begin{pmatrix} \theta_i \\ \alpha_i \end{pmatrix} = \begin{pmatrix} \Theta + \gamma Z_i \\ \Lambda + \lambda Z_i \end{pmatrix}, \qquad \text{where } \begin{pmatrix} \gamma \\ \lambda \end{pmatrix} = S \begin{pmatrix} \nu_A \\ \nu_B \end{pmatrix}. \tag{4.22}$$

The extension to more than one covariate, with each covariate affecting both accuracy and cutpoint parameters, is straightforward. Therefore, a bivariate model in which one or more covariates affect both sensitivity and specificity is equivalent to an HSROC model in which the same covariates are allowed to affect both accuracy and cutpoint parameters.

However, a bivariate model in which the covariates allowed to affect sensitivity differ from those allowed to affect specificity, or in which covariates are included for only sensitivity or only specificity, will not be equivalent to an HSROC model including covariates unless constraints are imposed on the relationship between the coefficients of the covariates in the HSROC model. The converse is also true.

5. FOCUS OF INFERENCE AND MODEL OUTPUTS

The two parameterizations make different forms of model output appear more natural; we consider each in turn.

5.1 HSROC model

The HSROC model gives rise to a SROC curve by allowing the threshold parameter θi to vary while holding the accuracy parameter αi fixed at its mean Λ. For the model without covariates, the expected sensitivity for a given specificity is then given by (Rutter and Gatsonis, 2001), (Macaskill, 2004)

$$\text{sensitivity} = \operatorname{logit}^{-1}\!\left\{ \Lambda\, e^{-\beta/2} - e^{-\beta} \operatorname{logit}(\text{specificity}) \right\}. \tag{5.1}$$

Rutter and Gatsonis (2001) suggest restricting the curve to the observed range of estimated specificities in order to discourage extrapolation beyond the data. If β = 0, the curve is symmetric about the “sensitivity = specificity” diagonal. This SROC curve does not depict the uncertainty in any of the parameter estimates, and it reflects between-study variability in threshold but not in accuracy.
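For readers who want to draw the curve, a minimal Python sketch of (5.1) follows (our own code; for illustration it uses the HSROC estimates Λ ≈ 2.19 and β ≈ 0.94 reported for the worked example in Table 1 of Section 6):

```python
import numpy as np

def sroc_sensitivity(Lambda, beta, specificity):
    """Expected sensitivity at a given specificity under the HSROC model, as in (5.1),
    obtained by fixing the accuracy parameter at its mean Lambda."""
    logit = lambda p: np.log(p / (1.0 - p))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(Lambda * np.exp(-beta / 2) - np.exp(-beta) * logit(specificity))

# Restrict to the observed range of specificities in practice, as suggested above.
spec_grid = np.linspace(0.55, 0.95, 9)
sens_grid = sroc_sensitivity(Lambda=2.1872, beta=0.9427, specificity=spec_grid)
```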

5.2 Bivariate model

As Reitsma and others (2005) suggest, confidence and prediction regions in ROC space can be constructed using the estimates from the bivariate model. As the estimates of mean logit sensitivity μA and mean logit specificity μB may be highly correlated, separate confidence intervals for these two parameters may be misleading. It is preferable to use an elliptical joint confidence region for both parameters. Such an ellipse is most easily generated using a parametric representation (Douglas, 1993):

$$\mu_A(t) = \hat\mu_A + c\, s_A \cos t, \tag{5.2}$$
$$\mu_B(t) = \hat\mu_B + c\, s_B \cos(t + \arccos r), \tag{5.3}$$

where sA and sB are the estimated standard errors of μ^A and μ^B, r is the estimate of their correlation, and varying t from 0 to 2π generates the boundary of the ellipse. The constant c has been called the boundary constant of the ellipse (Alexandersson, 2004); asymptotically, to give a 100(1 − α)% confidence region, $c = \sqrt{\chi^2_{2;\alpha}}$, where $\chi^2_{2;\alpha}$ is the upper 100α% point of the χ2 distribution with two degrees of freedom. When the number of studies is small, it may be preferable to use a more conservative approximate confidence region given by $c = \sqrt{2 f_{2,n-2;\alpha}}$, where n is the number of studies and $f_{2,n-2;\alpha}$ is the upper 100α% point of the F distribution with 2 and n − 2 degrees of freedom (Douglas, 1993), (Chew, 1966). Such an ellipse in logit-ROC space can then be back-transformed to conventional ROC space to give a confidence region for the summary operating point.
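A short Python sketch of this construction follows (our own code; the function name and arguments are ours). It evaluates (5.2) and (5.3) over a grid of t, chooses the boundary constant from either the chi-squared or the F distribution, and back-transforms the ellipse to conventional ROC space:

```python
import numpy as np
from scipy import stats

def confidence_ellipse(mu_A_hat, mu_B_hat, s_A, s_B, r, n_studies=None, level=0.95):
    """Boundary of a joint confidence region for (mu_A, mu_B), as in (5.2)-(5.3),
    returned as (sensitivity, specificity) pairs in conventional ROC space."""
    alpha = 1.0 - level
    if n_studies is None:
        c = np.sqrt(stats.chi2.ppf(1 - alpha, df=2))                # asymptotic constant
    else:
        c = np.sqrt(2 * stats.f.ppf(1 - alpha, 2, n_studies - 2))   # small-sample constant
    t = np.linspace(0, 2 * np.pi, 500)
    logit_sens = mu_A_hat + c * s_A * np.cos(t)                     # (5.2)
    logit_spec = mu_B_hat + c * s_B * np.cos(t + np.arccos(r))      # (5.3)
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(logit_sens), expit(logit_spec)
```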

It is also possible to construct a prediction region: a region that has a given probability (e.g. 95%) of including the “true” sensitivity and specificity of a future study. The covariance matrix for the true logit sensitivity and logit specificity in a future study is

$$\begin{pmatrix} \sigma_A^2 & \sigma_{AB} \\ \sigma_{AB} & \sigma_B^2 \end{pmatrix} + \operatorname{var}\begin{pmatrix} \hat\mu_A \\ \hat\mu_B \end{pmatrix}. \tag{5.4}$$

In practice, both terms must be estimated by fitting the model to the data. The parameters sA, sB, and r in (5.2) and (5.3) can then be replaced by the corresponding quantities derived from this covariance matrix to give the prediction ellipse in logit-ROC space. Again, this can be back-transformed to a prediction region for the true sensitivity and specificity of a future study in conventional ROC space.
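A hedged sketch of this step (our own illustration): the between-study covariance matrix below is taken from Table 1 in Section 6, while the covariance between the two estimated means is not reported there and is set to zero purely for illustration; in practice it would be taken from the fitted model. The resulting standard deviations and correlation can then be passed to the ellipse sketch above in place of the standard errors and correlation of the summary point.

```python
import numpy as np

Sigma_between = np.array([[0.1250, 0.0766],      # [[var_A, cov_AB],
                          [0.0766, 0.8233]])     #  [cov_AB, var_B]] from Table 1
var_mu_hat = np.array([[0.1545**2, 0.0],         # squared SEs of the summary means;
                       [0.0, 0.2505**2]])        # their covariance set to 0 here (illustration)

pred_cov = Sigma_between + var_mu_hat            # covariance matrix (5.4)
s_A, s_B = np.sqrt(np.diag(pred_cov))
r = pred_cov[0, 1] / (s_A * s_B)
```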

6. EXAMPLE: LYMPHANGIOGRAPHY FOR DIAGNOSIS OF LYMPH NODE METASTASIS

As an example, we apply both methods to data from 17 studies of lymphangiography for the diagnosis of lymph node metastasis in women with cervical cancer, one of three imaging techniques assessed in the meta-analysis of Scheidler and others (1997), a data set that has been much used as an example in methodological papers on diagnostic meta-analysis (Rutter and Gatsonis, 2001), (Macaskill, 2004), (Reitsma and others, 2005). A SROC plot showing the estimates of sensitivity and specificity from the individual studies is shown in Figure 1.

Fig. 1.

SROC plot for the data of Scheidler and others (1997) on lymphangiography for the diagnosis of lymph node metastasis in women with cervical cancer. The areas of the circles are proportional to the number of patients in each study.

We fitted both the bivariate and the HSROC models using the NLMIXED procedure in the statistical software package SAS (SAS Institute Inc., 2003), using code similar to that given by Macaskill (2004) and available from the authors on request. Note that our results differ slightly from those in Reitsma and others (2005) because they used empirical logit transforms and their standard errors, followed by the MIXED procedure in SAS, whereas we chose to model the binomial error structure directly using the NLMIXED procedure.

Table 1 shows the parameter estimates obtained for both models, and the result of applying (4.11)–(4.20) to transform estimates from the HSROC model to the corresponding parameters of the bivariate model and vice versa. The standard errors of the transformed estimates were computed by the delta method using the ESTIMATE statement of the NLMIXED procedure. As can be seen, the results are virtually identical. (The standard errors are identical in theory due to the close relationship between the delta method and maximum likelihood; Cox, 1998; Cox and Hinkley, 1974, Exercise 4.15.) By taking the inverse logit transforms of μA and μB, respectively, and assuming their estimates have a normal distribution, the summary estimate of sensitivity is found to be 0.67 (95% CI, 0.60–0.74) and that of specificity is 0.84 (95% CI, 0.76–0.89). In this example, σAB is estimated to be positive, though with large standard error. This implies a positive correlation between sensitivity and specificity across the studies, not the negative correlation that would be expected if the between-study heterogeneity was due mainly to variation in threshold.
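The agreement in Table 1 is easy to reproduce by hand. The short Python sketch below (our own code) applies (4.11)–(4.15) to the bivariate estimates as printed in the table; small differences in the last digit reflect rounding of the reported values.

```python
import numpy as np

mu_A, mu_B = 0.7266, 1.6390                 # bivariate estimates from Table 1
var_A, var_B, cov_AB = 0.1250, 0.8233, 0.0766
sd_A, sd_B = np.sqrt(var_A), np.sqrt(var_B)

beta = np.log(sd_B / sd_A)                  # (4.11) -> ~0.9427
b = np.exp(beta / 2)
Theta = 0.5 * (b * mu_A - mu_B / b)         # (4.12) -> ~0.0705
Lambda = b * mu_A + mu_B / b                # (4.13) -> ~2.1871
var_theta = 0.5 * (sd_A * sd_B - cov_AB)    # (4.14) -> ~0.1221
var_alpha = 2.0 * (sd_A * sd_B + cov_AB)    # (4.15) -> ~0.7948
```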

Table 1.

Results of fitting the bivariate and HSROC models to the lymphangiography data

Parameter   Estimate (SE) from bivariate model   Result of applying (4.16)–(4.20) to HSROC estimates below
μA          0.7266 (0.1545)                      0.7266 (0.1545)
μB          1.6390 (0.2505)                      1.6390 (0.2506)
σA2         0.1250 (0.1307)                      0.1250 (0.1307)
σB2         0.8233 (0.4056)                      0.8236 (0.4058)
σAB         0.0766 (0.1470)                      0.0765 (0.1470)

Parameter   Estimate (SE) from HSROC model       Result of applying (4.12)–(4.15) to bivariate estimates above
Θ           0.0706 (0.3271)                      0.0706 (0.3271)
Λ           2.1872 (0.3087)                      2.1871 (0.3087)
β           0.9427 (0.5764)                      0.9427 (0.5765)
σα2         0.7948 (0.5115)                      0.7947 (0.5115)
σθ2         0.1222 (0.1083)                      0.1221 (0.1083)

SE, standard error.


Figure 2 shows the 95% confidence region for the summary operating point and a 95% prediction region for the true operating point in a single future study, in both logit-transformed ROC space (left panel) and back-transformed to conventional ROC space (right panel). The prediction region covers a greater range of specificity than sensitivity, in contrast to the estimates from the separate studies shown in the SROC plot in Figure 1, which exhibit more variation in estimated sensitivity than specificity. This is because most of the studies had considerably more patients with negative than positive results on the reference test, leading to greater sampling variability in the estimates of sensitivity than specificity. The prediction region is for the true sensitivity and specificity in a future study, not the estimated values.

Fig. 2.

Summary points, lines, and regions in logit-transformed ROC space (left) and conventional ROC space (right). Filled circle: summary point. Solid line: SROC curve. Dotted line: boundary of the confidence region for the summary point. Dashed line: boundary of prediction region. The left-hand panel also shows the HSROC model coordinate axes in logit-transformed ROC space. Note that these axes do not align with the major or minor axes of the ellipse.


Also shown in Figure 2 is the SROC curve (a straight line in logit-transformed ROC space). Note that the SROC curve takes a conventional shape despite the positive estimate of the correlation between sensitivity and specificity. The left-hand panel also shows the HSROC coordinate axes in logit-transformed ROC space. Note that these axes do not align with the major or minor axes of the ellipse. The θ axis is parallel to the ROC curve, while its horizontal reflection is parallel to the α axis. The method of Littenberg and Moses (1993) (using unweighted linear regression) gives a curve similar to, but slightly above, the HSROC curve, as shown in Macaskill (2004).

7. DISCUSSION

We have shown that the HSROC model and the bivariate random-effects model for meta-analysis of diagnostic accuracy studies are very closely related, and in common situations identical. In the absence of study-level covariates, they are different parameterizations of the same model. The bivariate model allows inclusion of covariates that affect sensitivity or specificity or both, while the HSROC model allows covariates that affect accuracy or threshold parameters or both. An HSROC model that allows one or more covariates to affect both accuracy and threshold parameters is equivalent to a bivariate model that allows the same covariates to affect both sensitivity and specificity. However, the HSROC model can be more easily extended to include a covariate to affect the degree of asymmetry of the SROC curve.

The two parameterizations may differ in the options they offer for introducing greater model parsimony by dropping or combining parameters. The HSROC framework allows the analyst to drop the random effect for the accuracy parameter and assume it is fixed across all studies, so that only the threshold parameter varies between studies. This corresponds to perfect negative correlation between the logit transforms of sensitivity and specificity in the bivariate model (σAB = − σAσB). The confidence and prediction regions then collapse to lie along the SROC curve. The HSROC framework also allows the assumption of a symmetric SROC curve with constant diagnostic odds ratio by setting β = 0, which in the bivariate model corresponds to equal variances of logit sensitivity and logit specificity (σA2 = σB2). The ability to enforce such constraints on bivariate model parameters may vary between software packages. By contrast, it does not appear natural to set any of the parameters of the bivariate model to zero. One practical advantage of the bivariate model is that it can be fitted in a wider range of software, for example MLwiN, SAS, or the Stata package “gllamm” (Rabe-Hesketh and others, 2004), whereas the HSROC model is at present only estimable using WinBUGS or the NLMIXED procedure in SAS.

As we have seen, the different parameterizations of the HSROC and bivariate models arise from different ideas of the most appropriate meta-analytic summaries of the results of diagnostic test accuracy studies, and have primarily been used to produce these chosen summaries. The HSROC parameterization naturally leads to a SROC curve when the threshold parameter θ is allowed to vary between studies but the accuracy parameter α is fixed at its mean. This may be reasonable when there is little or no detectable heterogeneity in the accuracy parameter, i.e. σα2 is estimated to be close to zero, or when there is considerably greater variability in threshold than in accuracy. The bivariate model parameterization naturally leads to a summary operating point, i.e. a summary sensitivity and specificity, together with confidence intervals for each or a joint confidence region for both together. When there is a considerable degree of between-study heterogeneity, as is common in meta-analysis of diagnostic accuracy studies, a prediction region may be preferable to a confidence region.

In our example in Section 6, fitting both models to the same data gave near-identical results in agreement with the formulae derived in Section 4.1, when both models were fitted using the NLMIXED procedure in SAS. However, such close agreement may not always be found in practice, particularly if the models are fitted using different approaches in different software. Rutter and Gatsonis (2001) originally proposed fitting the HSROC model using a Bayesian Markov chain Monte-Carlo method. Unlike maximum likelihood estimates, Bayesian posterior means or medians are not invariant under nonlinear transformations such as those in Section 4.1. Reitsma and others (2005) fit the bivariate model using the MIXED procedure in SAS, which, unlike the NLMIXED procedure, requires first calculating empirical estimates of the logit transforms of sensitivity and specificity and their standard errors, treating the latter as fixed and approximating the within-study variability of the logits by a normal distribution. This approach is less computationally demanding but involves some degree of approximation when the study sizes are small. In addition, regardless of the method of estimation, there is typically little information on the covariance parameter σAB of the bivariate model unless there are many studies of reasonable size with considerable variation in sensitivity and specificity between them. Its estimation may therefore prove troublesome (R. Riley and others, in preparation).

Another reason for apparent discrepancies between results in previous publications is that when fitting models to the data of Scheidler and others (1997), authors have made different assumptions about the equality of parameters between the three imaging techniques assessed, of which for simplicity we have only considered one, lymphangiography, in the example here. Rutter and Gatsonis (2001) allowed all five parameters of the HSROC model to differ between the three imaging techniques. Macaskill (2004) assumed that the two variance parameters σα2 and σθ2 were the same while the other three parameters Λ, Θ, and β differed. Reitsma and others (2005) estimated a bivariate model in which the three variance–covariance parameters σA, σB, and σAB are the same for the three imaging techniques and the two location parameters μA and μB differ, thereby constraining the three SROC curves to have the same degree of asymmetry.

It may initially seem surprising that (4.11) for β, the shape parameter of the HSROC model, does not involve the covariance σAB of the bivariate model but only the ratio of the variances. In fact, σAB only enters (4.14) and (4.15) for the variances of the HSROC parameters. It follows that the equation for the SROC curve given by the HSROC model does not require this covariance. It is therefore possible to use (4.11), (4.13), and (5.1) to estimate the equation of the SROC curve from separate conventional “univariate” random-effects meta-analyses of logit-transformed sensitivity and logit-transformed specificity. These could be performed using any of the widely available packages for random-effects meta-analysis. The estimates obtained by such an approach will not be identical to those from the bivariate model, as marginal normality of two random variables does not imply that they have a bivariate normal distribution. However, if the bivariate normal model does hold, separate univariate analyses should give consistent estimates of the means and variances, with only a slight loss in efficiency (Riley and others, 2006). Separate univariate analyses may therefore provide an alternative to the method of Littenberg and Moses (1993) as a way of generating a SROC curve using widely available algorithms. Separate univariate analyses may also be useful in providing starting values for the iterative procedures required to fit either the bivariate or the HSROC model, which may aid convergence.
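A minimal Python sketch of this route follows (our own code; the summary means and between-study variances of logit sensitivity and logit specificity are assumed to come from any standard univariate random-effects routine):

```python
import numpy as np

def sroc_from_univariate(mu_A_hat, var_A_hat, mu_B_hat, var_B_hat, specificity):
    """SROC curve built from two separate univariate random-effects meta-analyses,
    combining (4.11), (4.13), and (5.1)."""
    beta = 0.5 * np.log(var_B_hat / var_A_hat)               # (4.11), written in terms of variances
    b = np.exp(beta / 2)
    Lambda = b * mu_A_hat + mu_B_hat / b                     # (4.13)
    logit = lambda p: np.log(p / (1.0 - p))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    return expit(Lambda / b - logit(specificity) / b**2)     # (5.1)
```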

There is empirical evidence that aspects of the design and conduct of diagnostic accuracy studies can lead to bias or increased variation in their results. Exploration of potential sources of heterogeneity is therefore a crucial component of systematic reviews of such studies. Sources of between-study heterogeneity may include differences in patient selection and clinical setting, disease severity, specifics of the index and reference tests, and interobserver variability (Lijmer and others, 1999), (Whiting and others, 2004). The expected effect of a covariate on test performance may lead to a preference for one of the two parameterizations implied by the HSROC and bivariate models. For example, “spectrum bias," in which the subjects included in a study are not representative of the patients who will receive the test in practice (Whiting and others, 2003), might be expected to affect test accuracy rather than threshold, and might therefore be most appropriately investigated using the HSROC approach. Conversely, between-study variation in disease severity will affect sensitivity but not specificity, leading to a preference for the bivariate approach. For most study characteristics, however, there are few a priori reasons to prefer one approach over the other; further empirical research on this issue is needed.

The methods explored in this paper assume that only summary data from each study are available in the form of a 2×2 table. Meta-analysis of individual patient data may offer particular advantages for diagnostic research (Khan and others, 2003). It would allow differences in patient spectra to be properly accounted for, and enable assessment of the additional information provided by a test above that already known from patient history and clinical examination. For test results that are originally numerical or ordered categorical, it would also capture within-study information about the ROC curve that is lost when a particular threshold is chosen and the results collapsed into a summary 2×2 table.

In summary, we have demonstrated that the HSROC and bivariate models are very closely related and often identical. The parameter estimates from either model can be used to produce a summary operating point, an SROC curve, confidence regions, or prediction regions. The choice between these parameterizations depends partly on the degrees of and reasons for between-study heterogeneity. Empirical evidence about this would be useful in guiding analysts.

We wish to acknowledge helpful discussions with Petra Macaskill. This work was supported by the MRC Health Services Research Collaboration. Jonathan J. Deeks is funded in part by a Senior Scientist in Evidence Synthesis Award from the UK Department of Health. Conflict of Interest: None declared.

References

Alexandersson A. Graphing confidence ellipses: an update of ellip for Stata 8. Stata Journal 2004;4:242–56.

Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, Lijmer JG, Moher D, Rennie D, de Vet HCW. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. British Medical Journal 2003;326:41–4.

Chew V. Confidence, prediction, and tolerance regions for the multivariate normal distribution. Journal of the American Statistical Association 1966;61:605–17.

Cox C. Delta method. In: Armitage P, Colton T, editors. Encyclopedia of Biostatistics, 1st edition. Chichester, UK: Wiley; 1998. pp. 1125–7.

Cox DR, Hinkley DV. Theoretical Statistics. London: Chapman and Hall; 1974.

Deeks JJ. Systematic reviews in health care: systematic reviews of evaluations of diagnostic and screening tests. British Medical Journal 2001;323:157–62.

Deville W, Buntinx F, Bouter L, Montori V, de Vet H, van der Windt D, Bezemer P. Conducting systematic reviews of diagnostic studies: didactic guidelines. BMC Medical Research Methodology 2002;2:9.

Douglas JB. Confidence regions for parameter pairs. American Statistician 1993;47:43–5.

Dudewicz EJ, Mishra SN. Modern Mathematical Statistics. New York: Wiley; 1988.

Dukic V, Gatsonis C. Meta-analysis of diagnostic test accuracy assessment studies with varying number of thresholds. Biometrics 2003;59:936–46.

Gluud C, Gluud LL. Evidence based diagnostics. British Medical Journal 2005;330:724–6.

Khan KS, Bachmann LM, ter Riet G. Systematic reviews with individual patient data meta-analysis to evaluate diagnostic tests. European Journal of Obstetrics & Gynecology and Reproductive Biology 2003;108:121–5.

Lijmer JG, Bossuyt PMM, Heisterkamp SH. Exploring sources of heterogeneity in systematic reviews of diagnostic tests. Statistics in Medicine 2002;21:1525–37.

Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JH, Bossuyt PM. Empirical evidence of design-related bias in studies of diagnostic tests. Journal of the American Medical Association 1999;282:1061–6.

Littenberg B, Moses LE. Estimating diagnostic accuracy from multiple conflicting reports: a new meta-analytic method. Medical Decision Making 1993;13:313–21.

Macaskill P. Empirical Bayes estimates generated in a hierarchical summary ROC analysis agreed closely with those of a full Bayesian analysis. Journal of Clinical Epidemiology 2004;57:925–32.

McCullagh P. Regression models for ordinal data. Journal of the Royal Statistical Society, Series B, Methodological 1980;42:109–42.

Moses LE, Shapiro D, Littenberg B. Combining independent studies of a diagnostic test into a summary ROC curve: data-analytic approaches and some additional considerations. Statistics in Medicine 1993;12:1293–316.

Rabe-Hesketh S, Pickles A, Skrondal A. GLLAMM manual. U.C. Berkeley Division of Biostatistics Working Paper Series; 2004.

Reitsma JB, Glas AS, Rutjes AWS, Scholten RJPM, Bossuyt PM, Zwinderman AH. Bivariate analysis of sensitivity and specificity produces informative summary measures in diagnostic reviews. Journal of Clinical Epidemiology 2005;58:982–90.

Riley RD, Abrams KR, Lambert P, Sutton AJ, Thompson JR. An evaluation of bivariate random-effects meta-analysis for the joint synthesis of two correlated outcomes. Statistics in Medicine 2006; doi:10.1002/sim.2524.

Rutter CM, Gatsonis CA. A hierarchical regression approach to meta-analysis of diagnostic test accuracy evaluations. Statistics in Medicine 2001;20:2865–84.

SAS Institute Inc. The SAS System for Windows, Version 9.1. Cary, NC: SAS Institute Inc.; 2003.

Scheidler J, Hricak H, Yu KK, Subak L, Segal MR. Radiological evaluation of lymph node metastases in patients with cervical cancer: a meta-analysis. Journal of the American Medical Association 1997;278:1096–101.

Siadaty M, Shu J. Proportional odds ratio model for comparison of diagnostic tests in meta-analysis. BMC Medical Research Methodology 2004;4:27.

Tatsioni A, Zarin DA, Aronson N, Samson DJ, Flamm CR, Schmid C, Lau J. Challenges in systematic reviews of diagnostic technologies. Annals of Internal Medicine 2005;142:1048–55.

Tosteson AN, Begg CB. A general regression methodology for ROC curve estimation. Medical Decision Making 1988;8:204–15.

van Houwelingen H, Arends LR, Stijnen T. Advanced methods in meta-analysis: multivariate approach and meta-regression. Statistics in Medicine 2002;21:589–624.

van Houwelingen HC, Zwinderman KH, Stijnen T. A bivariate approach to meta-analysis. Statistics in Medicine 1993;12:2273–84.

Walter SD. Properties of the summary receiver operating characteristic (SROC) curve for diagnostic test data. Statistics in Medicine 2002;21:1237–56.

Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Medical Research Methodology 2003;3:25.

Whiting P, Rutjes AWS, Reitsma JB, Glas AS, Bossuyt PMM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Annals of Internal Medicine 2004;140:189–202.