Abstract

There has recently been considerable interest in establishing relationships between environmental variables and annual recruitment to fish stocks. Such relationships have the potential to reduce the uncertainty in the assessment of the stocks. When many environmental variables are considered, it is easy to draw conclusions that exaggerate the ability to predict recruitment. One technique to protect against this is cross-validation. This technique has usually been incorrectly applied, in that it has not included predictor screening (the selection from a large set of potential predictors of a smaller set to use in prediction). A simulation experiment is used to show that this omission can cause chance correlations to be wrongly identified as useful, and the reliability of useful predictors to be overestimated. It also shows that the mistaken use of chance correlations to predict recruitment can be worse than the use of the default predictor (the mean of previous recruitments), and that our ability to measure the reliability of recruitment predictors is typically poor.

Introduction

In recent decades there has been considerable interest in finding correlations between environmental variables and annual recruitment to fish stocks. Once such correlations have been established, they can both increase the accuracy of estimates of recruitment for year classes that have already entered the fishery, and allow prediction for those that will recruit in the near future. This can reduce the uncertainty in the assessment of stocks. Despite a great deal of research, progress in this field appears to have been rather limited. Although apparent environment–recruitment relationships have been found for many fish stocks, a high proportion of these have not been verified by subsequent testing (Myers, 1998).

There are two explanations for these failed verifications. It could be that a claimed environment–recruitment relationship was once reliable, but is no longer so because of some change in the dynamics relating the fish stock to its environment. Alternatively, perhaps the relationship was never reliable in the first place. This paper considers the second possibility. However, it ignores the many studies in which no attempt was made to validate the environment–recruitment relationships that were identified. Given short time-series of recruitment and sufficiently many environmental variables, it is not difficult to find chance correlations that will disappear when more observations become available. It is therefore not surprising when these unvalidated relationships subsequently prove false. The focus of the present study is the misapplication of a common method of validating environment–recruitment relationships. This misapplication can lead to the mistaken validation of chance correlations, and an exaggeration of the strength of true correlations. I also show that the mistaken use of an invalid environment–recruitment relationship can be worse than using none.

The most common technique for measuring the reliability of these relationships is cross-validation (Mosteller and Tukey, 1977). This is a repeated leave-one-out procedure. For each year in the data set, an environment–recruitment relationship is constructed after excluding all data for that year, and this relationship is used to estimate the recruitment in that year. A reliability score, often the correlation, r, or its square, r², is constructed to compare these estimates with the actual recruitments (e.g. Dippner, 1997; Williams and Quinn, 2000; Beentjes and Renwick, 2001; Bull and Livingston, 2001).

Cross-validation has some validity because it mimics a real-world process. At the ith step of the cross-validation, we first construct a recruitment predictor using data from a set of years (all years except the ith), then apply that predictor to a year outside that set (the ith year). The same construct-then-apply sequence happens in the real world. We generally construct a predictor using all data at hand, then expect to apply this predictor in the future, when predictor data become available for years outside the set originally considered. Cross-validation is carried out after the construction of a real-world predictor, but before its application to new data. It can give us a measure of the likely reliability of this application only if it faithfully mimics all stages in the construction of the real-world predictor. If it omits any stage, it will be, to some extent, defective.

One important stage in the construction of many recruitment predictors is predictor screening. The first stage is to assemble a set of potential predictors. This set may be quite large (n > 100) because of the ability to create many predictors from a single environmental variable (such as sea surface temperature) by averaging over different time periods (e.g. one predictor for each of the four seasons) or areas, and considering a range of time lags. The second stage, predictor screening, is to select a (usually small) subset of these predictors to be used in predicting the recruitment of the fish stock of interest. The screening procedure may be simple – e.g. stepwise regression (Megrey et al., 1995; Bull and Livingston, 2001) or principal components analysis (Daskalov, 1999) – or multi-stage and complex (e.g. Ueta et al., 1999; Williams and Quinn, 2000).
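To make concrete how quickly one environmental variable can multiply into a large candidate set, the sketch below builds one candidate predictor per season × lag combination from a single (hypothetical) monthly sea surface temperature series. The season boundaries and lags are illustrative only, not those used in the study.

```python
import random

random.seed(0)

# Hypothetical data: 20 years of monthly sea surface temperatures.
n_years = 20
monthly_sst = [[14 + random.gauss(0, 1) for _ in range(12)] for _ in range(n_years)]

# Illustrative season boundaries (month indices) and time lags (years).
seasons = {"summer": (0, 3), "autumn": (3, 6), "winter": (6, 9), "spring": (9, 12)}
lags = [0, 1, 2]

candidates = {}
for name, (a, b) in seasons.items():
    # One seasonal mean per year.
    seasonal = [sum(months[a:b]) / (b - a) for months in monthly_sst]
    for lag in lags:
        # Truncate so that year t of recruitment can be paired with the
        # seasonal mean observed `lag` years earlier.
        candidates[(name, lag)] = seasonal[: n_years - lag]

# A single variable has already produced 4 seasons x 3 lags = 12 candidates.
print(len(candidates))
```

With a handful of variables, a few averaging regions, and a few lags, a set of 48 or more candidates (as in Table 1) arises naturally.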

In this paper I demonstrate, using a simple simulation experiment, that omitting predictor screening from a cross-validation can result in serious errors. Chance correlations may be wrongly identified as useful for recruitment predictions, and the reliability of true recruitment–environment relationships may be substantially overestimated. This problem does not seem to be widely understood. I found five environment–recruitment studies using cross-validation. Of these, only one (Beentjes and Renwick, 2001) included predictor screening in the cross-validation.

Material and methods

Predictors

The 48 predictors used in the simulation experiment were updated versions of those used by Bull and Livingston (2001) in attempting to predict the year-class strength (YCS) of the western stock of hoki (Macruronus novaezelandiae) in New Zealand (Table 1). These were all derived from seasonal climate variables for the winter spawning season and the adjacent two seasons (spring and autumn). The weather-pattern predictors derive from a cluster analysis by Kidson (2000), who classified the air circulation in the New Zealand region into 12 patterns. Each predictor of this type is the proportion of days that one of these patterns occurred within a specified season. For the present study, all predictors were updated and extended from the period 1980–1996 (used by Bull and Livingston, 2001) to the years 1978–2000. The additional mixed-layer depth predictor used in the earlier study (but not found to be useful) was not used here because it could not be extended back to 1978.

Table 1

Description of predictors used in the simulation experiment. The same three seasons (autumn, winter, and spring) were represented in all variables.

Variable                              Number of predictors   Details
Southern oscillation index (SOI)      3                      3 seasons
Weather patterns                      36                     12 patterns × 3 seasons
Mean sea surface temperature (SST)    3                      3 seasons
Mean windspeed                        6                      2 directions (NW, SE) × 3 seasons
All                                   48

Recruitment vectors

Two sets of 20 vectors of recruitment were simulated, each vector covering the 23-year period 1978–2000 (the term recruitment is used here for simplicity; what was estimated in the original study was actually log(YCS), but this is just a transformation of recruitment). The first set, which represented the possibility that recruitment was random, and thus unrelated to our climate predictors, was generated from a standard normal distribution. The second set, which represented the possibility of strongly climate-driven recruitment, was based on the regression equation estimated by Bull and Livingston (2001) and used the regression coefficients given in their Table 5. This equation used five predictors: autumn SOI, winter SST, and three weather-pattern predictors. When the equation was applied to the predictors for the years 1978–2000, it produced a recruitment vector with variance 0.73. To this was added another vector of the same length from a normal distribution with zero mean and variance 0.73/4. This created a simulated recruitment vector with total variance 0.91 (=0.73 + 0.73/4), 80% of which comes from climate variation. The simulation procedure was repeated 19 times to create 20 such vectors.
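The construction of one climate-driven vector can be sketched as follows. The climate signal here is a random stand-in: the study derived it from the five-predictor regression of Bull and Livingston (2001), whose coefficients are not reproduced here. What the sketch preserves is the variance arithmetic (signal variance 0.73, noise variance 0.73/4, so climate accounts for 80% of the total).

```python
import random

random.seed(1)
n_years = 23  # 1978-2000

# Stand-in for the regression-derived climate component, variance 0.73.
signal_var = 0.73
signal = [random.gauss(0, signal_var ** 0.5) for _ in range(n_years)]

# Noise with a quarter of the signal variance, so that climate accounts
# for 0.73 / (0.73 + 0.73/4) = 80% of the total variance of about 0.91.
noise_var = signal_var / 4
recruitment = [s + random.gauss(0, noise_var ** 0.5) for s in signal]

climate_fraction = signal_var / (signal_var + noise_var)
print(round(climate_fraction, 2))  # 0.8
```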

Cross-validation

For each simulated recruitment vector, two types of cross-validation – full and partial – were carried out. Full cross-validation repeated the following four steps iteratively for each of the 23 years in the data set.

  1. Drop all data for year i from the predictors and recruitment vector.

  2. Use forward stepwise regression to select the best predictors from the set of 48 potential predictors in Table 1 .

  3. Calculate a regression equation using these best predictors.

  4. Use this equation to estimate the recruitment in year i .

It can happen that no predictor is selected at step 2. In this case, the estimate at step 4 was what I call the default estimate: the mean of the recruitments in all years except year i.

For partial cross-validation, the predictor screening occurred before, rather than as part of, the iterative procedure. That is, a single set of best predictors was selected (by forward stepwise regression) using the whole data set, and step 2 was omitted from the iterative procedure. Thus, the same set of best predictors was used at step 3 for each iteration of the partial cross-validation.
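The two procedures can be sketched minimally as follows. For brevity the sketch replaces the study's forward stepwise regression with a much simpler screen (pick the single candidate most correlated with recruitment), so it illustrates the structure of full versus partial cross-validation rather than the exact method used.

```python
import random
import statistics

def corr(a, b):
    # Pearson correlation of two equal-length sequences.
    ma, mb = statistics.mean(a), statistics.mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def screen(predictors, y):
    # Stand-in for the study's stepwise regression: select the single
    # candidate most correlated (in absolute value) with y.
    return max(range(len(predictors)), key=lambda j: abs(corr(predictors[j], y)))

def fit_predict(x_train, y_train, x_new):
    # Least-squares line y = a + b x, evaluated at x_new.
    mx, my = statistics.mean(x_train), statistics.mean(y_train)
    b = sum((x - mx) * (yv - my) for x, yv in zip(x_train, y_train)) / sum(
        (x - mx) ** 2 for x in x_train)
    return my + b * (x_new - mx)

def cv_mse(predictors, y, full=True):
    # Leave-one-out cross-validated MSE of the regression estimator.
    n = len(y)
    if not full:
        j = screen(predictors, y)  # partial CV: screen once, on ALL the data
    errors = []
    for i in range(n):
        keep = [k for k in range(n) if k != i]
        y_tr = [y[k] for k in keep]
        if full:
            # Full CV: repeat the screening with year i excluded.
            j = screen([[p[k] for k in keep] for p in predictors], y_tr)
        x_tr = [predictors[j][k] for k in keep]
        errors.append((fit_predict(x_tr, y_tr, predictors[j][i]) - y[i]) ** 2)
    return statistics.mean(errors)

random.seed(42)
n_years, n_candidates = 23, 48
preds = [[random.gauss(0, 1) for _ in range(n_years)] for _ in range(n_candidates)]
y = [random.gauss(0, 1) for _ in range(n_years)]  # purely random "recruitment"

# On such random data, partial CV typically reports a much smaller
# (more optimistic) error than full CV.
print(cv_mse(preds, y, full=False), cv_mse(preds, y, full=True))
```

The only difference between the two settings is where `screen` is called: inside the loop (full) or once, before it (partial).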

In the stepwise regression (used at step 2), a rule is needed to decide when to stop adding new predictors. The rule used in this study was the default one in version 3.4 of S-PLUS, which is based on the Cp statistic (Mallows, 1973). Francis et al. (2005) give details of this rule, as well as an analysis of the effect of using a different rule.
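The statistic behind that stopping rule is itself simple to compute; a minimal sketch follows (the S-PLUS rule adds bookkeeping around it that is not reproduced here, and the function name is my own):

```python
def mallows_cp(sse_p, s2_full, n, p):
    # Mallows' Cp for a submodel whose residual sum of squares is sse_p and
    # which fits p parameters (including the intercept); s2_full is the
    # error-variance estimate from the model with all predictors.
    # Submodels with Cp close to (or below) p are the favoured candidates.
    return sse_p / s2_full - n + 2 * p

# Sanity check: for the full model itself, SSE = s2_full * (n - p),
# so Cp reduces to p.
n, p_full, s2_full = 23, 6, 0.2
print(mallows_cp(s2_full * (n - p_full), s2_full, n, p_full))
```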

A per cent variance explained (PVE) statistic was calculated for each simulated recruitment vector and each type of cross-validation. This statistic measures how much better a regression-based estimator of recruitment would be than the default estimator. Thus,

PVE = 100 (1 − MSE_regression / MSE_default)

where the MSE (mean square error) of an estimator is given by MSE = (1/23) Σᵢ (r̂ᵢ − rᵢ)², and rᵢ and r̂ᵢ are the true (simulated) and estimated recruitments, respectively, in year i. For MSE_regression, r̂ᵢ is the estimate calculated at step 4 above; for MSE_default, r̂ᵢ is the mean of the recruitments in all years except year i.

Note that PVE is a generalization of the r² statistic commonly used to measure the predictive power of a regression equation. The two statistics are identical (assuming r² is expressed as a percentage) if they are calculated directly from the regression equation based on all years' data (i.e. without cross-validation).
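As a sketch (the function name is my own), the statistic can be computed from the cross-validated estimates like so:

```python
import statistics

def pve(actual, estimated):
    """Per cent variance explained: how much better `estimated` predicts
    `actual` than the default estimator (the leave-one-out mean)."""
    n = len(actual)
    mse_reg = sum((e - a) ** 2 for e, a in zip(estimated, actual)) / n
    defaults = [statistics.mean(actual[:i] + actual[i + 1:]) for i in range(n)]
    mse_def = sum((d - a) ** 2 for d, a in zip(defaults, actual)) / n
    return 100 * (1 - mse_reg / mse_def)

y = [1.2, -0.5, 0.3, 0.8, -1.1]
print(pve(y, y))  # a perfect predictor gives PVE = 100
# Feeding in the default estimates themselves gives PVE = 0,
# and a predictor worse than the default gives a negative PVE.
print(pve(y, [statistics.mean(y[:i] + y[i + 1:]) for i in range(5)]))
```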

Varying the numbers of predictors and years

In the initial scenario there were 48 predictors over 23 years. To see the effect of varying these quantities, the cross-validations were repeated for two alternative scenarios: five predictors over 23 years (restricting to the five predictors used in the regression of Bull and Livingston, 2001 ), and 48 predictors over 13 years (restricting the predictors and recruitment vectors to the initial 13 years, 1978–1990). These additional scenarios used the same simulated recruitment vectors as were used in the first scenario.

Results

All simulation results are plotted in Figure 1. I will discuss first those for the random-recruitment scenarios.

Figure 1

Results of the simulation experiment: estimates of per cent variance explained (PVE) for (A) random recruitment, and (B) climate-driven recruitment. Within each panel, results are given for three scenarios: 48 predictors over 23 years, five predictors over 23 years, and 48 predictors over 13 years. Within each scenario there are two plotted points for each of 20 simulated recruitment vectors, showing the PVE calculated with full (crosses) or partial (plus signs) cross-validation. The vertical line segments, and the associated numbers, indicate the median PVE for each type of cross-validation for that scenario. For clarity, all PVEs less than −200 (in panel A) or −100 (in panel B) are plotted at those values.


Random recruitment

For the initial scenario (48 predictors over 23 years) there was a dramatic difference between the PVE values calculated from full and partial cross-validations. With partial cross-validation, the PVE was always positive (median 48, range (17,71)); with full cross-validation it was much more variable and almost always negative (median −73, range (−147,29)). In other words, if we use partial cross-validation we will be misled into thinking there is a relationship between predictors and recruitment when, in fact, there is not. With full cross-validation we are unlikely to be misled.

The reason the partial PVEs are positively biased is that the predictor-screening step uses all the data. Therefore, the choice of the predictors used in trying to predict the ith recruitment (in the ith iteration of the cross-validation) is not independent of that recruitment, which violates the independence between constructed predictor and left-out datum on which cross-validation depends.

The negative PVE values from full cross-validation show that, in these circumstances (no environment–recruitment relationship), it would actually be worse to use a regression estimator than to use the default estimator (though how much worse is not well estimated). This may seem surprising. However, all that it means is that the expected error from the regression estimator is greater than that from the default estimator (i.e. MSE_regression > MSE_default). This is illustrated in Figure 2, which shows that although the probability of the regression estimate being closer to the true value than the default estimate is close to 0.5, the former estimate is sometimes very wrong (e.g. in 1982 and 1993), leading to a much larger mean square error.

Figure 2

Comparison of the regression (r) and default (d) estimates calculated, in the full cross-validation procedure, for each year of one of the simulated vectors of random recruitment. For this simulated vector, MSE_regression = 1.93, MSE_default = 1.09, and PVE = −76.


Reducing the number of predictors to five greatly reduced the difference between full and partial cross-validations. We may conclude that this difference comes mostly from the fact that our initial set of potential predictors was so large. With the restricted set of predictors we are not likely to conclude that we have much ability to predict recruitment (whichever type of cross-validation is used).

Reducing the number of years to 13 slightly increased the partial PVEs, so we would still be likely to believe (wrongly) in a regression estimator. However, it substantially reduced the full PVEs and made them more variable (median −160, range (−444,49)). Therefore, with fewer years' data, the effect of mistakenly using the regression estimator is worse, but even harder to estimate.

Climate-driven recruitment

For the climate-driven recruitment scenarios I first discuss the results for full cross-validation, then show how these change with partial cross-validation.

With full cross-validation it is possible to be confident, in the initial scenario (48 predictors over 23 years), that we will detect a climate–recruitment relationship (because all PVEs are positive). However, our ability to measure the strength of that relationship is poor, because the PVE estimates ranged from 16 to 79 (with a median of 54). Note that all these values are less than the value of 80 built into the simulated recruitment vectors. This is not surprising. We could expect to achieve this theoretical value of 80 only if we had two pieces of information: which of our 48 predictors we should use in the regression, and what regression coefficient should be used for each. Our second scenario (five predictors over 23 years) shows that having the first of these two pieces of information gives only a slight change in median PVE (from 54 to 59). However, it substantially increases our ability to estimate how good our regression prediction would be, because the range of PVE estimates decreased from (16,79), with 48 predictors, to (38,81) with five predictors.

When the number of years in the data set is reduced to 13 the ability to detect the strong climate–recruitment relationship that exists in the simulated data is lost. For reliable detection we would need to have found that (almost) all the full PVEs for this scenario were positive. In fact, they varied widely, from −306 to 78, and only nine of 20 were positive. Therefore, with this short time-series, it was not possible to identify useful predictors and to estimate the associated regression coefficients sufficiently well to have any predictive power.

With partial cross-validation, the estimated PVEs were always higher. Again this is no surprise, being caused by the same bias detected in the random-recruitment simulations. The PVEs were also relatively insensitive to changes in the numbers of predictors or years. With 23 years of data, where it really was possible to detect the climate–recruitment relationship (according to the full PVEs), the strength of that relationship would likely be overestimated. With only 13 years' data – too few to detect the climate–recruitment relationship according to the full PVEs – we would wrongly think that we could detect it.

Discussion

The above simulation results illustrate the dangers of partial cross-validation (i.e. the failure to include predictor screening in the cross-validation process) in the evaluation of environment–recruitment relationships. This may lead to a belief that recruitment is driven by some of the environmental variables considered when this is not true, and this belief can produce recruitment estimates that are worse than those based on the default estimate (the mean of previous recruitments). When recruitment is related to the environmental variables, the strength of the relationship (and hence the accuracy of recruitment estimates) may be overestimated.

Another point to notice is the extent to which the ability to detect environment–recruitment relationships, and to measure their strength (i.e. estimate PVE), can depend on the length of the data set (i.e. the number of years covered). The typical length of such data sets seems to be close to that in the simulation example (23 years), where PVE was not well estimated. In a collection of 31 recently published environment–recruitment studies I found that the lengths ranged from 6 to 60 years, with quartiles at 16, 20, and 28 years.

Although the simulation results show that partial cross-validation is misleading, it is certainly better than no validation at all (which, as stated already, is not uncommon in the environment–recruitment literature). For the three random-recruitment scenarios, the median PVE estimates without cross-validation were 64, 13, and 73 (compared with 48, 4, and 53, respectively, from partial cross-validation). The corresponding values for the climate-driven recruitment scenarios were 79, 79, and 80 (compared with median partial PVEs of 75, 74, and 68, respectively).

A possible objection to full cross-validation is that different predictors may (and often will) be selected at each iteration of the procedure. However, this should be of no concern because it is quite likely that, were the full data set to have covered a different set of years, stepwise regression would have selected a different set of predictors. This is particularly likely when (as is common in environment–predictor data sets) there are substantial correlations within the set of candidate predictors. In such a case, there are likely to be several different sets of predictors that provide similar predictive power. In other words, it will often be true that there is no single set of predictors that is clearly superior to all others in all years. It should also be borne in mind that, even if the same set of predictors were used at each iteration of the cross-validation, the calculated regression coefficients would differ from iteration to iteration.

Full cross-validation requires two things that are not always present (or at least reported) in environment–recruitment studies. The first is a clear statement of the membership of the initial (pre-screening) set of potential predictors. In a review, Myers (1998) noted that this is not always done (see his point 3, p. 297). The second is that the screening procedure must be objective, so that it can be automated in a cross-validation. Not all existing screening procedures would be easy to automate (e.g. Daskalov, 1999). It is important here to distinguish between screening that uses the recruitment data (data-dependent) and screening that does not (data-independent); only the former matters. For example, one of the screening criteria listed by Svendsen et al. (1995, p. 646) is that the results be "reasonable according to common oceanographical and biological knowledge". That study considered a wide range of environmental predictors applied to recruitment time-series for six North Sea species. For each species, knowledge of geographical distribution and life history allowed the authors to rule out some predictors as not reasonable, (presumably) without using any information in the recruitment data set. This is what is here meant by data-independent screening. The proper approach in situations like this is to screen predictors in two stages: first data-independent, then data-dependent. Only the latter stage needs to be included in the cross-validation.

Although the simulations given above used just one type of estimator (regression) and one type of validation (repeated leave-one-out cross-validation), I see no reason why the conclusions should not apply equally to other approaches. These would include estimators based on environmentally dependent stock/recruit relationships (e.g. Zebdi and Collie, 1995; Williams and Quinn, 2000; Köster et al., 2001; Majormäki, 2004) or generalized additive models (GAMs; e.g. Daskalov, 1999). Other types of validation include recent-year validation (Svendsen et al., 1995; Williams and Quinn, 2000) and the simulation method of Dippner (1997). Recent-year validation has the merit of mimicking the real-world construct-then-apply sequence (see Introduction) better than the usual cross-validation does, in that the years for which recruitments are predicted are always outside the range of years used to construct an estimator. However, it usually has the substantial disadvantage of small sample size. Both examples cited here applied the method to just 3 years, which means that any estimate of PVE (or a similar statistic) from the method would be very imprecise. Even with 23 years of data, estimates of PVE in the present study were not very precise. The simulation approach of Dippner (1997) is complementary to that used here in that its aim is to test the null hypothesis of no environment–recruitment relationship, rather than to measure the strength of the relationship.

The effect of autocorrelation, which can induce spurious correlations between time-series (Williams and Quinn, 2000), could easily be explored in simulations such as those presented above. It was not explored here because autocorrelation in the hoki recruitment vector was small (0.24) and not statistically significant. Dippner (1997) included autocorrelation in his simulation approach to validation.

Finally, there is a potential danger in the use of multiple predictands for the same fish stock. For example, Megrey et al. (1995) considered six alternative predictands (recruitment indices at ages 0, 1, and 2, and their logarithms) and Quinn and Niebauer (1995) used three (recruitment, log-recruitment, and spawner–recruit residuals). This is problematic when the idea is to seek the strongest relationship that can be found between environmental variables and any one of these predictands. The more predictands there are, the more likely it is that a chance correlation with one or more of the potential predictors will be found. Cross-validation provides a means of avoiding chance correlations caused by having many potential predictors, but it does not protect against the same problem arising from many predictands. My recommendation is, where possible, to avoid multiple predictands. The choice between recruitment indices at different ages could be made on the basis of precision (choose the age for which the estimated indices are most precise). As a general rule, log-recruitment is preferable to recruitment as a predictand because it is likely to be more normally distributed. Where there is a clear spawner–recruit relationship, it does not make sense to ignore the effect of spawning-stock size and to estimate recruitment from environmental variables alone.

I appreciate the valuable comments of three reviewers on an earlier version of this paper.

References

Beentjes, M. P., and Renwick, J. A. 2001. The relationship between red cod, Pseudophycis bachus, recruitment and environmental variables in New Zealand. Environmental Biology of Fishes, 61: 315–328.

Bull, B., and Livingston, M. E. 2001. Links between climate variation and year class strength of New Zealand hoki (Macruronus novaezelandiae): an update. New Zealand Journal of Marine and Freshwater Research, 35: 871–880.

Daskalov, G. 1999. Relating fish recruitment to stock biomass and physical environment in the Black Sea using generalized additive models. Fisheries Research, 41: 1–23.

Dippner, J. W. 1997. Recruitment success of different fish stocks in the North Sea in relation to climate variability. Deutsche Hydrographische Zeitschrift, 49: 277–293.

Francis, R. I. C. C., Bradford-Grieve, J. M., Hadfield, M. G., Renwick, J. A., and Sutton, P. J. H. 2005. Environmental predictors of hoki year-class strengths: an update. New Zealand Fisheries Assessment Report 2005/58. 22 pp.

Kidson, J. W. 2000. An analysis of New Zealand synoptic types and their use in defining weather regimes. International Journal of Climatology, 20: 299–316.

Köster, F. W., Hinrichsen, H., St John, M. A., Schnack, D., MacKenzie, B. R., Tomkiewicz, J., and Plikshs, M. 2001. Developing Baltic cod recruitment models. 2. Incorporation of environmental variability and species interaction. Canadian Journal of Fisheries and Aquatic Sciences, 58: 1534–1556.

Majormäki, T. J. 2004. Analysis of the spawning stock–recruitment relationship of vendace (Coregonus albula (L.)) with evaluation of alternative models, additional variables, biases and errors. Ecology of Freshwater Fish, 13: 46–60.

Mallows, C. L. 1973. Some comments on Cp. Technometrics, 15: 661–675.

Megrey, B. A., Bograd, S. J., Rugen, W. C., Hollowed, A. B., Stabeno, P. J., Macklin, S. A., Schumacher, J. D., and Ingraham, W. J. 1995. An exploratory analysis of associations between biotic and abiotic factors and year-class strength of Gulf of Alaska walleye pollock (Theragra chalcogramma). Canadian Special Publication of Fisheries and Aquatic Sciences, 121: 227–243.

Mosteller, F., and Tukey, J. W. 1977. Data Analysis and Regression. Addison-Wesley, Reading, Massachusetts. 588 pp.

Myers, R. A. 1998. When do environment–recruitment correlations work? Reviews in Fish Biology and Fisheries, 8: 283–305.

Quinn, T. J., and Niebauer, H. J. 1995. Relation of eastern Bering Sea walleye pollock (Theragra chalcogramma) recruitment to environmental and oceanographic variables. Canadian Special Publication of Fisheries and Aquatic Sciences, 121: 497–507.

Svendsen, E., Aglen, A., Iversen, S. A., Skagen, D. W., and Smestad, O. 1995. Influence of climate on recruitment and migration of fish stocks in the North Sea. Canadian Special Publication of Fisheries and Aquatic Sciences, 121: 577–592.

Ueta, Y., Tokai, T., and Segawa, S. 1999. Relationship between year-class abundance of the oval squid Sepioteuthis lessoniana and environmental factors off Tokushima Prefecture, Japan. Fisheries Science, 65: 424–431.

Williams, E. H., and Quinn, T. J. 2000. Pacific herring, Clupea pallasi, recruitment in the Bering Sea and north-east Pacific Ocean. 2. Relationships to environmental variables and implications for forecasting. Fisheries Oceanography, 9: 300–315.

Zebdi, A., and Collie, J. S. 1995. Effect of climate on herring (Clupea pallasi) population dynamics in the Northeast Pacific Ocean. Canadian Special Publication of Fisheries and Aquatic Sciences, 121: 277–290.