A debate continues over the relative importance of quality and quantity in scientific publication. Recently published correspondence (Fischer et al. 2012) heralds a movement to reemphasize quality research over quantity. Although we certainly agree with the call for quality, decrying quantity poses a trade-off of its own, one that may ultimately be undesirable for fostering an impactful body of research and advancing science. Instead, we argue for an integrated view of scientific contributions that incorporates elements of both quality and quantity. We describe this view as influence.

Quality refers to the standard of a piece of work as measured against comparable work. In a research context, this is inherently problematic, because such comparisons are subjective and difficult to make. For example, is a single paper published in a “top-tier” high-impact journal, which is consequently likely to be broadly read and cited, a more valuable contribution to a research field than two or more papers published in “lower-tier” journals (Loyola et al. 2012)? Quantity is more straightforward to define, because it refers simply to the number of publications generated by an individual researcher or a research group. However, counting publications provides no indication of the quality of the work. Quality is nebulous, whereas quantity is more tractable, but neither attribute alone provides an adequate assessment of the full value of a scientific contribution.

Influence takes the view that quality and quantity need not be mutually exclusive. Influence incorporates both of these characteristics and reflects the fact that science advances through the accumulation of knowledge that is novel and relevant to a field of study. In this context, influence can be defined as the capacity to produce an effect on the advancement of scientific knowledge. Under this definition, influence can manifest through a single revolutionary paper or through a series of papers that cumulatively advance a research topic. Furthermore, and of equal importance, influence can extend beyond the body of research itself if the work leads to solving a particular problem, such as the recovery of an endangered wildlife population. Influence therefore encompasses the practical application of a piece of work (e.g., the science–policy interface, product development and commercialization); this is relevant to applied work performed on short time scales and to discovery-based fundamental research over longer ones.

Influential research requires reflection and creative thinking, elements essential to all quality research. However, developing a body of research is also valuable, because a series of steps may be required to advance scientific knowledge on a challenging topic. The demand on the researcher is therefore to generate quality ideas and research, but to do so in a timely manner that produces a body (quantity) of work. This is the dilemma faced by researchers in the electronic age. How can we be expected to produce both quality and quantity and thereby yield influential research? We simply cannot, at least not on our own. Instead, we must rely on networking and collaborations (Donaldson 2013) to build our research programs and to remain influential in our fields in order to advance scientific knowledge (Leimu and Koricheva 2005, Figg et al. 2006). With this collaborative view in mind, scientific influence involves the body of work of both individual researchers and research groups as a whole.

Although influence can take many forms, it can still be measured and ranked using traditional bibliometric indexes. For example, the h-index is the largest number, h, for which h of a researcher's publications have each been cited at least h times. Another measure, the i10-index, is the number of papers that have been cited at least 10 times. These indexes can be used to assess influence over an entire career or, because the h-index tends to be inflated for late-career researchers whose work has had a longer period to accumulate citations, over shorter time scales (e.g., 5 or 10 years). They represent viable metrics for evaluating influence because they incorporate elements of both quality (the citation rate) and quantity (the number of papers published), particularly when they are evaluated together. A common criticism of these bibliometric indexes is that they do not account for coauthorship or self-citation. However, coauthorships and self-citations provide an indirect indication of the influence of a research group as a whole and, provided that all coauthorships are justified and the self-citations are relevant to the published work, need not be considered a detriment. Alternatively, the indexes could be broken down in greater detail, such as by presenting the h-index for first-author papers only or by presenting the indexes both with and without self-citations. This approach would enable relative assessments of the productivity of individual researchers or research groups based on their body of work. Regardless, caution must be applied when developing metrics for evaluating researchers. For example, a journal's impact factor, which reflects the average number of citations to the articles the journal has recently published, is a poor indicator of the contributions of individual researchers, a point emphasized recently by the San Francisco Declaration on Research Assessment (http://am.ascb.org/dora).
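Both indexes reduce to simple operations on a list of per-paper citation counts. The following minimal Python sketch makes the definitions above concrete; the citation counts are hypothetical and serve only as an illustration.

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # Number of papers cited at least 10 times.
    return sum(1 for c in citations if c >= 10)

citations = [42, 18, 11, 9, 6, 3, 1, 0]  # hypothetical per-paper citation counts
print(h_index(citations))    # 5
print(i10_index(citations))  # 3

The finer-grained breakdowns suggested above (first-author papers only, or self-citations excluded) would require only changing which citation counts are passed in.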

Although indexes can be used to measure researcher and research group influence, there are additional means that should be considered. In applied realms such as conservation science, one might argue that traditional bibliometrics are irrelevant and that the true measure of influence is the extent to which the work leads to conservation-relevant outcomes (Schäfer et al. 2011). The most ardent conservation scientists would go one step further and suggest that all that really matters is whether the science led to reversals in the decline of imperiled populations. In evidence-based conservation, no individual paper should or will ever be the basis for a major policy change. Rather, the body of literature should be evaluated as a whole (Sutherland et al. 2004).

Evaluating a body of literature may be accomplished more easily now that many publishers track and rank, on their Web sites, the number of article downloads and views and the sharing of articles among colleagues. Furthermore, influence could be evaluated by the extent to which a paper generates media attention or by the number of Twitter retweets or Facebook likes. Indeed, there are now tools that track social media sites, conventional media, government documents, and other sources for mentions of scholarly articles in order to generate an article-level metric (see www.altmetric.com; Piwowar 2013). Publishers can integrate this type of information directly into their articles’ Web pages (see https://peerj.com). Some publishers allow readers to comment on articles, providing an additional indication of how readers engage with the research (for good or ill). These alternatives to traditional bibliometrics thus provide information on how research affects discourse and behavior, which may help identify influential researchers and research groups and provide some predictive power for identifying emerging research trends and influence. Such assessments of influence are inherently subjective but could be considered in combination with the aforementioned metrics in a holistic assessment of the influence of a researcher or research group.
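In its simplest form, such an article-level metric is a weighted sum of mention counts across attention sources. The Python sketch below is purely illustrative: the sources and weights are arbitrary assumptions made for this example and do not reflect the actual scoring used by Altmetric or any other provider.

# Hypothetical attention weights; real providers use their own schemes.
ATTENTION_WEIGHTS = {
    "news_story": 8.0,
    "policy_document": 5.0,
    "tweet": 0.25,
    "facebook_like": 0.1,
}

def article_attention_score(mentions):
    # mentions maps an attention source to its mention count,
    # e.g. {"news_story": 2, "tweet": 40}; unknown sources score zero.
    return sum(ATTENTION_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

print(article_attention_score({"news_story": 2, "tweet": 40, "facebook_like": 15}))  # 27.5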

Evaluating the contributions of researchers and research groups through influence, rather than focusing exclusively on either quality or quantity, which are not easily decoupled, represents a step toward greater equity. Formal evaluation processes, such as hiring, tenure, and funding decisions, could benefit from reflection on the many ways in which scientific output can be influential (Cooke 2011, Fortin and Currie 2013). Evaluating influence should include both citation metrics (e.g., the h-index) and broader measures, such as altmetrics or a paper's capacity to help solve applied problems. Assessing influence in the context of the larger body of work produced by researchers or research groups over a period of time has the potential to provide a tractable means of evaluating their contributions to their field.

References cited

Cooke SJ. 2011. On the basic–applied continuum in ecology and evolution and a call to action: Perspectives of an early career researcher in academia. Ideas in Ecology and Evolution 4: 37–39.

Donaldson MR. 2013. “Speed collaborations” and the quality versus quantity debate in ecology and evolution publications. Ideas in Ecology and Evolution 6: 47–48. doi:10.4033/iee.2013.6.10.c

Figg WD, Dunn L, Liewehr DJ, Steinberg SM, Thurman PW, Barrett JC, Birkinshaw J. 2006. Scientific collaboration results in higher citation rates of published articles. Pharmacotherapy 26: 759–767.

Fischer J, Ritchie EG, Hanspach J. 2012. Academia's obsession with quantity. Trends in Ecology and Evolution 27: 473–474.

Fortin J-M, Currie DJ. 2013. Big science vs. little science: How scientific impact scales with funding. PLOS ONE 8 (art. e65263). doi:10.1371/journal.pone.0065263

Leimu R, Koricheva J. 2005. What determines the citation frequency of ecological papers? Trends in Ecology and Evolution 20: 28–32.

Loyola RD, Diniz-Filho JAF, Bini LM. 2012. Obsession with quantity: A view from the south. Trends in Ecology and Evolution 27: 585.

Piwowar H. 2013. Altmetrics: Value all research products. Nature 493: 159.

Schäfer RB, Cooke SJ, Arlinghaus R, Bonada N, Brischoux F, Casper AF, Catford JA, Rolland V. 2011. Perspectives from early career researchers on the publication process in ecology: A response to Statzner and Resh (2010). Freshwater Biology 56: 2405–2412.

Sutherland WJ, Pullin AS, Dolman PM, Knight TM. 2004. The need for evidence-based conservation. Trends in Ecology and Evolution 19: 305–308.