Presenting Results

Authors face the significant challenge of presenting their results in the Journal of Pediatric Psychology (JPP) completely yet succinctly, and of writing a convincing discussion section that highlights the importance of their research. The third and final in a series of editorials (Drotar, 2009a,b), this article provides guidance to authors in preparing effective results and discussion sections. Authors should also review the JPP website (http://www.jpepsy.oxfordjournals.org/) and consider other relevant sources (American Psychological Association, 2001; APA Publications and Communications Board Working Group on Journal Reporting Standards, 2008; Bem, 2004; Brown, 2003; Wilkinson & The Task Force on Statistical Inference, 1999).

Follow APA and JPP Standards for Presentation of Data and Statistical Analysis

Authors’ presentations of data and statistical analyses should be consistent with publication manual guidelines (American Psychological Association, 2001). For example, authors should present the sample sizes, means, and standard deviations for all dependent measures and the direction, magnitude, degrees of freedom, and exact p levels for inferential statistics. In addition, JPP editorial policy requires that authors include effect sizes and confidence intervals for major findings (Cumming & Finch, 2005, 2008; Durlak, 2009; Vacha-Haase & Thompson, 2004; Wilkinson & the Task Force on Statistical Inference, 1999).
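As one illustration of what this policy asks for, a standardized mean difference (Cohen's d) and an approximate confidence interval can be computed as in the sketch below. The two score lists are hypothetical, and the large-sample standard-error formula for d used here is one common approximation among several (see Durlak, 2009, for fuller guidance on choosing and interpreting effect sizes).

```python
import math
from statistics import mean, stdev

def cohens_d_with_ci(group1, group2, z=1.96):
    """Cohen's d for two independent groups, with an approximate
    95% confidence interval from a common large-sample formula
    for the standard error of d (an approximation, not the only
    available method)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = mean(group1), mean(group2)
    s1, s2 = stdev(group1), stdev(group2)
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample approximation to the standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical treatment vs. control scores
d, ci = cohens_d_with_ci([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
```

Reporting the interval alongside the point estimate, as required by JPP policy, lets readers judge the precision of the effect rather than only its size.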

Authors should follow the Consolidated Standards of Reporting Trials (CONSORT) when reporting the results of randomized clinical trials (RCTs) in JPP (Moher, Schulz, & Altman, 2001; Stinson, McGrath, & Yamada, 2003). Guidelines have also been developed for nonrandomized designs, referred to as the Transparent Reporting of Evaluations with Nonrandomized Designs (TREND) statement (Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004) (available from http://www.trend-statement.org/asp/statement.asp). Finally, studies of diagnostic accuracy, including sensitivity and specificity of tests, should be reported in accord with the Standards for Reporting of Diagnostic Accuracy (STARD) (Bossuyt et al., 2003) (http://www.annals.org/cgi/content/full/138/1/W1).

Finally, authors may also wish to consult a recent publication (APA Publications and Communications Board Working Group on Journal Reporting Standards, 2008) that contains useful guidelines for various types of manuscripts including reports of new data collection and meta-analyses. Guidance is also available for manuscripts that contain observational longitudinal research (Tooth, Ware, Bain, Purdie, & Dobson, 2005) and qualitative studies involving interviews and focus groups (Tong, Sainsbury, & Craig, 2007).

Provide an Overview and Focus Results on Primary Study Questions and Hypotheses

Readers and reviewers often have difficulty following authors’ presentation of their results, especially for complex data analyses. For this reason, it is helpful for authors to provide an overview of the primary sections of their results and also to take readers through their findings in a step-by-step fashion. This overview should follow directly from the data analysis plan stated in the method (Drotar, 2009b).

Readers appreciate the clarity of results that are consistent with and focused on the major questions and/or specific hypotheses that have been described in the introduction. Readers and reviewers should be able to identify which specific hypotheses were supported, which received partial support, and which were not supported. Nonsignificant findings should not be ignored. Hypothesis-driven analyses should be presented first, prior to secondary analyses and/or more exploratory analyses (Bem, 2004). The rationale for the choice of statistics and for relevant decisions within specific analyses should be described (e.g., rationale for the order of entry of multiple variables in a regression analysis).

Report Data that are Relevant to Statistical Assumptions

Authors should provide appropriate evidence, including quantitative results where necessary, to affirm that their data fit the assumptions required by the statistical analyses that are reported. When assumptions underlying statistical tests are violated, authors may use data transformations and/or alternative statistical methods, and should describe the rationale for these choices.
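As one minimal illustration (not a prescribed JPP procedure), a simple screen of the equal-variance assumption for a two-group comparison, and a common remedy for positive skew, might look like the sketch below. The 4:1 variance-ratio cutoff is a rough rule of thumb, and the data are hypothetical.

```python
import math
from statistics import variance

def variance_ratio_ok(group1, group2, max_ratio=4.0):
    """Rough homogeneity-of-variance screen: flag the assumption as
    questionable when the larger sample variance exceeds the smaller
    by more than max_ratio (a conventional rule of thumb, not a
    formal test)."""
    v1, v2 = variance(group1), variance(group2)
    ratio = max(v1, v2) / min(v1, v2)
    return ratio <= max_ratio, ratio

def log_transform(scores):
    """Natural-log transform, one common remedy for positive skew;
    assumes all scores are strictly positive."""
    return [math.log(x) for x in scores]

# Hypothetical groups with markedly unequal spread
ok, ratio = variance_ratio_ok([2, 4, 6, 8], [3, 3.5, 4, 4.5])
```

Whatever check is used, the point for authors is the same: report the evidence that assumptions hold, and the rationale for any remedy applied when they do not.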

Integrate the Text of Results with Tables and/or Figures

Tables and figures provide effective, reader-friendly ways to highlight key findings (Wallgren, Wallgren, Persson, Jorner, & Haaland, 1996). However, authors face the challenge of describing their results in the text in a way that is not highly redundant with information presented in tables and/or figures. Figures are especially useful to report the results of complex statistics such as structural equation modeling and path analyses that describe interrelationships among multiple variables and constructs. Given constraints on published text in JPP, tables and figures should always be used selectively and strategically.

Describe Missing Data

Reviewers are very interested in understanding the nature and impact of missing data. For this reason, authors should report the total number of participants, the flow of participants through each stage of the study (e.g., in prospective studies), the frequency and/or percentage of missing data at each time point, and the analytic methods used to address missing data. Readers also benefit from a summary of cases missing from analyses of primary and secondary outcomes for each group, the assumed missing-data mechanism (e.g., missing at random versus missing not at random), and, if applicable, the statistical methods used to replace missing data and/or to evaluate its impact (Schafer & Graham, 2002).
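A simple tabulation of missingness by assessment wave, of the kind that can feed such a summary, might be sketched as follows; the participant records and wave names (`t1`-`t3`) are hypothetical.

```python
def missingness_by_wave(records, waves):
    """Count and percentage of missing (None) outcome values at each
    assessment wave, given a list of per-participant dictionaries."""
    summary = {}
    for wave in waves:
        n_missing = sum(1 for r in records if r.get(wave) is None)
        summary[wave] = (n_missing, 100.0 * n_missing / len(records))
    return summary

# Hypothetical three-wave study with dropout at later waves
participants = [
    {"t1": 12, "t2": 14,   "t3": 15},
    {"t1": 10, "t2": None, "t3": None},
    {"t1": 11, "t2": 13,   "t3": None},
    {"t1": 9,  "t2": 10,   "t3": 12},
]
summary = missingness_by_wave(participants, ["t1", "t2", "t3"])
```

A table of this kind makes the pattern of attrition visible at a glance; characterizing the mechanism behind it and choosing a handling strategy still require the methods discussed by Schafer and Graham (2002).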

Consider Statistical Analyses that Document Clinical Significance of Results

Improving the clinical significance of research findings remains an important but elusive goal for the field of pediatric psychology (Drotar & Lemanek, 2001). Reviewers and readers are very interested in the question: what do the findings mean for clinical care? For this reason, I strongly encourage authors to conduct statistical evaluations of the clinical significance of their results whenever it is applicable and feasible. In order to describe and document clinical significance, authors are strongly encouraged to use one of several recommended approaches, including (but not limited to) the Reliable Change Index (Jacobson, Roberts, Berns, & McGlinchey, 1999; Jacobson & Truax, 1991; Ogles, Lambert, & Sawyer, 1995), normative comparisons (Kendall, Marrs-Garcia, Nath, & Sheldrick, 1999), or analyses of the functional impact of change (Kazdin, 1999, 2000). Statistical analyses of the cost-effectiveness of interventions can also add to clinical significance (Gold, Siegel, Russell, & Weinstein, 1996). Authors who report data from quality of life measures should consider analyses of responsiveness and clinical significance that are appropriate for such measures (Revicki, Hays, Cella, & Sloan, 2008; Wyrwich et al., 2005).
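For instance, the Reliable Change Index of Jacobson and Truax (1991) divides observed change by the standard error of the difference score; values beyond ±1.96 suggest change unlikely to be due to measurement error alone. The sketch below illustrates the computation with hypothetical values for the measure's standard deviation and test-retest reliability.

```python
import math

def reliable_change_index(pre, post, sd, reliability):
    """Reliable Change Index (Jacobson & Truax, 1991): observed
    change divided by the standard error of the difference score,
    S_diff = sqrt(2) * SEM, where SEM = sd * sqrt(1 - reliability)."""
    sem = sd * math.sqrt(1.0 - reliability)
    s_diff = math.sqrt(2.0) * sem
    return (post - pre) / s_diff

# Hypothetical: a 15-point improvement on a measure with
# normative SD = 10 and test-retest reliability = .80
rci = reliable_change_index(pre=40, post=55, sd=10, reliability=0.80)
reliable = abs(rci) > 1.96  # change exceeds measurement error
```

Note that the RCI addresses only the reliability of individual change; whether that change is clinically meaningful is better judged with the complementary approaches cited above, such as normative comparisons.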

Include Supplementary Information Concerning Tables, Figures, and Other Relevant Data on the JPP Website

The managing editors of JPP appreciate the increasing challenges that authors face in presenting the results of complicated study designs and data analytic procedures within the constraints of JPP policy for manuscript length. For this reason, our managing editors will work with authors to determine which tables, analyses, and figures are absolutely essential to be included in the printed text version of the article versus those that are less critical but nonetheless of interest and can be posted on the JPP website in order to save text space. Specific guidelines for submitting supplementary material are available on the JPP website. We believe that increased use of the website to post supplementary data will not only save text space but will facilitate communication among scientists that is so important to our field and encouraged by the National Institutes of Health.

Writing the Discussion Section

The purpose of the discussion is to give readers specific guidance about what was accomplished in the study, its scientific significance, and what research needs to be done next.

The discussion section is very important to readers but extremely challenging for authors, given the need for a focused synthesis and interpretation of findings and presentation of relevant take-home messages that highlight the significance and implications of their research.

Organize and Focus the Discussion

Authors are encouraged to ensure that the discussion section is consistent with and integrated with all previous sections of the manuscript. In crafting the discussion, authors may wish to review their introduction to make sure that the points most relevant to their previously articulated study aims, framework, and hypotheses are identified and elaborated.

A discussion section is typically organized around several key components presented in a logical sequence including synthesis and interpretation of findings, description of study limitations, and implications, including recommendations for future research and clinical care. Moreover, in order to maximize the impact of the discussion, it is helpful to discuss the most important or significant findings first followed by secondary findings.

One of the most common mistakes that authors make is to discuss each and every finding (Bem, 2004). This strategy can result in an uninteresting and unwieldy presentation. A highly focused, lively presentation that calls the reader's attention to the most salient and interesting findings is most effective (Bem, 2004). A related problematic strategy is to repeat findings in the discussion that have already been presented without interpreting or synthesizing them. This adds length to the manuscript, reduces reader interest, and detracts from the significance of the research. Finally, it is also problematic to introduce new findings in the discussion that have not been described in the results.

Describe the Novel Contribution of Findings Relative to Previous Research

Readers and reviewers need to receive specific guidance from authors in order to identify and appreciate the most important new scientific contribution of the theory, methods, and/or findings of their research (Drotar, 2008; Sternberg & Gordeva, 2006). Readers need to understand how authors’ primary and secondary findings fit with what is already known as well as challenge and/or extend scientific knowledge. For example, how do the findings shed light on important theoretical or empirical issues and resolve controversies in the field? How do the findings extend knowledge of methods and theory? What is the most important new scientific contribution of the work (Sternberg & Gordeva, 2006)? What are the most important implications for clinical care and policy?

Discuss Study Limitations and Relevant Implications

Authors can engage their readers most effectively with a balanced presentation that emphasizes the strengths yet also critically evaluates the limitations of their research. Every study has limitations that readers need to consider in interpreting its findings. For this reason, it is advantageous for authors to address the major limitations of their research and their implications rather than leaving it to readers or reviewers to identify them. An open discussion of study limitations is not only critical to scientific integrity (Drotar, 2008) but is also an effective strategy for authors: reviewers may assume that if authors do not identify key limitations of their studies, they are not aware of them.

Description of study limitations should address specific implications for the validity of the inferences and conclusions that can be drawn from the findings (Campbell & Stanley, 1963). Commonly identified threats to internal validity include issues related to study design, measurement, and statistical power. Most relevant threats to external validity include sample bias and specific characteristics of the sample that limit generalization of findings (Drotar, 2009b).

Although authors’ disclosure of relevant study limitations is important, it should be selective and focus on the most salient limitations (i.e., those that pose the greatest threats to internal or external validity). If applicable, authors may also wish to present counterarguments that temper the primary threats to validity they discuss. For example, if a study was limited by a small sample but nonetheless demonstrated statistically significant findings with a robust effect size, this should be considered by reviewers.

Study limitations often suggest important new research agendas that can shape the next generation of research. For this reason, it is also very helpful for authors to inform reviewers about the limitations of their research that should be addressed in future studies and specific recommendations to accomplish this.

Describe Implications of Findings for New Research

One of the most important features of a discussion section is the clear articulation of the implications of study findings for research that extends the scientific knowledge base of the field of pediatric psychology. Research findings can have several kinds of implications, such as the development of theory, methods, study designs, or data analytic approaches, or the identification of understudied and important content areas that require new research (Drotar, 2008). Providing a specific agenda for future research based on the current findings is much more helpful than general suggestions. Reviewers also appreciate being informed about how specific research recommendations can advance the field.

Describe Implications of Findings for Clinical Care and/or Policy

I encourage authors to describe the potential clinical implications of their research and/or suggestions to improve the clinical relevance of future research (Drotar & Lemanek, 2001). Research findings may have widely varied clinical implications. For example, studies that develop a new measure or test an intervention have greater potential clinical application than a descriptive study that is not directly focused on a clinical application. Nevertheless, descriptive research, such as the identification of factors that predict clinically relevant outcomes, may have implications for targeting clinical assessment or interventions concerning such outcomes (Drotar, 2006). However, authors should be careful not to overstate the implications of descriptive research.

As is the case with recommendations for future research, the recommendations for clinical care should be as specific as possible. For example, in measure development studies it may be useful to inform readers about the next steps in research that are needed to enhance the clinical application of a measure.

This is the final editorial in a series intended to help authors and reviewers and to improve the quality of the science in the field of pediatric psychology. I encourage your submissions to JPP and welcome our collective opportunity to advance scientific knowledge.

Acknowledgments

The hard work of Meggie Bonner in typing this manuscript and the helpful critique of the associate editors of Journal of Pediatric Psychology and Rick Ittenbach are gratefully acknowledged.

Conflict of interest: None declared.

References

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.

APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What do they need to be? American Psychologist, 63, 839-851.

Bem, D. (2004). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. Roediger III (Eds.), The complete academic: A career guide (2nd ed., pp. 105-219). Washington, DC: American Psychological Association.

Bossuyt, P., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L. M., et al. (2003). The STARD statement for reporting studies of diagnostic accuracy: Explanation and elaboration. Annals of Internal Medicine, 138, W1-W12.

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.

Cumming, G., & Finch, S. (2005). Inference by eye: Confidence intervals and how to read pictures of data. American Psychologist, 60, 170-180.

Cumming, G., & Finch, S. (2008). Putting research in context: Understanding confidence intervals from one or more studies. Journal of Pediatric Psychology. Advance Access published December 18, 2008; doi:10.1093/jpepsy/jsn118

Des Jarlais, D. C., Lyles, C., Crepaz, N., & the TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361-366. Retrieved September 15, 2004, from http://www.trend-statement.org/asp/statement.asp

Drotar, D. (2000). Writing research articles for publication. In D. Drotar (Ed.), Handbook of research methods in clinical child and pediatric psychology (pp. 347-374). New York: Kluwer Academic/Plenum Publishers.

Drotar, D. (2006). Psychological interventions in childhood chronic illness. Washington, DC: American Psychological Association.

Drotar, D. (2008). Thoughts on establishing research significance and preserving scientific integrity. Journal of Pediatric Psychology, 33, 1-3.

Drotar, D. (2009a). Editorial: Thoughts on improving the quality of manuscripts submitted to the Journal of Pediatric Psychology: Writing a convincing introduction. Journal of Pediatric Psychology, 34, 1-3.

Drotar, D. (2009b). Editorial: How to report methods in the Journal of Pediatric Psychology. Journal of Pediatric Psychology. Advance Access published February 10, 2009; doi:10.1093/jpepsy/jsp002

Drotar, D., & Lemanek, K. (2001). Steps toward a clinically relevant science of interventions in pediatric settings. Journal of Pediatric Psychology, 26, 385-394.

Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology. Advance Access published February 16, 2009; doi:10.1093/jpepsy/jsp004

Gold, M. R., Siegel, J. E., Russell, L. B., & Weinstein, M. C. (1996). Cost-effectiveness in health and medicine. New York: Oxford University Press.

Jacobson, N. S., Roberts, L. J., Berns, S. B., & McGlinchey, J. B. (1999). Methods for defining and determining clinical significance of treatment effects: Description, application, and alternatives. Journal of Consulting and Clinical Psychology, 67, 300-307.

Jacobson, N. S., & Truax, P. (1991). Clinical significance: A statistical approach to defining meaningful change in psychotherapy research. Journal of Consulting and Clinical Psychology, 59, 12-19.

Kazdin, A. E. (1999). The meanings and measurement of clinical significance. Journal of Consulting and Clinical Psychology, 67, 332-339.

Kazdin, A. E. (2000). Psychotherapy for children and adolescents: Directions for research and practice. New York: Oxford University Press.

Kendall, P. C., Marrs-Garcia, A., Nath, S. R., & Sheldrick, R. C. (1999). Normative comparisons for the evaluation of clinical significance. Journal of Consulting and Clinical Psychology, 67, 285-299.

Moher, D., Schulz, K. F., & Altman, D. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association, 285, 1987-1991.

Ogles, B. M., Lambert, M. J., & Sawyer, J. D. (1995). Clinical significance of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Journal of Consulting and Clinical Psychology, 63, 321-326.

Revicki, D., Hays, R. D., Cella, D., & Sloan, J. (2008). Recommended methods for determining responsiveness and minimally important differences for patient-reported outcomes. Journal of Clinical Epidemiology, 61, 102-109.

Schafer, J. L., & Graham, J. W. (2002). Missing data: Our view of the state of the art. Psychological Methods, 7, 147-177.

Sternberg, R. J., & Gordeva, T. (2006). The anatomy of impact: What makes an article influential? Psychological Science, 7, 69-75.

Stinson, J. N., McGrath, P. J., & Yamada, J. T. (2003). Clinical trials in the Journal of Pediatric Psychology: Applying the CONSORT statement. Journal of Pediatric Psychology, 28, 159-167.

Tong, A., Sainsbury, P., & Craig, J. (2007). Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. International Journal for Quality in Health Care, 19, 349-357.

Tooth, L., Ware, R., Bain, C., Purdie, D. M., & Dobson, A. (2005). Quality of reporting on observational longitudinal research. American Journal of Epidemiology, 161, 280-288.

Vacha-Haase, T., & Thompson, B. (2004). How to estimate and interpret various effect sizes. Journal of Counseling Psychology, 51, 473-481.

Wallgren, A., Wallgren, B., Persson, R., Jorner, V., & Haaland, F. A. (1996). Graphing statistics and data: Creating better charts. Thousand Oaks, CA: Sage.

Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594-604.

Wyrwich, K. W., Bullinger, M., Aaronson, N., Hays, R. D., Patrick, D. L., Symonds, T., & The Clinical Significance Consensus Meeting Group. (2005). Estimating clinically significant differences in quality of life outcomes. Quality of Life Research, 14, 285-295.