INTRODUCTION

Peer review is used worldwide as a tool to assess and improve the quality of submissions to biomedical journals. Peer review is the solid foundation on which our scientific journals are based and is the key feature that sets our journals apart from the host of other ways that information and opinions are published both in paper and online in the modern era. It is also the major feature that ensures the trust of both clinicians and the public in the integrity and value of a particular published article. Thus, there is virtually no one who does not need to know something about this process. If you are a practicing clinician, you need to have trust in this process; if you are a submitting author, you need to know how your paper is going to be assessed; if you are a patient looking for information, you need to know what sets peer-reviewed articles apart from articles that you might read on the Internet; and for people performing reviews, it is vital to have a structured framework and knowledge of what is required of you. Interestingly, we may be all of the above types of people at some time in our careers, so this article, while primarily providing help for people submitting a review, will also help you to understand the strengths, weaknesses and controversies in peer review, including a few facts that might surprise you and links to several resources that will certainly help you.

I HAVE JUST GOT AN E-MAIL ASKING ME TO REVIEW A PAPER FOR THE EJCTS/ICVTS. WHAT DO I DO NOW?

Firstly, congratulations. You should be proud of yourself. All reviewers are personally selected by the Associate Editors or the Editor-in-Chief. While the e-mail you receive looks automated, we have chosen you ourselves. We keep a list of people who submit to the EJCTS, and once you have had three articles accepted by our journal you will be promoted to membership of the reviewer team. It is mainly from this reviewer team that we select our reviewers. There are other ways to be selected: often an Editor may know you or have heard of your work personally, or may have seen that you have written a particularly good paper on the subject, and have selected you that way.

You will never have been selected automatically by a computer and we only select two reviewers per paper. In our invitation e-mail we will prompt you to either accept or decline the review by following a link to our website. If you decline or do not reply, we will ask somebody else to act as a reviewer in your place.

HOW DOES BEING A REVIEWER HELP ME?

You may not be sure if you want to act as a reviewer. After all, you may have heard that the EJCTS rejects ∼60–70% of articles, and as our reviews are anonymous, no one will even know you have done it! Here are three reasons to encourage you to act as a reviewer.

Firstly, all clinicians who submit papers for publication must have an interest in the health of the review process. If you expect high-quality reviews for your publications then we would hope that you would be interested in providing high-quality reviews for others. With two reviewers per paper, we actually need you to review two papers for every paper you submit over the course of your career!

Secondly, it is a great way to be the first to read a paper, and critically appraising other clinicians' papers is a good way to learn how to write better papers yourself.

Thirdly and finally, we would like to encourage recognition of the work of reviewers.

In the UK, the House of Commons Science and Technology Committee published a report on the system of peer review in scientific journals in July this year [1]. They said ‘we encourage greater recognition of the work carried out by reviewers, by both publishers and employers. All publishers need to have in place systems for recording and acknowledging the contribution of those involved in peer review’. At the EJCTS, we realize the great contribution made by you. Therefore, in your personal information area on the submission system (http://submit-ejcts.ctsnetjournals.org/), you can generate a Journal Activity Certificate to present to your institution as evidence of this unpaid work that you do. We also thank all our reviewers annually by publishing a list of reviewers in April.

HOW SHOULD I REVIEW THE PAPER?

Many journals have structured ways in which they require you to submit your review, or templates that require answers as part of your review. We prefer to allow you to write your review as you wish.

We are basically asking you to perform a critical appraisal of the article. This is a skill that takes some practice and benefits from some reading on how critical appraisal is done. This is because it is often very simple to identify problems in what is written in an article, but much more difficult to spot issues that are missing from a paper. A critical appraisal checklist helps you to identify these missing items. For example, a paper may state that 5% of their cohort did not have any data for their CCS angina scoring. You may easily identify this as a problem on your first reading. However, when you look down a critical appraisal checklist, it asks you how the outcome measure was defined and whether it was valid, reliable and reproducible, and you realize that the paper has not given you any details as to how the CCS angina scores were collected in this study.

Critical appraisal checklists are widely available for free at the click of a mouse. We have provided a link at the end of this article to a full list of them along with many other links under ‘interesting resources’.

For your information, we give you two areas to document your review. The ‘Comments for the Editor’ is your main area for helping us to decide on the quality of the paper and for making your recommendations. The ‘Comments for the Authors’ is there to either ask questions of the authors or suggest changes that you would like them to make. In order to give you an idea of what we expect, we have also provided an example of a good review and a bad review in Appendix 1.

We will now go through some of the important categories that we like to see addressed in reviews:

  • (1) Objectives of the paper and importance of the research question.

Have the authors clearly stated the aims of their paper and how important is this question? How well did they answer this question in their paper? Do you know of other papers that have also addressed this issue that have not been referenced? It is always nice to see a one or two line précis of what you feel is the main objective of the study at the beginning of your review.

  • (2) Study group, methods and sample size.

How did they get their patients? Is this a consecutive and comprehensive series of patients from a particular timeframe, or have they excluded important groups of patients from the study? Were all the operations done in the same way?

If they are describing a novel operation or a new way of doing an operation, have they given you enough information about how they did it? This is important, as readers will want to know whether they should change to or try out this new technique and will therefore want a good amount of detail on it. You may also be looking for reasons why their results are better, and they may simply have improved their perioperative care elsewhere; thus data such as cardioplegia, bypass temperature and anaesthetic medications are useful.

Sample size is a very important issue in papers that do not report significant differences (i.e. ‘we could find no adverse effect on mortality with this new operation’). There should always be a sample size calculation in such a paper. Without one, how do you know whether the study included enough patients to exclude a difference that might have emerged with a larger cohort? It is the sample size calculation that tells you this. A sample size calculation is as important in a negative study as a P-value is in a study that does find a difference.

If you have doubts about the size of the study, you can even have a go at working out the sample size using one of the many sample size calculators available online (see Interesting resources) to see if they have included enough patients.
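
If you want a feel for what those online calculators are doing, the underlying arithmetic is simple enough to sketch. The following Python snippet (a minimal sketch using hypothetical event rates, not taken from any particular paper) estimates the patients needed per group to show a difference between two proportions:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Patients needed PER GROUP to detect a difference between two
    event rates p1 and p2 (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: to detect a drop in mortality from 5% to 2%
# with 80% power at the 5% significance level
n = sample_size_two_proportions(0.05, 0.02)
print(n)  # 586 patients per group
```

On these hypothetical numbers, a 200-patient series reporting ‘no difference’ between a 5% and a 2% mortality rate would be far too small to exclude one.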

  • (3) Outcome measures.

Outcome measures such as mortality may seem easy to use, but measures like ‘improvement in angina’, ‘pain scores’ and even ‘cerebrovascular accident’ may be more difficult to measure. Has the paper measured what you want to see as an outcome measure, and did they do it properly? Is the outcome measure reliable, reproducible and valid? The operating surgeon asking a patient 1 week after coronary artery bypass grafting whether they still have angina may get a different answer than a research nurse would. Outcome measures that are much lower or higher than you would expect may not have been measured correctly. If the stroke rate for total arch replacement with deep hypothermic circulatory arrest is 0.5%, you may want to question how the outcome was measured. How did they verify that patients did not have a stroke? Some databases only record adverse outcomes when the database managers are made aware of them. This is not a reliable way to verify that patients have not had a problem such as a cerebrovascular accident or pulmonary embolism once they have left the hospital, and some kind of prospective verification may be required.

  • (4) Presentation of results.

How are the results presented? As a general guide, the text tells the story, the tables provide the data and the figures illustrate the story. By convention, Table 1 would usually be a table of demographics of the patients in the study, telling the readers exactly who these patients were. Can you compare the patients with your own? Have they given us the EuroSCORE in each group?

A very interesting exercise is to have a go at adding up a few rows or columns to check that the figures are correct. It is amazing how often you can find a simple adding-up error if you look for one. Often only one author has calculated these figures and no other author has cross-checked them.
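
As a trivial illustration, the whole ‘check’ is a few lines of arithmetic. Here it is in Python, using the hypothetical numbers from the example review in Appendix 1:

```python
# A paper states 6500 patients in total, but the 'previous MI' row of its
# univariate table reports 2354 patients in the six-wire group and 4046 in
# the seven-wire group (hypothetical numbers from the Appendix 1 example).
stated_total = 6500
reported_counts = {"six-wire group": 2354, "seven-wire group": 4046}

recalculated = sum(reported_counts.values())
print(recalculated)                 # 6400
print(stated_total - recalculated)  # 100 patients unaccounted for
```

A discrepancy like this does not necessarily mean misconduct, but it is exactly the kind of simple error worth raising with the authors.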

  • (5) Statistics.

First of all, please do not feel that you cannot be a reviewer if you feel weak on statistics. If you have any queries on the tests used (i.e. you don't recognize them), then feel free to suggest that we get a statistician to check this part of the paper. We recommend that you look at the results of the paper. If they make sense to you then often the statistics used are less important.

Some of the most common papers we receive are cohort studies. Perhaps the surgeon has changed the way he gives cardioplegia and is writing a paper using logistic regression to prove that, despite the differences in age, BMI and diabetes, the new cardioplegia reduced the number of patients who stayed in the ITU for more than 2 days. You do not need to be able to perform logistic regression to be able to review the paper. The most important issue with logistic regression is which variables are put into the model. If an author misses an important variable (in our example, if his bypass time was 15 min shorter with his new cardioplegia technique), the results will make the cardioplegia method look better when actually something else was making the difference. Therefore, you must have a good look at what was entered and mention any variables that were omitted that could have accounted for the difference.
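
To see why an omitted variable matters, here is a small, purely hypothetical simulation in Python. The outcome (a long ITU stay) is made to depend only on bypass time, yet because the new cardioplegia happens to coincide with shorter bypass times, the crude comparison makes the cardioplegia look protective; stratifying by bypass time (a simple stand-in for entering it into the regression model) removes the apparent effect:

```python
import random

random.seed(1)

# Hypothetical simulation: the new cardioplegia has NO true effect on
# prolonged ITU stay, but it coincides with shorter bypass times, and
# long bypass time is what really drives the outcome.
patients = []
for i in range(10000):
    new_cardioplegia = i % 2 == 0
    long_bypass = random.random() < (0.3 if new_cardioplegia else 0.6)
    long_itu = random.random() < (0.30 if long_bypass else 0.10)
    patients.append((new_cardioplegia, long_bypass, long_itu))

def counts(rows):
    """2x2 cell counts: exposure = new cardioplegia, outcome = long ITU stay."""
    a = sum(1 for c, _, o in rows if c and o)
    b = sum(1 for c, _, o in rows if c and not o)
    c_ = sum(1 for c, _, o in rows if not c and o)
    d = sum(1 for c, _, o in rows if not c and not o)
    return a, b, c_, d

# Crude analysis: bypass time ignored, cardioplegia looks protective
a, b, c_, d = counts(patients)
crude = (a * d) / (b * c_)

# Mantel-Haenszel odds ratio stratified by bypass time: effect vanishes
num = den = 0.0
for stratum in (True, False):
    a, b, c_, d = counts([p for p in patients if p[1] == stratum])
    n = a + b + c_ + d
    num += a * d / n
    den += b * c_ / n
adjusted = num / den

print(f"crude OR {crude:.2f}, bypass-adjusted OR {adjusted:.2f}")
```

The same distortion occurs in a logistic regression model that omits bypass time, which is why the list of variables entered matters far more than the mathematics of the fit.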

We will let you into a secret now. The statistician will not be able to check the logistic regression model very thoroughly either. The statistician can make sure that the authors used the right words to describe the methodology and that they accounted for which variables were included or excluded, but they will not be able to check the mathematics of the results. The authors have to be trusted to have calculated these correctly as there is no way to check the results without the original data.

  • (6) Discussion and interpretation.

Let us know what you think of the discussion and interpretation of the results. A good paper will have a well-researched discussion, referencing all other recent papers and comparing results. Good reviewers will often quickly put the topic of the paper under review into Medline and let us know of any other papers that have previously been published in this area.

Let us know what you think of their interpretation of the results. Is there a reasonable conclusion based on the results presented?

  • (7) Simple but important: grammar and spelling.

We greatly appreciate it if you can identify any major grammatical or serious typographical errors. While, in common with the majority of biomedical journals, we publish in English, most of our authors and readers do not speak English as their first language. We do not want to discriminate on the basis of first language, so it takes a little more work on all our parts to make sure that all major language corrections are made prior to publication.

  • (8) Different types of submitted article.

Not all our submitted papers are either cohort studies or randomized controlled trials (RCTs). In fact, one of our most common categories is the case report. We now almost always only accept these into the ICVTS unless they are a report of an exceptional breakthrough, but as one of our most common submissions, it is likely that you may be asked to review case reports. We also accept papers under the categories: ‘How-to-do-it’; ‘Images in Cardio-thoracic Surgery’; ‘Bail-out Procedures’; and several other single or low case number types of articles.

Many of the above categories do not apply to these. However, it is still very important to read the paper fully, to advise on any typographical errors and then to check that the authors have fully explained the case or procedure in the detail that you would expect. We would then greatly appreciate it if you could have a quick look on Medline or another literature resource to verify the rarity or special nature of the case. A good case report will have a decent review of the literature referenced in the discussion. These papers should not take you as long to review, but a detailed look at them, with a check of the literature and your view on their importance, is most appreciated.

  • (9) Ethics and probity.

Contrary to common perception, neither editors, reviewers nor publishers can effectively police papers for fabrication or embellishment of data. If you ever have any doubts about the veracity of the data in a paper that you have either read or reviewed, we would encourage correspondence directly with the Editor-in-Chief. More commonly, authors will forget to mention ethical approval, whether it was needed or waived; it is the responsibility of all authors to address this in their papers, even when performing a retrospective cohort study. Also, it is important that if you feel you have a conflict of interest with a paper, you declare this and decline the review request.

Finally, you will always find several issues, problems or errors in the paper you are reviewing, ranging from the trivial to the fatal. Each time you raise an issue, you may also like to mention how important you think it is. Thus, you might mention typographical errors as minor errors; but if you found that a paper claiming to be an RCT had in fact just taken two surgeons' operating lists and argued that because the referral system was random, the patients were randomly allocated, this would be a fatal flaw in the paper.

WHAT YOU SHOULD KNOW ABOUT THE EJCTS/ICVTS REVIEW SYSTEM

To many, the automated submission system is the face of the EJCTS/ICVTS. However, every paper is seen at every stage by people; only some of the e-mails look automated, in order to speed up our work. The EJCTS/ICVTS office consists of four full-time staff, who work on your papers on a daily basis with the Editor-in-Chief, who is himself also a full-time surgeon. The office is supported by Associate Editors, an Editorial Board and Assistant Editors, who all give their time to the EJCTS/ICVTS for free to look through and review your papers.

Once you submit a paper, a member of the office checks that it conforms to the submission rules, including the word count, and then passes it on to an Associate Editor for the domain that you selected when you were asked to categorize your paper. The Associate Editor then has 7 days to select two reviewers. The submission system generates a list of potential reviewers from the areas that you selected when you categorized your paper, paired with the areas of expertise flagged by reviewers in their personal information area.

From this list of potential reviewers, the Associate Editor can see how many papers each person has reviewed in the past, how many they are currently reviewing and their average turnaround time.

Once two reviewers are selected, an e-mail is sent to the reviewer to ask them to perform the review and they have 2 weeks to reply. Once the two reviews are back, the Associate Editor has 14 days to read the reviews and also to provide a recommendation. The paper is then sent to the Editor-in-Chief, who personally sends the paper back to the authors with a decision.

On resubmission the paper goes directly to the Associate Editor, who reads the responses to the reviews and then passes his recommendations back to the Editor-in-Chief, who personally sends the paper back to the authors with the final decision. The reviewers do not routinely see the authors' responses to the reviews.

WHAT DO THE EDITORS EXPECT OF MY REVIEW?

Firstly, we would really appreciate timely responses and reviews. As you will know, delays in the time to publication are the primary frustration for authors. Try not to wait until the day before the deadline, and if you cannot do the review, just let us know when you get the review request.

Secondly, have a good read of the whole paper including the tables, figures and reference list.

It is difficult to suggest how long to spend on a review, but perhaps 30 min to an hour would be a reasonable estimate. Many of our best reviewers clearly spend more time than this on their reviews, and it is much appreciated by the Editors. Of note, the British Medical Journal (BMJ) recommends that its reviewers spend between 2 and 5 h on a review, and a Journal of the American Medical Association (JAMA) survey reported that the average time spent on a review was 2.3 h [2].

Thirdly, we would like you to be courteous to the authors and write a review that you would be happy to receive yourself or happy to read out in front of them if you were in a room with them.

Finally, please do not include your recommendation within your ‘Comments for the Authors’. Only include it in the ‘Comments for the Editor’.

WHAT IS KNOWN ABOUT PEER REVIEW AND HOW COULD WE DO IT DIFFERENTLY?

‘If peer review was a drug it would never be allowed onto the market’, said Drummond Rennie, deputy editor of the JAMA and intellectual father of the international congresses of peer review. ‘Peer review would not get onto the market because we have no convincing evidence of its benefits but a lot of evidence of its flaws [3]’.

There have been four international congresses of peer review [4–6], three Cochrane reviews on peer review and 50 randomized trials [7–9]. So what have we learnt? Repeated studies show that agreement among reviewers is relatively poor [10–14]. In one RCT, a recently accepted manuscript was altered so that eight mistakes or areas of weakness were inserted; it was then sent out to 420 reviewers for the JAMA. The average number of mistakes identified was two, and 20% of reviewers identified none [15]. In another study in the USA, 12 papers that had already been accepted were resubmitted to the same journals, but with the authors' names and institutions changed to ones that were not well known. Eight of the previously accepted papers were rejected [16].

A study in North America [17] that looked at the quality of reviews found that the best reviews were not performed by the most senior clinicians but actually by those with training in epidemiology, people under 40 and those known to the editors.

It is also very expensive and time-consuming. The Research Information Network estimated that in terms of clinicians' time performing reviews, this costs institutions $1.9 billion worldwide every year, and the review process is well known to be a slow system that can take 6 months or more to get a publication in many journals.

Thus, we know that the peer review system provides relatively poor agreement as to what makes a good paper. It identifies mistakes poorly and is very expensive for the institutions of those who choose to undertake reviews. It is time-consuming, slow at getting papers into print, and it can be biased against people from less well-known institutions. These shortcomings have encouraged editors to look at other models that they could follow.

OTHER MODELS OF PEER REVIEW

Open peer review

The BMJ is a strong advocate of open peer review, whereby the names of the reviewers are known to the authors [18–21]. Fiona Godlee, the editor of the BMJ, states that the main argument is an ethical one: reviewers are making, or helping to make, judgements that are of great importance to authors. None of us would want to be judged for a serious offence by an anonymous, unseen judge; justice has to be done and be seen to be done, so peer reviewers should be identified. Furthermore, the BMJ has published evidence, in the form of an RCT, showing that open review does not adversely affect the quality of the review [15, 22]. However, on the other side of the argument, these randomized trials also showed no advantage to signed reviews, and detractors argue that reviewers may be inhibited from giving an open response if they feel that the authors will find out their name. In surveys, up to 80% of reviewers said that they would prefer to stay anonymous. Thus, we do not currently advocate open peer review in the EJCTS or ICVTS.

E-prints and online repository journals

The physics community has been leading the way with posting ‘e-prints’ (effectively drafts of papers) on an open website (www.arxiv.org), inviting everybody to respond and then later submitting the paper to a formal journal. Everybody thus has a chance to read a study long before it is published in paper form. The BMJ and the Lancet experimented with this format [23], but it didn't take off in a big way for these journals. It has been proposed that the reason that authors do not like this model in biomedical journals is the risk of public misinterpretation of results, or media interest prior to adequate vetting of the article.

The Public Library of Science (PLoS) model of online repository journals was started in 2006. It calls itself an open access journal, trying to remove all barriers (for example, subscription costs) and to publish ‘all rigorous science’, placing an ‘emphasis on research validity over potential impact’ (www.plos.org). It accepts nearly 70% of all submissions and has a brief review system that simply looks for errors rather than questioning the research methodology.

Electronic post-publication peer review

This is used by the Cochrane Collaboration, although it has been slow to take off in areas other than systematic reviews. Essentially, the paper is published electronically without review. People are then welcome to review or make comments at any time, and the authors are in turn encouraged to make amendments in response to those comments in a dynamic and prompt way as they come in. This process has long been predicted to be the future of all medical publishing in the Internet era, but has yet to materialize as a workable model.

The journal ICVTS could be seen as a great success in novel publishing techniques, using its interactive ‘E-commenting’ format. Although it is E-published after review, it does allow for further criticism in the comments section and therefore may be seen as an important innovation in this area, being a hybrid of peer review and post-publication peer review.

Just doing what we do better

David Hearse, the former editor of Cardiovascular Research, re-engineered his journal's peer review system. He reduced the time to make a decision from 3 months (and often longer) to 3 weeks. He did this by sending out the paper to three reviewers on the day it arrived, rewarding the reviewers (with a music CD) if they responded within 2 weeks, and making a decision on the basis of two of the three reviewers' opinions. He also dramatically increased the number of reviewers on his database from 200 to 2000 and changed them from being 80% British to 80% from outside Britain. As a result, he increased his submission rate from 200 to 2000 and greatly increased his impact factor. Thus, novel approaches to the well-known system of peer review may also be rewarding to look at in the future.

Conclusions

Despite the shortcomings of the peer review system highlighted above, this system is still highly respected and, perhaps as a result, the number of peer-reviewed journals has increased by one-third in the last 10 years.

The recent UK Government Enquiry into the peer review system concluded that ‘Peer review in scholarly publishing, in one form or another, is crucial to the reputation and reliability of scientific research … despite the many criticisms and the little solid evidence on its efficacy, editorial peer review is considered by many as important and not something that can be dispensed with’ [1].

Peer review remains held in high regard by authors, reviewers, politicians and the general public, and in the absence of any other system that can verify the integrity and quality of a paper, our aim at the EJCTS/ICVTS is to continue to strengthen our review system; to continue to innovate; but mainly to ensure that our standards are as high as possible in order to help authors to publish the highest-quality articles in Cardio-Thoracic and Vascular Surgery.

INTERESTING RESOURCES

  1. You can find an outstanding series of critical appraisal checklists on every type of study to use when reviewing a paper at the bestbets.org website: http://www.bestbets.org/links/BET-CA-worksheets.php

  2. An excellent series of resources to help reviewers, published by the BMJ: http://resources.bmj.com/bmj/reviewers/training-materials

  3. The House of Commons Science and Technology Committee report on Peer review in scientific publications: http://www.publications.parliament.uk/pa/cm201012/cmselect/cmsctech/856/856.pdf

  4. A good sample size calculator: http://www.dssresearch.com/KnowledgeCenter/toolkitcalculators/samplesizecalculators.aspx

  5. The 6th International Congress on peer review and biomedical publication: http://www.ama-assn.org/public/peer/peerhome.htm

  6. The World Association of Medical Editors Peer review information: http://www.wame.org/wame-by-topic#peer

APPENDIX 1

EXAMPLE OF A GOOD REVIEW

Comments for the Editor

Thank you for asking me to review this paper entitled ‘Additional sternal wires to reduce the incidence of mediastinitis’.

This is a paper that reviews the last 7 years of cardiac surgical operations at their institution (6500 patients) and concludes that, accounting for the differences between groups, inserting seven or more wires significantly reduces the incidence of mediastinitis when compared with six wires.

While there are many papers on this subject (including a review that they missed: Khasati et al., ICVTS 2004;3:191–4), as well as papers on different wire techniques and other ways to close the chest (e.g. nitinol clips, Sternaband etc.), this might potentially make a good paper for the EJCTS because of its large size and the way in which the two groups were nicely balanced. In addition, this is an important topic for us all, and they present a simple intervention that might potentially reduce the incidence of mediastinitis. However, I have some queries and suggestions that the authors should address first.

First, I have a question about the study cohort. Was this all the patients operated on in this timeframe? The authors did not mention any excluded patients, but from our own experience we would occasionally leave a chest open if the patient was unstable, or use a Robicsek-type figure-of-eight wire technique. No such patients were mentioned in this paper. Also, the authors obtained positive results, but why did they choose 7 years rather than a longer period? This was not mentioned.

I have a major concern about the outcome measure of mediastinitis in this study. They report mediastinitis requiring surgery, mediastinitis not requiring surgery or superficial sternal infection. From our own experience, some patients have surgery for sterile dehiscence. Secondly, I am not sure that their definition of mediastinitis not requiring surgery was fully reported: did all patients in this category meet the ICD-10 definition of mediastinitis, and how many were microbiologically proven? This is a very important question, as it will directly impact the number of patients in each group, and as the numbers suffering mediastinitis are relatively small, even small mistakes in the categories could render the outcomes insignificant.

The results and tables are nicely presented. However, in Table 1 I did not see the EuroSCORE for each group, or smoking history. In our experience, current smoking is a risk factor for mediastinitis.

I am not an expert in logistic regression, but they do not seem to have put many variables into the model. In particular, I am concerned that because diabetes was not very significant in the univariate analysis, it was not entered into the model. In our experience, this is a very important factor for mediastinitis. It may be that they tried to enter it as four categories rather than simply diabetes or no diabetes. I worry that if it is put into the model, the wire technique may become less important.

Also, there was mention that two of the seven surgeons always use eight wires and five always use six wires, but there is no analysis of the data by surgeon. I am sure that this needs to be addressed.

In Table 2 (the univariate analysis), in the patients with previous MI, there seem to be 2354 patients in the six-wire group and 4046 in the seven-wire group. This adds up to 6400. Where have 100 patients gone? There may have been an adding-up error here.

The discussion seems reasonable and well referenced and I couldn't see any other papers that they had missed other than the review mentioned earlier. Also their conclusion seems reasonable.

With regard to minor points, they have written ‘mediastinis’ on page 3, line 140 and ‘dehisence’ on page 4, line 180. These need correcting.

Thus, once they have responded to these questions, I am sure you will be able to consider it for publication in the EJCTS.

Comments for the Authors

Thank you for submitting this excellent article to the EJCTS. I was pleased to receive it as a reviewer.

It is certainly an interesting and important topic and you have amassed a large number of patients who have had either a six-wire closure or an eight-wire closure making your comparison interesting, especially as the groups are fairly even in number.

I have the following questions for you which I believe need to be addressed prior to publication:

Why did you choose that particular timeframe? I couldn't see the reason for not looking back further.

In our own practice, we would occasionally leave a chest open if the patient was unstable, or use a Robicsek-type figure-of-eight wire technique. No such patients are mentioned in your paper. Does your centre never vary from this technique, or did you actually exclude patients not having a simple wire technique?

I am concerned about the definitions of your mediastinitis groups in the study. You did not mention sterile sternal dehiscence. We see several of these; are these patients all in your 'mediastinitis requiring surgery' category? There is an official International Classification of Diseases (ICD-10) definition of mediastinitis. Could you take a look at this and see how your outcome measure definition compares with theirs? I wonder if you could define your outcome measures much more tightly, distinguishing microbiologically proven from non-microbiologically proven cases.

In your Table 1, I didn't see EuroSCORE or smoking history. Do you have these in your database?

I would also like you to check the arithmetic in the previous-MI category: I am missing 100 patients. Can you verify that the other columns in this table also add up correctly?

I am also concerned that you did not include diabetes or the individual surgeon in your logistic regression analysis. These are potentially very important variables in predicting mediastinitis and should be included.

With regard to minor points, you have written 'mediastinis' (page 3, line 140) and 'dehisence' (page 4, line 180). These need correcting.

Good luck with your paper, and thanks again for submitting it.

EXAMPLE OF A BAD REVIEW

Comments for the Editor

This is a paper on additional sternal wires for protecting against mediastinitis.

This is a nicely done article and I recommend it for publication.

Comments for the Authors

This is a paper on additional sternal wires for protecting against mediastinitis.

This is a nicely done article and I recommend it for publication.
