How to choose where best to deposit your article before submitting it to a PCI?

There are many platforms on which you can deposit your preprint (let’s call it a “preprint” even if you don’t intend to publish it in a journal). These platforms are of three types: preprint and e-print servers, institutional repositories and open archives. 

You should carefully consider the characteristics of these platforms before depositing the preprint you want to submit to a PCI. In particular, you should ask yourself the following questions:

  • Does my institution recommend a particular archive for preprints? 

Research institutes or universities may recommend that affiliated researchers deposit preprints in their institutional repository, or may have a specific directory in an open archive.

  • Is the scientific scope of my article appropriate for the platform?

Some preprint servers, repositories and archives host papers from a specific field: EcoEvoRxiv, PaleorXiv, HAL-SHS, etc. 

arXiv was long considered to be restricted to mathematics, physics and computer science. However, this is no longer the case: it now accepts any article including quantitative analyses in biology, finance, statistics, electrical engineering, systems science, and economics. Note that almost all biology papers include quantitative analyses.

  • Is my preprint of a type compatible with the platform?

Most preprint servers, repositories and archives accept any sort of preprint, but some impose restrictions. For instance, bioRxiv is restricted to preprints presenting “Results” and does not host reviews, opinions, or perspectives. 

  • Will the peer-reviewed status of my article be clearly displayed on the platform?

To our knowledge, no preprint server, repository or archive correctly displays the peer-review status of its content yet. bioRxiv shows annotations/reviews from Review Commons in the upper right corner of the corresponding preprint home page. bioRxiv also displays a link to PCI reviews and recommendations in the “Preprint discussion sites covering this article” section, found just below the article’s abstract and copyright statement. However, for these preprints, as for non-evaluated preprints, bioRxiv displays a sentence stating that “This article is a preprint and has not been certified by peer review”. The same type of warning is displayed on Research Square. This sentence is only removed for preprints that are subsequently published in journals; it will not be removed even if your preprint is recommended by a PCI, giving readers the incorrect impression that your preprint has not been peer-reviewed. Reviews and endorsements of bioRxiv preprints are also listed on the Hive platform, developed by eLife.

If you want to know whether a specific preprint has been recommended by a PCI, then you should read this blog post.

  • Do I want to post a final version of my preprint if it is published in a journal?

The “author accepted manuscript” or “postprint”, i.e. the unformatted version of the accepted manuscript that still belongs to the authors (i.e. the version before copyright transfer), may be used for green Open Access (“self-archiving”). 

It may not be possible to deposit postprints on some servers, repositories and archives. For instance, bioRxiv does not allow authors to upload postprints, whereas HAL and arXiv do. You can read more on the subject in this post.

Most preprint servers, repositories and archives accept all articles for which the authors hold copyright, and, in cases of submission to a journal, if the journal’s copyright transfer agreement does not conflict with the servers’ licenses. The Sherpa Romeo website is a useful resource for checking whether and when the journal to which you submitted your preprint allows you to archive the postprint.

  • Is the platform well-indexed and are its articles easy to find?

This is an extremely important question. Some platforms are well indexed by major search engines (Google, Google Scholar), e.g., arXiv, HAL, bioRxiv, OSF-preprints and its branded preprint servers, whereas others are currently less visible (e.g., Zenodo). In general, institutional repositories and open archives are not referenced in scientific databases (e.g., PubMed, Europe PMC, Dimensions), but preprint servers may be (e.g., arXiv is indexed in Dimensions but not in Europe PMC, and only bioRxiv preprints with NIH funding are indexed in PubMed). 

These technical features are continually evolving, and it is, therefore, a good idea to do a quick search for updates before posting your article.

A number of platforms are also compatible, to various extents, with the extremely useful PubPeer browser plugin (see this blog post). 

  • Do I want to bypass the classic publication system?

Some servers seek to facilitate the transfer of preprints to journals by considering themselves as a preliminary step in the publication process (e.g., Research Square). Others, such as arXiv or OSF-preprints, have taken a more independent position, seeking to disseminate research results as quickly as possible, without considering the fate of preprints.

By answering these questions (along with others relating to metadata richness, services provided, full html availability, etc.), you should be able to make the best choice of platform for your preprint. This choice is an important one, because switching servers/repositories after an initial deposition is discouraged, to avoid the creation of multiple DOIs for the same article. Moreover, there is usually an “indexing” advantage to the first server used for a particular article on well-known search engines. It is, therefore, worth taking the time to make the right choice for your work.

  • Additional criteria and modifications suggested by colleagues after the first publication of the post:

Commitment to preservation of works. For example, at the very least, authors are advised not to choose a location where the terms state that works can be deleted/removed without notice.

There was an inaccuracy in the figure (corrected 22 Sept 2020): arXiv does not limit deposits to articles that have not been published in journals; authors may post e-prints of published articles if they retained the copyright or have a license to do so from the publisher.

How can you know whether or not a preprint has been recommended by a PCI?

You’re looking for articles in Google Scholar, Google, Europe PMC, Zenodo or bioRxiv, and you retrieve a results list containing preprints. You want to know whether those preprints have been recommended by a PCI after peer review. How can you do that? There are several options:

1- Go to the PCI X website and search for the article


2- In Europe PMC, check the REVIEW section. In bioRxiv, check the “Preprint discussion sites covering this article:” section below the abstract.

3- Install the PubPeer browser extension in your preferred web browser. In Google Scholar, Google, and bioRxiv, just check for the green banner. In Zenodo, you have to go to the article page and check for the green banner in the “Cite as” section.

Our advice: install the PubPeer browser extension in your preferred web browser, use it, and check for the green banner!

Screenshots of Google, Google Scholar and bioRxiv showing the green banners of PubPeer

The PubPeer extension does not yet work in arXiv, in the results pages of Europe PMC or Zenodo, in OSF Preprints, or in PrePubmed. But it works in PubMed, in Web of Science, and on most journal websites. It also works in the reference manager Zotero. In HAL, preprints recommended by a PCI will soon be tagged.

4- If the articles are bioRxiv preprints, then you can use a new platform: The Hive. It gathers in the same place bioRxiv preprints, reviews, and endorsements from different communities of editors and reviewers, including PCI.

Submit your preregistration to Peer Community In for peer review!

By Corina Logan and Dieter Lukas

In 2018, we were excited to implement the peer review of preregistrations at Peer Community In (PCI). PCI is a community-driven initiative that provides free and transparent assessment of research articles. The ease and clarity of its approach, publishing reviews and recommendations of articles that are deposited elsewhere, seemed well suited to a flexible implementation of the peer review of preregistrations. A preregistration is a study plan (hypotheses, methods, and analysis plan) written before you collect the data. The new feature we offer at PCI is the ability to submit your preregistrations for pre-study peer review. After the study is complete, it undergoes a second (post-study) peer review to double-check that you did what you said you were going to do and, if not, whether the deviations maintained the scientific validity of the research. We’re excited about this way of conducting verifiable research, and we’re also always learning, so if you have suggestions on how we can improve, please contact us.

Figure 1. A. The traditional way of conducting research where, after payment by readers and/or authors, readers are only able to see the final draft. B. Registered reports as they are often implemented. For both A and B, it is possible to publish the peer reviews alongside the final article at some journals, and it is the author’s choice whether to post a preprint. C. The solutions we’ve implemented with preregistrations at PCI (see Table 1) allow anyone to verify the entire research process (study plan, peer reviews, and all revisions of the preprint, including the final version) and evaluate the quality of the research for themselves, at no cost to authors or readers.

Why submit a preregistration for pre-study peer review? We tend to think of it this way: this piece of research is going to undergo peer review at some point. Why not get it peer reviewed before we collect the data when we can actually change things? Having your study pre-study peer reviewed can help you avoid several risks, which also saves you loads of time and resources by making the research scientifically valid before you’ve invested in actually conducting the research (Table 1). There is a great overview article on all of this by Nosek and colleagues (2018).

Table 1. Some of the risks solved by the track for the peer review of preregistrations at PCI.

| Risk | Peer review of preregistration at PCI |
| --- | --- |
| HARKing (Hypothesizing After Results are Known) | My hypotheses and predictions are preregistered, so it is clear which ideas were developed after data collection and analysis |
| P-hacking (analyzing the data in as many ways as possible until a significant p-value is found) | The analysis plan is preregistered and peer reviewed, which means I would need to give valid scientific reasons for changing a method or analysis |
| Methods can’t answer the research questions | Peer reviewers help me by pointing out potential limitations of my preregistered methods and suggesting alternatives |
| Have to add post-hoc predictions and hypotheses after analyses | I explore the whole logical space at the preregistration stage by providing alternative predictions |
| Only significant results are published | My results advance research independent of significance |
| Unclear exactly what data are needed until after data collection has started | I, my team, and the reviewers discuss and consider all variables and potential confounds before data collection (usually) and before analyses (always) begin |
| Fighting over authorship position | Authorship contributions are listed in the preregistration; the order is agreed upon initially and can change if some people contribute more or less than expected |

What’s the difference between the peer-review of preregistrations at PCI and Registered Reports at a journal?

Both undergo pre- and post-study peer review in the same way, but we made preregistrations at PCI more flexible than Registered Reports (RRs) currently are. The main thing we hear from researchers is that they want a more flexible system because things always change along the way and they need to be able to control when data collection can start. So we made that happen at PCI. 

Table 2. Common fears researchers have about pre-study peer review and how the solutions at PCI’s track for the peer review of preregistrations address these fears.

| Researcher fears | Peer review of preregistration at PCI |
| --- | --- |
| Preregistrations are inflexible | I revise my preregistrations all the time! I say what changed and why, so everyone can see what happened and at what stage (e.g., pre-data collection, post-data collection, pre-analysis; all noted in the commit message at GitHub). The point is to make the process transparent so anyone can see what happened at every stage. As long as the changes keep the research scientifically valid, it should be no problem to pass the post-study peer review |
| I have to wait until the preregistration has passed peer review before collecting data | When you submit your preregistration to PCI, you can say in the cover letter when data collection is going to start and ask whether it would be possible to finish the peer review before then. That way, there is a heads-up, and if data collection starts in the middle of peer review, it won’t be a surprise to anyone. PCI Ecology has a fairly quick turnaround (1-4 months), so I have at least received the first round of reviews and revised before collecting the data for most of the preregistrations I have submitted |
| I can’t base a preregistration on data that have already been collected | You can submit a preregistration that uses secondary data (data that are already being, or were, collected for other hypotheses): you make new hypotheses and preregister them, wait for the preregistration to pass peer review at PCI, and then analyze the data |
| One preregistration must result in one article | My research is long-term and I want to preregister my big ideas and how they fit together. As such, one preregistration might result in multiple post-study articles, and the process at PCI accommodates this |
| I can’t submit a reproducible manuscript | You can submit your preregistration in as reproducible a format as you wish (e.g., rmarkdown). Just make sure there is a version that is easily readable for the Recommender and reviewers. I love writing my preregistrations in rmarkdown because I have just one file that undergoes changes from the hypothesis stage to incorporating the final results and discussion: it’s a living document. There is a history that goes with it, and anyone can view it at my GitHub repository. It’s also super easy to automatically turn an rmarkdown file into other file formats (e.g., PDF, Word, etc.). I’m not aware of many journals that allow one to submit reproducible manuscripts |
| I have to write almost the whole article at the preregistration stage | You don’t have to write the whole article for your preregistration. PCI requires only an outline: write an abstract, and list your hypotheses, methods, and analysis plan. You can write your intro and flesh out all of the details later when you’re preparing it for post-study peer review. Giving these details in the outline is enough for researchers who are not associated with my project to understand what I’m testing and why |
| What if I don’t like the journal my preregistration is at by the time the study is complete? | The publishing landscape changes quickly, and I’m not willing to commit to putting a Registered Report at a particular journal when it could take a few years to conduct the research. Committing to a journal so far in advance means the journal might not suit my strict publishing ethics by the time the article is finished. PCI isn’t a journal, it’s a (free) peer review service, so the changes it might undergo won’t conflict with my future publishing ethics |
| I do exploratory research – what is there to preregister? | Some of the research I do is exploratory (see an example), and one thing I’ve noticed is that I’ve never gone exploring without a reason. All you have to do is write this reason down, and that can be your preregistration. If you also know something about the types of variables you want to look at, or just have an idea of what they are, you can add those too |

What does your preregistration need to be submittable to PCI? It needs to be publicly available online (just like the preprints that PCI reviews), version controlled (so there is a time and date stamp whenever you update it because the reviewers and Recommender will have to track the changes over the course of the study), and have a DOI (digital object identifier). One way of doing this is to upload your file to OSF as a preregistration or to your institutional repository, making it publicly available, where it will get a DOI that you can submit to PCI. OSF is version tracked in the sense that it records when the files were re-uploaded, but not what changes were made. You’ll need to check with your institutional repository to see whether they do version tracking. We wanted a better version tracking system for our preregistrations (which also accommodates multiple authors making changes at the same time), so we opted for a different route: version-controlled reproducible manuscripts.

Learning how to submit my preregistrations as version-controlled reproducible manuscripts to PCI

We’ve now submitted 10 preregistrations to PCI Ecology for pre-study peer review and, with the help of PCI founders, Recommenders, reviewers, my research team, and other researchers, we worked out a way to make these submissions easy for the Recommenders and reviewers to read (using HTML files), while maintaining a verifiable research process at GitHub via our rmarkdown (Rmd) files. The process we’ve developed is just one of many ways it could work – other ways would need to be developed according to the needs of the authors – so we’ll share with you in detail how we do this so you can use as much or as little of this process as you like. 

We write our manuscripts in rmarkdown, a free-to-use open source format, where we combine the manuscript text with the code for the analyses all in one document (saving tons of time in looking around for various bits of code in random folders with random file names). We then put the Rmd file at GitHub, which has the benefit of being fully version tracked with time and date stamps and track changes (all of this happens automatically and multiple people can edit at the same time). (Note: we connect RStudio with the GitHub repository using GitHub Desktop, which makes for easy synching.)
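As a concrete illustration, a minimal Rmd file along these lines might look like the following sketch (the title, section name, and analysis code are invented for illustration; the YAML options shown are standard rmarkdown `html_document` settings):

````markdown
---
title: "My preregistration (illustrative title)"
author: "Author One, Author Two"
output:
  html_document:
    toc: true
    code_folding: hide
---

## Hypothesis 1

Prose describing the hypothesis, predictions, and methods goes here.

```{r analysis-plan, eval = FALSE}
# Planned analysis code lives next to the text it belongs to
model <- lm(response ~ treatment, data = d)
summary(model)
```
````

Because text and code sit in one file, every commit to the repository version-tracks the study plan and the analysis plan together.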

Table 3. We list a few of the main reasons why we developed this process of submitting reproducible manuscripts as preregistrations to PCI.

**Problem:** Feedback from Recommenders and reviewers indicated that it was not convenient to review .Rmd files at GitHub (in 2018, GitHub changed the appearance of these files such that a whole paragraph sits on a single line and you have to scroll far to the right to read it).
**Solution:** I use RStudio to export the Rmd files to HTML and upload the HTML files to my free website at GitHub Pages, so reviewers can read the easy-to-read HTML files (example HTML and Rmd files).

**Problem:** Recommenders and reviewers were frustrated that they had to jump around between the various pieces of the preregistration (e.g., scroll to Hypothesis 1 in the prediction section, the methods section, and the analysis section).
**Solution:** I remedied this by making a floating table of contents for the HTML file (this option is only available for the HTML version), which makes it super easy to jump between sections (example floating table of contents).
**Rmd code:** the floating table of contents is configured in the YAML header:

    output:
      html_document:
        toc: true
        toc_depth: 4
        toc_float:
          collapsed: false
        code_folding: hide

**Problem:** Feedback from Recommenders and reviewers indicated that having the R code visible was distracting.
**Solution:** I added code folding to the HTML file, which hides the R code by default; if someone wants to see it, they can just click the “code” button and it will appear (and re-hide it by clicking the same button, now called “hide”) (example here).
**Rmd code:** see the last line of the header above: `code_folding: hide`

**Problem:** The DOI box at the PCI submission website was tricky: I keep all of my preregistrations in one GitHub repository, and I can’t get a DOI for each file in the repo, only for the whole repo (and the easiest way to do this is to connect it with an OSF component). Entering your OSF DOI in this box would therefore bring someone to your entire GitHub repository, and Recommenders and reviewers wouldn’t know which file they are supposed to review.
**Solution:** I now enter the HTML link in the DOI box. This still meets PCI’s need to have version tracking on the file (that’s the point of the DOI), because in the HTML file and in the cover letter at the submission page, I list the version-tracked (Rmd) version of the file at GitHub.
**Rmd code:** ensure the reader can navigate to the Rmd file:

    ***Click [here]( to navigate to the version-tracked reproducible manuscript (.Rmd file)***

**Problem:** Recommenders and reviewers felt there were too many links to other documents throughout the preregistration (links to other preregistrations, the protocols, and a separate figure).
**Solution:** The floating table of contents really helps with this, as does including all figures as part of the HTML file (they actually show up in this file, whereas they don’t in the Rmd file at GitHub). I also now have an Open Materials section (listed in the floating table of contents) where I provide links to protocols all in one place (see an example).

How to submit a preregistration to PCI for peer review

Below, we share the nitty gritty details for submitting a version-tracked reproducible manuscript to PCI. But lots of these details should still be useful if you have chosen a different route for your preregistration (e.g., by placing it at OSF). Before submitting, make sure the Rmd file has all of the most recent changes and is the version you want to submit. On your last commit for this file, in the GitHub commit note write “SUBMITTED TO PCI ECOLOGY FOR PRE-STUDY PEER REVIEW”. 

  1. Go to the PCI Ecology website, click “SUBMIT A PREPRINT”, and read the instructions there, including clicking on the extra instructions for preregistrations at the bottom, then click “SUBMIT YOUR PREPRINT”
  2. Fill in the details about your submission
    • If you want a double-blind peer review, check the box “I wish an anonymous submission”
    • Title: [insert title]
    • Authors: [how you want them to appear in the citation]
    • DOI (or URL): [replace html link with the correct link for your preregistration. You can ignore the fact that this isn’t a DOI because the reviewers just want to know which file they need to review]
    • Version: [e.g., refer to the unique GitHub ID for the version you are submitting (go to the Rmd file, click History, scroll to the appropriate commit, then click on the icon of the clipboard with the arrow pointing to the left and it automatically copies the ID for you, which will look something like this: 2e0d4b74dcf49eadefcfa4711ad52cc05af824e8 ]
    • Check the box “I wish to add a small picture”. Choose a professional looking picture
    • Picture: click “Choose file” to upload the picture you chose
    • Abstract: Before the text of your abstract, you might want to alert the managing board and potential editors that this is a preregistration by adding “This is a PREREGISTRATION submitted for pre-study peer review. Our planned data collection START DATE is [insert month year], therefore it would be ideal if the peer review process could be completed before then.”
      [insert abstract]
    • Thematic fields: always check Preregistration field. Then check other fields that can additionally apply to your research
    • Keywords: birds, great-tailed grackle, [etc.]
    • Cover letter: [edit as needed, making sure to replace the Rmd and HTML URLs with the correct links, attribute photo credit to the person who took the photo, and state the planned data collection start date and that it would be ideal if we could get through the peer review process before then]
      Dear PCI Managing Board and Recommender,
      We thank you for the opportunity to submit a PREREGISTRATION for pre-study peer review (for more information about preregistrations, please see this handy article: and also the instructions at PCI Ecology: Our version-tracked version of this preregistration is available at [insert Rmd URL here and add \ so it works in markdown – see note below]. Photo credit goes to Corina Logan (CC-BY 4.0). This research is part of a senior thesis at Arizona State University and we plan to begin data analyses in late September 2019. Therefore, we would greatly appreciate if it would be possible to complete the review process before then. Please let us know if you have any questions or need further information. Many thanks for your attention!
      All our best,
      [add co-author names]
    • Check the box “I am an author of the article and I am acting on behalf of all the authors”, but first make sure that all authors have seen the most recent version and that they are happy for you to submit it
    • Check the box “This preprint has not been published…”
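If your preregistration lives in a git repository, the commit ID for the Version field can also be retrieved from the command line instead of from GitHub’s History view. A sketch (the repository-setup lines just make the example self-contained; the file name and commit message are illustrative):

```shell
# Create a throwaway repository so the example runs anywhere
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "demo@example.org"
git config user.name "Demo"

# Commit the preregistration (file name is illustrative)
echo "preregistration draft" > g_flexmanip.Rmd
git add g_flexmanip.Rmd
git commit -q -m "SUBMITTED TO PCI ECOLOGY FOR PRE-STUDY PEER REVIEW"

# The full 40-character commit ID: this is what GitHub's clipboard icon copies
sha=$(git log -1 --format=%H -- g_flexmanip.Rmd)
echo "$sha"
```

Paste the printed ID into the Version box exactly as you would the one copied from GitHub.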

NOTE: in all boxes EXCEPT the DOI box, if a URL has an underscore (“_”) in it, it will show up as broken at the PCI website because they use markdown. Therefore, add a backslash (“\”) just before the underscore in the URL to make it show up properly at their website (e.g.,\_sociallearning.html).
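The underscore escaping can also be done mechanically; a short Python sketch (the URL is invented for illustration):

```python
# Backslash-escape underscores so the URL survives PCI's markdown rendering
url = "https://example.github.io/Preregistrations/g_flexmanip.html"
escaped = url.replace("_", r"\_")
print(escaped)  # https://example.github.io/Preregistrations/g\_flexmanip.html
```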

  3. Next, you will have the opportunity to suggest Recommenders (editors) to handle your submission.
  • To figure out who you want to suggest, go to PCI Ecology > About > Recommenders and search by key words that you type in and/or by thematic field (note: “Toggle thematic fields” unchecks all boxes!). After you click “Search”, scroll down to see the results.
  • Click on people’s names to see their profiles and get more information about whether they would be appropriate (e.g., lots of people who work on biological invasions study plants, not animals). If this information isn’t in their profile, search for their name on Google Scholar and read more about their work at their websites.
  • Suggest at least 5 Recommenders, but suggesting 10 is much better (consider balancing gender and whether they are based in a country that is over- or under-privileged).
  • NOTE: if the search function at the PCI Ecology website stops letting you search for more Recommenders, just click the DONE button and it will take you back to an area where you can navigate to search for more Recommenders.

Resubmitting revisions to PCI at the pre-study peer review stage

When it comes time to RESUBMIT your work to PCI after it has been peer reviewed, draft the rebuttal in Google Docs, and include a letter to the Recommender and reviewers at the top. In the letter, make sure to include a link to the HTML version (the reviewers like to read the HTML versions) as well as a link to the version-tracked version of the document (e.g., the .Rmd file at GitHub). Here is some example text:

“Dear [insert Recommender and reviewer names],

We greatly appreciate the time you have taken to give us such useful feedback! We are very thankful for your willingness to participate in the peer review of preregistrations, and we are happy to have the opportunity to revise and resubmit.

We revised our preregistration and associated files at, and we responded to your comments (which we numbered for clarity) below.

Note that the version-tracked version of this preregistration is in rmarkdown at GitHub: In case you want to see the history of track changes for this document at GitHub, click the previous link and then click the “History” button on the right near the top. From there, you can scroll through our comments on what was changed for each save event and if you want to see exactly what was changed, click on the text that describes the change and it will show you the text that was replaced (in red) next to the new text (in green).

We think the revised version is much improved due to your generous feedback!

All our best,

[Insert author names]”

Tips for writing the rebuttal in a way that makes it easier for the editor and reviewers:

  • Number the comments and your responses so you can easily cross reference and so readers can orient themselves in the document (see example).
  • Include quotations of the text you changed directly in the rebuttal document so readers don’t have to read the rebuttal and the manuscript, they can just read the rebuttal to see what changed (see example).
  • You can upload your rebuttal as a PDF file if you wish. Or you can copy and paste the rebuttal directly into the text box at PCI, which will ultimately show up in markdown format. Therefore, if you want the rebuttal to be formatted in a particular way, use this markdown cheatsheet to make that happen. One additional formatting command that is not mentioned in the cheatsheet but might be useful: in case you want to separate text/paragraphs by more than a single line, write “ ” at the start of a new line followed by pressing enter to create a blank line after it.
  • If you ended up changing the title of your preregistration in response to the reviewer comments, remember to update it at the PCI website before you submit your rebuttal.
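Put together, a numbered rebuttal entry might look like this in markdown (the comment and response text are invented for illustration):

```markdown
**Comment 1:** The sample size justification for Hypothesis 1 is unclear.

**Response 1:** We agree, and we added a justification to the analysis plan.
Changed text (quoted here so you do not need to open the manuscript):

> "We will test a minimum of 20 individuals per condition because..."
```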

Congratulations! You received In Principle Acceptance at PCI! Now what?

Now the PCI managers will ask you to format your preregistration so it shows the PCI logo, the citation for the preregistration, and the citation and link for the PCI Recommendation (see an example). The Rmd code for how to make all of this happen is in the example, but I also list it here:

<img width="50%" src="logoPCIecology.png"> #Logo available at

**Cite as:** Logan CJ, Rowney C, Bergeron L, Seitz B, Blaisdell A, Johnson-Ulrich Z, Folsom M, McCune K. 2019. Is behavioral flexibility manipulatable and, if so, does it improve flexibility and problem solving in a new context? ( In principle acceptance by PCI Ecology of the version on 26 Mar 2019

<img width="5%" src="logoOpenAccess.png"> <img width="5%" src="logoOpenCode.png"> <img width="5%" src="logoOpenPeerReview.png">

**This preregistration has been pre-study peer reviewed and received an In Principle Recommendation by:**

Aurélie Coulon (2019) Can context changes improve behavioral flexibility? Towards a better understanding of species adaptability to environmental changes. *Peer Community in Ecology*, 100019. 10.24072/pci.ecology.100019

 – Reviewers: Maxime Dahirel and Andrea Griffin

How to cite your preregistrations that have received an in principle acceptance

The Max Planck Society counts my preregistrations that have passed pre-study peer review as research outputs, and these are included in our departmental evaluations. I list these as “in press” because this is the most analogous traditional term; they just spend more time in press than an article submitted to a journal after the work was finished would. I cite my preregistrations that are in review, and those that have passed pre-study peer review, on my CV and in applications. At this stage, the Max Planck Institute for Evolutionary Anthropology librarian uploads them to the institutional repository and lists them as “in press”, and then they are automatically listed at Google Scholar.

Once they pass post-study peer review, the citation will change to:

Logan, C. J., MacPherson M, Rowney, C., Bergeron, L., Seitz, B., Blaisdell, A., Folsom M, Johnson-Ulrich, Z., & McCune, K. (2019) Is behavioral flexibility manipulatable and, if so, does it improve flexibility and problem solving in a new context? ( Peer Community In Ecology, 100019 (peer review history

Conducting your study

Make sure you keep track of any deviations from the preregistered methods and analyses and that you justify them to ensure the research remains scientifically valid. We find it handy to do this in a section called “State of the Data”, which is at the top of each preregistration (see an example). We update the same Rmd file that we originally submitted for pre-study peer review so we never have to go looking for other versions of this document.

Before resubmitting to PCI for the post-study peer review, deposit your data in a repository. We like the Knowledge Network for Biocomplexity’s data repository because it is free and has great metadata requirements that make your database findable.

We haven’t yet gotten to the post-study stage with our peer reviewed preregistrations at PCI Ecology, but Dieter has a great idea about how to turn a preregistration into a final manuscript without having to rewrite everything: above the preregistration, add the background, results, and discussion using a short journal article format. The article would have the following structure:

  1. Title and authors
  2. PCI badge with the citations to the preregistration with in principle acceptance and the review history
  3. an updated abstract which states your findings
  4. a short introduction that presents the relevant background information and states your hypotheses (but you don’t necessarily need all of the details about the various predictions and their alternatives)
  5. results where you briefly state the method used
  6. discussion
  7. the modified preregistration, starting with the state of the data section listing any deviations, the detailed hypotheses and predictions, and the methods and analysis plan, including any code

Perhaps at this point you might be wondering whether it would be better to turn this one preregistration into more than one finished article. Maybe you want each hypothesis to be its own article as a stand-alone piece. In this case, you could write new text for points 3-6 above (you could also change the title, and potentially some authors are affiliated with only certain hypotheses), while keeping the relevant pieces of the preregistration (point 7 above). Your preregistration will move from having an in principle acceptance to a full acceptance only after all pieces of the preregistration have passed post-study peer review.

Because the research is now complete, it is eligible to be a preprint, and you could opt to post a PDF copy of the Rmd file on a preprint server rather than having the HTML version as the easy-to-read option.

Resubmitting to PCI: post-study peer review stage

You finished your study, analyzed the results, and revised your preregistration into its final draft! Now you resubmit it to PCI as a new submission and state in the cover letter that it is the post-study submission for the preregistration that passed its pre-study peer review. Suggest the same Recommender that handled your submission before, because they will be familiar with the work and can more easily assess any changes that need to be addressed. In the cover letter, highlight where in the submission you explain any deviations from your preregistration.

PCI economic model

General principles

  • PCI is a non-profit and non-commercial organization run by scientists for scientists. All costs are kept as low as possible.
  • PCI works on a voluntary basis: reviewers, recommenders, and administrators of the PCIs and the co-founders of the PCI project receive no money for the work they perform (aside from their salary from their regular jobs).
  • The workload is shared as much as possible to reduce the burden on any one individual.
  • Article processing time is divided between evaluation and editorial management. Most of this time is spent on scientific evaluation. The time needed for editorial management is minimal.

Cost for the overall PCI project

Human costs

  • PCI project management and promotion: The management and the promotion of the overall PCI project currently requires the equivalent of a full-time job but should decrease to about half that after 2020. This work is done by scientists on a voluntary basis and/or as part of their normal academic activities.

Functioning costs = ~ €5,300/year

  • PCI pays an annual subscription (€270/year) to Crossref to (i) use Similarity Check, a service for checking for plagiarism, and (ii) assign DOIs to recommendation texts and editorial correspondence.
  • Other costs for running and promoting the PCI project (travel to and accommodation at the annual general meeting, conferences, seminars, etc.) are currently about €5,000/year.
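As an aside (an illustration, not part of PCI’s own documentation): DOIs registered through Crossref, such as the recommendation DOI cited earlier in this document, can be resolved through the standard doi.org resolver or inspected via Crossref’s public REST API. The helper functions below are hypothetical names that merely build the relevant URLs from a DOI string:

```python
# Sketch: build the canonical resolver URL and the Crossref REST API
# metadata URL for a DOI. The endpoints are Crossref's and DOI.org's
# public, documented URL patterns; the function names are illustrative.

def doi_resolver_url(doi: str) -> str:
    """Return the canonical https://doi.org resolver URL for a DOI."""
    return f"https://doi.org/{doi}"

def crossref_api_url(doi: str) -> str:
    """Return the Crossref REST API endpoint serving this DOI's metadata."""
    return f"https://api.crossref.org/works/{doi}"

if __name__ == "__main__":
    # Example DOI: the PCI Ecology recommendation cited in this document.
    doi = "10.24072/pci.ecology.100019"
    print(doi_resolver_url(doi))
    print(crossref_api_url(doi))
```

Fetching the API URL returns JSON metadata (title, authors, dates) for the recommendation text, which is how the DOIs purchased through the Crossref subscription become machine-resolvable.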

Cost for a given PCI (e.g. PCI Evol Biol, PCI Ecology…)

Human costs

  • Article evaluation (peer reviews, editorial decisions): As at many journals, recommenders and reviewers at each PCI are not paid by PCI. They perform the evaluation and recommendation process (between 1 and 6 days/article when accounting for the time spent by the recommender and the reviewers) on a voluntary basis and/or as part of their normal academic activities. The article evaluation costs are lower than in traditional journals because there are no editors-in-chief (EiC) and because not all submitted articles are sent out for in-depth peer review.
  • PCI administration (promotion, problem solving, functioning): Administrators of each PCI receive no salary or bonus from PCI and the time they spend administering and promoting the PCI (between 10 and 100 hrs/year) is done on a voluntary basis and/or as part of their normal academic activities.
  • Article editing (technical editing, proofreading, final formatting): There is no technical editing, proofreading or final formatting of the recommended articles, and thus no corresponding cost. The authors are free to format the final version of their article as they like. This cost, borne by traditional journals, is therefore zero for PCI.
  • Article management (checking scope and conflict of interests; defining and monitoring evaluation deadlines; sending reminders to authors, recommenders and reviewers; requesting DOIs; checking for plagiarism; formatting recommendations for online publication; etc.): The editorial management of a submitted preprint takes an average of 2 hrs/article. For each PCI, the organisation of the editorial management of a submitted preprint is up to the administrators. It can be shared among administrators, among members of the managing board, or among recommenders. It can also be performed by a specific person. In any case, the time spent is on a voluntary basis and/or as part of normal academic activities. The cost of editorial management of a PCI is lower than at most traditional journals because i) not all submitted articles are sent out for in-depth peer review and ii) only limited handling is necessary after scientific approval (e.g. no handling for technical editing).
  • PCI’s website and email account development and maintenance: PCI websites run on an in-house system, with no need to pay for commercial software. Setting up, running, and updating all PCI websites and email accounts is done by a web developer and a computer scientist. The time they spend on PCI (about 300 hrs/year until 2020 and 50 hrs/year thereafter) is given on a voluntary basis and/or as part of their normal academic activities. By using this in-house system rather than subscribing to commercial journal management software, PCI avoids spending several thousand euros per year per PCI.

Functioning costs = ~ €5,300/year

  • The cost for web hosting and data backup is €300/PCI/year.
  • The other costs for running and promoting a PCI (travel to and accommodation at conferences, meetings, seminars, etc.) are about €5,000/year.

Costs supported by other institutions/companies

The cost of depositing a preprint in an open archive is estimated at less than €10 per manuscript (e.g. 123,523 preprints were uploaded to arXiv in 2017 on a budget of $1,019,665). This cost is covered by the open archive platforms and their sponsors.

Sponsorship for the functioning costs

PCI is a non-profit organisation governed by the French law of 1901. Established in 2016, PCI receives financial support from research organisations to cover its functioning costs.

Annual sponsorship (total across all PCIs):

2016: €8,000

2017: €24,959

2018: €7,500

2019: €46,164

List of sponsors

Universities: Sorbonne Université, Université de Montpellier, UCLouvain, Aix-Marseille Université, Université de Rennes 1, Université de Strasbourg;

Research Institutes: Inrae, Museum National d’Histoire Naturelle, INEE-Cnrs, Ifremer, AgroParisTech, KU Leuven, IRD, EPHE.

Laboratories: UMR CBGP, UMR ISA, Logan’s Lab at Max Planck Institute of Evolutionary Anthropology

Scientific Societies: The European Society for the Study of Evolution (ESEB), Society for the Study of Evolution (SSE), Société Française d’Ecologie et d’Evolution (Sfe2), Society for Systematic Biology (SSB)


Differences with other projects

Many new publication services have recently appeared. Here is a non-exhaustive list of these services, with comments, highlighting the differences between these services and PCI.

Overlay journals (e.g. Discrete Analysis, Discrete Mathematics & Theoretical Computer Science) are electronic open-access journals containing peer-reviewed articles deposited in open archives, such as arXiv, and not published elsewhere. Overlay journals are diamond open-access (free for readers and free for authors). The PCI project is not designed to create journals of any kind (even overlay journals). It is, instead, simply a system for evaluating preprints and awarding them a recommendation if the recommenders handling them consider the peer-review evaluation to be sufficiently positive to merit such a recommendation.

SciPost is an online scientific publication portal. Its journals (in physics) are diamond open-access (free for readers and free for authors) and use a stringent peer-reviewing procedure. Articles must be deposited in arXiv before submission. The main difference between SciPost and PCI is that SciPost is a journal publishing articles. PCIs do not publish the preprints they recommend, only the peer-review evaluations, the editorial correspondence and the recommendation texts explaining why a recommender decided to recommend a preprint for a PCI. As the preprints recommended by PCIs are not published by the PCI, they can be submitted to a journal for publication even after their recommendation by a PCI.

F1000Research is a for-profit business offering an open-access and open peer-review publication platform. Regardless of the type of article, F1000Research charges an article-processing charge (APC) dependent on length (up to 1,000 words: US $150; 1,000-2,500 words: US $500; over 2,500 words: US $1,000; a surcharge of $1,000 is placed on any article exceeding 8,000 words). Articles are published first and then peer-reviewed. The main difference between this system and PCI is that, in F1000Research: i) the authors themselves identify, suggest and invite reviewers, ii) no recommenders intervene in the evaluation process, and there are therefore no editorial decisions during the evaluation process, iii) the reviewers themselves decide whether to approve the article. Wellcome Open Research and Gates Open Research operate on the same platform.

F1000Prime is a service for the recommendation of articles after their publication. Readers have to pay to read F1000 recommendations (subscriptions of US $9.95/month). F1000Prime is a for-profit business and an actor within the current system based on commercial journal publications.

The Winnower is “an open-access online scholarly publishing platform that employs open post-publication peer review”. There is a small fee (US $25 per DOI) for paper archiving, and no recommendations are provided. The Winnower allows authors: 1) to upload an article onto the platform, after which researchers, colleagues, and other scientists are encouraged to make critical comments on the article over a given period of time, 2) to revise the article on the basis of the comments received, and 3) to decide to freeze the article by providing it with a DOI. The end result is thus not a “recommendation” as such, but an open process of critical review without a given threshold determining whether an article may be considered scientifically “valid”. Articles can stand in The Winnower with no peer review, as on a preprint server.

The Peerage of Science operates upstream of the publication system and provides support to existing scientific journals. It is therefore an actor within the current system based on commercial journal publications. The goal is for authors to submit an article in order to obtain constructive criticism before submitting it (together with their responses to the criticisms received) to a scientific journal. It is stated that “Authors may accept a live publishing offer from a subscription-only journal, or may choose to export the peer reviews to any journal of their choice.” and that “The revenues of Peerage of Science come from organizations wishing to purchase the peer review services for use in their decision-making, such as publishers, funding organizations, and universities.” Again, this is a very different model from the PCI project.

biOverlay is similar to an overlay journal for the natural sciences, except that, in contrast to PCI, authors do not submit their own preprints to biOverlay for evaluation: associate editors choose the articles they wish to evaluate. Authors therefore do not necessarily know that their papers have been selected by biOverlay and sent out for review.

PreLights is a community platform for highlighting and commenting on preprints. It is a service run by the biological community and supported by The Company of Biologists, a not-for-profit publishing organization. In contrast to PCI, authors do not submit their own preprints to PreLights for evaluation. Early-career researchers select preprints and write digests about them.

PREreview is a community and platform for the collaborative writing of preprint reviews. It gathers journal clubs providing feedback to authors, but it also publishes reviews (of preprints) written by any researcher with an ORCID iD. In contrast to PCI, the authors do not submit their own preprints to PREreview for evaluation.

Hypothesis is a non-profit organization providing a free online plugin for the annotation, on the web, of almost any kind of document (e.g. blogs, scientific articles, e-books) in very different formats (e.g. PDF, HTML). Hypothesis has recently begun collaborating with preprint platforms to allow the layering of discussions over preprints. The organization offers the possibility of creating journal clubs with a mode for annotations publicly visible to all users. In contrast to PCI, the authors do not submit their own preprints to Hypothesis for evaluation.

PubPeer, supported by the non-profit PubPeer Foundation, is an online platform originally devoted to post-publication comments. However, PubPeer has recently been opened up to preprint reviews. In contrast to PCI, preprint reviews published on PubPeer are not requested by the authors and are not used to help recommenders make editorial decisions concerning preprints.

Plaudit is a tool allowing researchers to publicly endorse research publications – including preprints – they find valuable. Plaudit identifies endorsers via their researcher ORCID iD, and the articles they endorse through the DOIs of those articles. A browser extension is needed. Plaudit is open source, community-driven, free to use and not for profit.

Review Commons is a platform that peer-reviews research papers before submission to a journal. It provides authors with a Refereed Preprint, which includes the authors’ manuscript, reports from a single round of peer review and the authors’ response. The refereed preprint is then sent to bioRxiv and affiliate journals. The goal is to facilitate the evaluation by journals afterwards. Review Commons is a partnership between EMBO and ASAPbio and is initially free.

Peeriodicals is a platform gathering virtual journals whose editors in chief are free to select the manuscripts they want to highlight. There is no mandatory formal submission by the authors (although authors can suggest their papers), no mandatory formal peer-review (although there can be some) and no formal editorial decision (although there can be some). Peeriodicals is free for readers, authors and editors.

eLife is a non-profit organization running an open-access journal. It was originally free for readers and authors, but publication fees have since been introduced: currently, “A fee of $2,500 is collected for published papers”.

PeerJ is an open-access peer-reviewed scientific megajournal covering research in the biological and medical sciences. Authors have to pay to publish: they either pay US $1,095 per paper, or each author pays a one-off fee of US $399 (or more) allowing them to publish one (or more) paper per year in the journal. Additional fees may be required for very long manuscripts.

See also ReimagineReview, a registry of platforms and experiments innovating around the peer review of scientific outputs.

Steps in the creation of a new PCI

1) Choose one or two colleagues to set up and manage the PCI

Having at least two administrators improves the monitoring and sustainability of the project.

Setting up the PCI involves defining the subject it will cover, establishing a first group of recommenders, and submitting the project for validation by the PCI association.

Administration of the PCI involves appointing more recommenders, encouraging preprint submissions and ensuring that the evaluation and recommendation processes are managed correctly.

2) Define the subject

The subject should be defined carefully. It can be wide or narrow. A wider subject may attract more papers, resulting in a more selective PCI. However, if the subject is too wide, the various members of the community may fail to identify themselves as belonging to the PCI concerned. Conversely, not everyone in the field will join the PCI, so, if the subject is too narrow, the community it attracts may be unsustainable. Statistics on preprint use in each field may be helpful.

Try to make sure that the new PCI does not overlap too much with other existing or forthcoming PCIs (contact us for verification).

3) Establish a first group of recommenders

You will need to establish a group of 20 to 50 recommenders. This initial group of recommenders must be high-quality scientists recognized in the field, with as many international members as possible: members of learned societies and of the editorial boards of renowned journals in the field, and winners of prizes or competitions, for example. The success of a PCI depends on the inclusion of both high-profile senior scientists and dynamic young researchers. Gender parity is also desirable.

4) Submit your proposal to the PCI association for approval

The creation of a new PCI must be approved by the non-profit “Peer Community in” organization. The members of the managing boards of the existing PCIs form the board of this organization. Send your proposal (1 to 2 pages, indicating the motivation behind the creation of your PCI, its subject, and the names of the administrators and the first group of recommenders) to the PCI association for approval.

5) Recruit more recommenders

Once the PCI has been validated, the first group of recommenders should be used to appoint more recommenders. This process should take place after the creation of the PCI website, which is password-protected to prevent general public access and provides a means of registering new recommenders.

6) Set up a managing board

Once a certain predefined number of recommenders have been recruited (e.g. 50, 100 or 200), a managing board should be defined. The members of this managing board are responsible for validating editorial decisions concerning the preprints submitted, approving the nomination of new recommenders and dealing with potential problems arising between authors and the recommenders responsible for evaluating and/or recommending preprints (see the FAQ). The managing board must comprise five to 15 recommenders and must include the administrators.

7) Open the PCI publicly: receive submissions and manage evaluations

The management of a submitted preprint – from its submission to its rejection or recommendation, excluding the evaluation by the recommender and the reviewers – takes about two hours, on average. The administrators decide for themselves how best to organize preprint management (shared between administrators, managing board members, recommenders, or other people).


We can help you by providing extensive documentary resources about PCI, including short movies, and templates of messages for the invitation of co-founders and recommenders. If the creation of your PCI is accepted, we will provide you with a fully functional website (front and back office) and help you to manage the first few (about 20, probably) preprints submitted.

Send any questions to contact[ at ]peercommunityin[ dot ]org

In summary

PCI promotes scientific reproducibility

PCI wants to promote scientific reproducibility to improve the overall robustness and integrity of our scientific conclusions. To this end, PCI has established three mandatory rules and makes two suggestions to authors:

Mandatory rules:

Preprints recommended by PCI must provide the readers:

Raw data, made available either in the text or through an open data repository such as Zenodo, Dryad or an institutional repository (see the Directory of Open Access Repositories). Data must be reusable; the metadata and accompanying text must therefore carefully describe the data.

Details of the quantitative analyses (e.g., data treatment and statistical scripts in R, bioinformatic pipeline scripts, etc.) and details concerning simulations (scripts, code), provided in the text, as appendices, or through an open data repository such as Zenodo, Dryad or an institutional repository (see the Directory of Open Access Repositories). The scripts or code must be carefully described such that another researcher can run them.

Details of experimental procedures. These details must be given in the text or as appendices at the end of the article.

Suggestions to authors:

-PCI encourages authors to use preregistrations: Authors may post their research questions and analysis plan to an independent registry before observing the research outcomes, and thus before writing and submitting their article. This provides a way for them to clarify their hypotheses, avoid confusing “postdictions” with predictions, and carefully plan appropriate statistical treatment of the data (e.g. see doi:10.1073/pnas.1708274114).

-PCI also welcomes submissions of preregistrations. Authors can submit their preregistrations to a PCI before beginning their study, and thus before acquiring the data. Preregistrations are then evaluated by recommenders based on independent reviews, in exactly the same way as preprint articles. Preregistrations can thus be rejected or undergo revisions, improving the quality and robustness of the experimental design. When a preregistration is accepted, the subsequent article submitted to the corresponding PCI will be recommended provided the study has been conducted as described in the preregistration (or any modifications are clearly justified). In this way, an article cannot be rejected solely because of the outcome of the study. Details on preregistration submissions can be found, for example, here.

The PCI project is supported by


PCI and Impact Factors

PCI does not have an impact factor (IF).


This is simply because PCI is not a journal. It does not publish research articles; it publishes only evaluations of research articles and texts recommending research articles.

Preprints recommended by a PCI can still be characterized by citation metrics, like any other preprint. Google Scholar, for example, provides the number of citations of preprints.


If a preprint is eventually published in a classic journal, the number of citations of the preprint can then be added to the number of citations of the published article. Google Scholar performs this calculation automatically. Consequently, the citations of the preprint are also taken into consideration in the various metrics calculated by Google Scholar.

An IF-like metric can then be calculated for PCI based on the number of citations of articles recommended by the PCI.
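As a sketch (with hypothetical notation; this is not an official PCI metric), such an IF-like value could be computed by analogy with the two-year Journal Impact Factor, replacing “articles published by the journal” with “preprints recommended by the PCI”:

```latex
% N_t   : number of preprints recommended by the PCI in year t
% C_y(t): citations received in year y by the preprints recommended in year t
\[
  \mathrm{IF}_{\mathrm{PCI}}(y) \;=\; \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\]
```

That is, the average number of citations in year \(y\) to preprints recommended in the two preceding years.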

PCI has signed the San Francisco Declaration on Research Assessment (DORA), together with more than 500 organisations (>300 universities, >30 national academies of sciences) and more than 12,000 individuals:

San Francisco Declaration on Research Assessment

There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment. We invite interested parties across all scientific disciplines to indicate their support by adding their names to this Declaration.

The outputs from scientific research are many and varied, including: research articles reporting new knowledge, data, reagents, and software; intellectual property; and highly trained young scientists. Funding agencies, institutions that employ scientists, and scientists themselves, all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.

The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters*, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7]. Below we make a number of recommendations for improving the way in which the quality of research output is evaluated. Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer-reviewed research paper will remain a central research output that informs research assessment. Our recommendations therefore focus primarily on practices relating to research articles published in peer-reviewed journals but can and should be extended by recognizing additional products, such as datasets, as important research outputs. These recommendations are aimed at funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers.

A number of themes run through these recommendations:

  • the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
  • the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
  • the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

We recognize that many funding agencies, institutions, publishers, and researchers are already encouraging improved practices in research assessment. Such steps are beginning to increase the momentum toward more sophisticated and meaningful approaches to research evaluation that can now be built upon and adopted by all of the key constituencies involved.

The signatories of the San Francisco Declaration on Research Assessment support the adoption of the following practices in research assessment.

General Recommendation

1. Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

For funding agencies

2. Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

3. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For institutions

4. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

5. For the purposes of research assessment, consider the value and impact of all
research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For publishers

6. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor [8], SCImago [9], h-index, editorial and publication times, etc.) that provide a richer view of journal performance.

7. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published.

8. Encourage responsible authorship practices and the provision of information about the specific contributions of each author.

9. Whether a journal is open-access or subscription-based, remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication [10].

10. Remove or reduce the constraints on the number of references in research articles, and, where appropriate, mandate the citation of primary literature in favor of reviews in order to give credit to the group(s) who first reported a finding.

For organizations that supply metrics

11. Be open and transparent by providing data and methods used to calculate all metrics.

12. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data, where possible.

13. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.

14. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared.

For researchers

15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.

16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.

17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs [11].

18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.


  1. Adler, R., Ewing, J., and Taylor, P. (2008) Citation statistics. A report from the International Mathematical Union.
  2. Seglen, P.O. (1997) Why the impact factor of journals should not be used for evaluating research. BMJ 314, 498–502.
  3. Editorial (2005). Not so deep impact. Nature 435, 1003–1004.
  4. Vanclay, J.K. (2012) Impact Factor: Outdated artefact or stepping-stone to journal certification. Scientometric 92, 211–238.
  5. The PLoS Medicine Editors (2006). The impact factor game. PLoS Med 3(6): e291 doi:10.1371/journal.pmed.0030291.
  6. Rossner, M., Van Epps, H., Hill, E. (2007). Show me the data. J. Cell Biol. 179, 1091–1092.
  7. Rossner M., Van Epps H., and Hill E. (2008). Irreproducible results: A response to Thomson Scientific. J. Cell Biol. 180, 254–255.

*The Journal Impact Factor is now published by Clarivate Analytics.