PCI economic model

General principles

  • PCI is a non-profit and non-commercial organization run by scientists for scientists. All costs are kept as low as possible.
  • PCI works on a voluntary basis: reviewers, recommenders, and administrators of the PCIs and the co-founders of the PCI project receive no money for the work they perform (aside from their salary from their regular jobs).
  • The workload is shared as much as possible to reduce the burden on any one individual.
  • Article processing time is divided between evaluation and editorial management. Most of this time is spent on scientific evaluation. The time needed for editorial management is minimal.

Cost for the overall PCI project

Human costs

  • PCI project management and promotion: The management and promotion of the overall PCI project currently require the equivalent of a full-time job, but this should decrease to about half that after 2020. This work is done by scientists on a voluntary basis and/or as part of their normal academic activities.

Functioning costs = ~ €5,500/year

  • PCI pays an annual subscription (€270/year) to Crossref to (i) use Similarity Check, a plagiarism-detection service, and (ii) assign DOIs to recommendation texts and editorial correspondence.
  • Other costs for running and promoting the PCI project (travel to and accommodation at the annual general meeting, conferences, seminars, etc.) are currently about €5,000/year.

Cost for a given PCI (e.g. PCI Evol Biol, PCI Ecology…)

Human costs

  • Article evaluation (peer reviews, editorial decisions): As is the case at many journals, recommenders and reviewers at each PCI are not paid by PCI. They perform the evaluation and recommendation process (between 1 and 6 days/article when accounting for the time spent by the recommender and the reviewers) on a voluntary basis and/or as part of their normal academic activities. The article evaluation costs are lower than at traditional journals because there are no editors-in-chief (EiC) and because not all submitted articles are sent out for in-depth peer review.
  • PCI administration (promotion, problem solving, functioning): Administrators of each PCI receive no salary or bonus from PCI and the time they spend administering and promoting the PCI (between 10 and 100 hrs/year) is done on a voluntary basis and/or as part of their normal academic activities.
  • Article editing (technical editing, proofreading, final formatting): There is no technical editing, proofreading or final formatting of the recommended articles, and thus no corresponding cost. The authors are free to format the final version of their article as they like. This cost, which traditional journals do incur, is therefore zero for PCI.
  • Article management (checking scope and conflicts of interest; defining and monitoring evaluation deadlines; sending reminders to authors, recommenders and reviewers; requesting DOIs; checking for plagiarism; formatting recommendations for online publication; etc.): The editorial management of a submitted preprint takes an average of 2 hrs/article. For each PCI, the organisation of this editorial management is up to the administrators: it can be shared among administrators, among members of the managing board, or among recommenders, or it can be performed by a dedicated person. In any case, the time spent is given on a voluntary basis and/or as part of normal academic activities. The cost of editorial management of a PCI is lower than at most traditional journals because (i) not all submitted articles are sent out for in-depth peer review and (ii) only limited handling is necessary after scientific approval (e.g. no handling for technical editing).
  • PCI’s website and email account development and maintenance: PCI websites run on an in-house system, with no need to pay for commercial software. Setting up, running and updating all PCI websites and email accounts is done by a web developer and a computer scientist. The time they spend on PCI (about 300 hrs/year until 2020 and 50 hrs/year thereafter) is given on a voluntary basis and/or as part of their normal academic activities. By avoiding subscriptions to commercial journal management software, this in-house system saves several thousand euros per year per PCI.

Functioning costs = ~ €5,300/year

  • The cost for web hosting and data backup is €300/PCI/year.
  • The other costs for running and promoting a PCI (travel to and accommodation at conferences, meetings, seminars, etc.) are about €5,000/year.

Costs supported by other institutions/companies

The cost of depositing a preprint in an open archive is estimated at less than €10 per manuscript (e.g. 123,523 preprints uploaded to arXiv.org in 2017 on a budget of $1,019,665). This cost is covered by the open archive platforms and their sponsors.
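As a rough check of this figure (our own arithmetic, based on the arXiv numbers quoted above):

\[
\frac{\$1{,}019{,}665}{123{,}523\ \text{preprints}} \approx \$8.25\ \text{per preprint},
\]

which is indeed below €10 at the exchange rates of that period.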

Sponsorship for the functioning costs

PCI is a non-profit organisation governed by the French law of 1901 on associations. Established in 2016, PCI receives financial support from research organisations to cover its functioning costs.

Annual sponsorship (total across all PCIs):

  • 2016: €8,000
  • 2017: €24,959
  • 2018: €15,500
  • 2019: €19,387
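
Summed over 2016–2019 (our own arithmetic, for orientation only):

\[
8{,}000 + 24{,}959 + 15{,}500 + 19{,}387 = 67{,}846\ \text{euros in total sponsorship.}
\]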

List of sponsors

Universities: Sorbonne Université, Université de Montpellier, UCLouvain, Aix-Marseille Université, Université de Rennes 1, Université de Strasbourg

Research Institutes: Inra, Museum National d’Histoire Naturelle, INEE-Cnrs, Ifremer, AgroParisTech

Laboratories: UMR CBGP, UMR ISA

Scientific Societies: The European Society for the Study of Evolution (ESEB), Society for the Study of Evolution (SSE), Société Française d’Ecologie et d’Evolution (Sfe2), Society for Systematic Biology (SSB)

LabEx: TULIP, CeMEB, CEBA, ECOFECT, BASC

Differences with other projects

Many new publication services have appeared in recent years. Here is a non-exhaustive list of these services, with comments highlighting the differences between them and PCI.

Overlay journals (e.g. Discrete Analysis, Discrete Mathematics & Theoretical Computer Science) are electronic open-access journals containing peer-reviewed articles deposited in open archives, such as arXiv.org, and not published elsewhere. Overlay journals are diamond open-access (free for readers and free for authors). The PCI project is not designed to create journals of any kind (not even overlay journals). It is, instead, simply a system for evaluating preprints and awarding them a recommendation if the recommenders handling them consider the peer-review evaluation to be sufficiently positive to merit such a recommendation.

SciPost is an online scientific publication portal. Its journals (in physics) are diamond open-access (free for readers and free for authors) and use a stringent peer-review procedure. Articles must be deposited in arXiv.org before submission. The main difference between SciPost and PCI is that SciPost is a journal publishing articles. PCIs do not publish the preprints they recommend; they publish only the peer reviews, the editorial correspondence and a recommendation text explaining why a recommender decided to recommend the preprint. As the preprints recommended by PCIs are not published by the PCI, they can be submitted to a journal for publication even after their recommendation by a PCI.

F1000Research is a for-profit business offering an open-access and open peer-review publication platform. Regardless of the type of article, F1000Research charges an article-processing charge (APC) that depends on length (up to 1,000 words: US $150; 1,000–2,500 words: US $500; over 2,500 words: US $1,000; a surcharge of US $1,000 is applied to any article exceeding 8,000 words). Articles are published first and then peer-reviewed. The main differences between this system and PCI are that, in F1000Research: i) the authors themselves identify, suggest and invite the reviewers; ii) no recommenders intervene in the evaluation process, so there are no editorial decisions during the evaluation; and iii) the reviewers themselves decide whether to approve the article. Wellcome Open Research and Gates Open Research operate on the same platform.
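
As an illustration of how such a length-dependent fee schedule works, here is a minimal sketch in Python encoding the figures quoted above; the function name and the handling of the tier boundaries are our own assumptions, not F1000Research's actual billing logic.

    def f1000_apc_usd(word_count: int) -> int:
        """Illustrative APC (US $) for an article of a given length, based on
        the fee schedule quoted above. Boundary handling (e.g. an article of
        exactly 1,000 words) is our own assumption."""
        if word_count <= 1000:
            fee = 150
        elif word_count <= 2500:
            fee = 500
        else:
            fee = 1000
        if word_count > 8000:  # surcharge on very long articles
            fee += 1000
        return fee

    # Example: a 3,200-word article would cost US $1,000 under this schedule,
    # and a 9,000-word article US $2,000 (base fee plus surcharge).
    print(f1000_apc_usd(3200), f1000_apc_usd(9000))  # -> 1000 2000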

F1000Prime is a service for the recommendation of articles after their publication. Readers have to pay to read F1000 recommendations (subscriptions of US $9.95/month). F1000Prime is a for-profit business and an actor within the current system based on commercial journal publications.

The Winnower is “an open-access online scholarly publishing platform that employs open post-publication peer review”. There is a small fee (US $25 per DOI) for paper archiving, and no recommendations are provided. The Winnower allows authors: 1) to upload an article onto the platform, after which researchers, colleagues and other scientists are encouraged to make critical comments on it over a given period of time; 2) to revise the article on the basis of the comments received; and 3) to decide to freeze the article by providing it with a DOI. The end result is thus not a “recommendation” as such, but an open process of critical review without a given threshold determining whether an article may be considered scientifically “valid”. Articles can remain on The Winnower without peer review, as on a preprint server.

Peerage of Science operates upstream of the publication system and provides support to existing scientific journals. It is therefore an actor within the current system based on commercial journal publications. The goal is for authors to submit an article to obtain constructive criticism before submitting it (together with their responses to the criticisms received) to a scientific journal. It is stated that “Authors may accept a live publishing offer from a subscription-only journal, or may choose to export the peer reviews to any journal of their choice.” and that “The revenues of Peerage of Science come from organizations wishing to purchase the peer review services for use in their decision-making, such as publishers, funding organizations, and universities.” Again, this is a very different model from the PCI project.

biOverlay is similar to an overlay journal for the natural sciences, except that, in contrast to PCI, authors do not submit their own preprints to biOverlay for evaluation: associate editors choose the articles they wish to evaluate and send them out for review. The authors therefore do not necessarily know that their papers have been selected by biOverlay.

PreLights is a community platform for highlighting and commenting on preprints. It is a service run by the biological community and supported by The Company of Biologists, a not-for-profit publishing organization. In contrast to PCI, authors do not submit their own preprints to PreLights for evaluation. Early-career researchers select preprints and write digests about them.

PREreview is a community and platform for the collaborative writing of preprint reviews. It is a group of journal clubs providing feedback to authors. In contrast to PCI, the authors do not submit their own preprints to PREreview for evaluation.

Hypothes.is is a non-profit organization providing a free online plugin for annotating, on the web, almost any kind of document (e.g. blogs, scientific articles, e-books) in very different formats (e.g. PDF, HTML). Hypothes.is has recently begun collaborating with bioRxiv.org to allow the layering of discussions over preprints. This organization offers the possibility of creating journal clubs with a mode in which annotations are publicly visible to all Hypothes.is users. In contrast to PCI, the authors do not submit their own preprints to Hypothes.is for evaluation.

PubPeer, supported by the non-profit PubPeer Foundation, is an online platform originally devoted to post-publication comments. However, PubPeer has recently been opened up to preprint reviews. In contrast to PCI, preprint reviews published by PubPeer are not requested by the authors and are not used to help recommenders make editorial decisions concerning preprints.

eLife is a non-profit organization running an open-access journal. It was originally free for readers and authors, but publication fees have since been introduced. Currently, “A fee of $2,500 is collected for published papers” – see https://submit.elifesciences.org/html/elife_author_instructions.html#.

PeerJ is an open-access peer-reviewed scientific megajournal covering research in the biological and medical sciences. Authors have to pay to publish: they either pay US $1,095 per published paper, or each author pays a one-off fee of US $399 (or more) allowing them to publish one (or more) paper per year in the journal. Additional fees may be required for very long manuscripts. See https://peerj.com/pricing/.

See also ReimagineReview, a very interesting registry of platforms and experiments innovating around the peer review of scientific outputs.

Steps in the creation of a new PCI

1) Choose one or two colleagues to set up and manage the PCI

Having at least two administrators improves the monitoring and sustainability of the project.

Setting up the PCI involves defining the subject it will cover, establishing a first group of recommenders, and submitting the project for validation by the PCI association.

Administration of the PCI involves appointing more recommenders, encouraging preprint submissions and ensuring that the evaluation and recommendation processes are managed correctly.

2) Define the subject

The subject should be defined carefully. It can be broad or narrow. A broader subject may attract more papers, resulting in a more selective PCI. However, if the subject is too broad, members of the community may fail to identify themselves as belonging to the PCI concerned. Conversely, not everyone in the field will join the PCI, so if the subject is too narrow, the community it attracts may be too small to be sustainable. Statistics on preprint use in each field may be helpful (e.g. https://www.rxivist.org/stats).

Try to make sure that the new PCI does not overlap too much with other existing or forthcoming PCIs (contact us – contact@peercommunityin.org – for verification).

3) Establish a first group of recommenders

You will need to establish a group of 20 to 50 recommenders. This initial group of recommenders must consist of high-quality scientists recognized in the field, with as many international members as possible: members of learned societies and of the editorial boards of renowned journals in the field, and winners of prizes or competitions, for example. The success of a PCI depends on the inclusion of both high-profile senior scientists and dynamic young researchers. Gender parity is also desirable.

4) Submit your proposal to the PCI association for approval

The creation of a new PCI must be approved by the non-profit “Peer Community in” organization. The members of the managing boards of the existing PCIs form the board of this organization. Send your proposal (1 to 2 pages, indicating the motivation behind the creation of your PCI, its subject and the names of the administrators and the first group of recommenders) to contact@peercommunityin.org for approval by the PCI association.

5) Recruit more recommenders

Once the PCI has been validated, you should use the first group of recommenders to appoint more recommenders. This process should take place after the creation of the PCI website, which is password-protected to prevent access by the general public and provides a means of registering new recommenders.

6) Set up a managing board

Once a certain predefined number of recommenders have been recruited (e.g. 50, 100 or 200), a managing board should be defined. The members of this managing board are responsible for validating editorial decisions concerning the preprints submitted, approving the nomination of new recommenders and dealing with potential problems arising between authors and the recommenders responsible for evaluating and/or recommending preprints (see the FAQ). The managing board must comprise five to 15 recommenders and must include the administrators.

7) Open the PCI to the public: receive submissions and manage evaluations

The management of a submitted preprint – from its submission to its rejection or recommendation, excluding the evaluation by the recommender and the reviewers – takes about two hours, on average. The administrators decide for themselves how best to organize preprint management (shared between administrators, managing board members, recommenders, or other people).

Support

We can help you by providing extensive documentary resources about PCI, including short movies and templates of messages for inviting co-founders and recommenders. If the creation of your PCI is accepted, we will provide you with a fully functional website (front and back office) and help you to manage the first preprints submitted (probably about 20).

Send any questions to contact[ at ]peercommunityin[ dot ]org

In summary

PCI promotes scientific reproducibility

PCI aims to promote scientific reproducibility in order to improve the overall robustness and integrity of scientific conclusions. To this end, PCI has established three mandatory rules and makes two suggestions to authors:

Mandatory rules:

Preprints recommended by PCI must provide readers with:

Raw data, made available either in the text or through an open data repository such as Zenodo, Dryad or an institutional repository (see the Directory of Open Access Repositories). Data must be reusable; the metadata and accompanying text must therefore describe the data carefully.

Details of the quantitative analyses (e.g. data treatment and statistical scripts in R, bioinformatic pipeline scripts) and of the simulations (scripts, code), provided in the text, as appendices, or through an open data repository such as Zenodo, Dryad or an institutional repository (see the Directory of Open Access Repositories). The scripts or code must be described carefully enough that another researcher can run them (an illustrative sketch is given after this list).

Details of the experimental procedures. These details must be given in the text or as appendices at the end of the article.
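
As a purely illustrative sketch of what a reusable analysis script can look like (written in Python here; the data URL, file name and column names are hypothetical placeholders, not part of any PCI requirement):

    """Reanalysis script accompanying a recommended preprint (illustration only).

    A real script would point to the archived dataset (e.g. the Zenodo or Dryad
    record cited in the article) and reproduce the analyses reported there.
    """
    import pandas as pd

    # Hypothetical URL of the archived raw data, exactly as deposited.
    DATA_URL = "https://example.org/archived_dataset/measurements.csv"

    def main() -> None:
        data = pd.read_csv(DATA_URL)
        # Reproduce the summary statistics reported in the article
        # (the column names are placeholders).
        summary = data.groupby("treatment")["response"].agg(["mean", "std", "count"])
        print(summary)

    if __name__ == "__main__":
        main()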

Suggestions to authors:

- PCI encourages authors to use preregistration: authors may post their research questions and analysis plan to an independent registry before observing the research outcomes, and thus before writing and submitting their article. This helps them to clarify their hypotheses, to avoid confusing “postdictions” with predictions, and to plan an appropriate statistical treatment of the data carefully (e.g. see doi:10.1073/pnas.1708274114).

- PCI also welcomes the submission of preregistrations. Authors can submit a preregistration to a PCI before beginning their study, and thus before acquiring the data. Preregistrations are then evaluated by recommenders on the basis of independent reviews, in exactly the same way as preprint articles. Preregistrations can thus be rejected or undergo revisions, improving the quality and robustness of the experimental design. When a preregistration is accepted, the subsequent article submitted to the corresponding PCI will be recommended provided the study has been conducted as described in the preregistration (or any modifications are clearly justified). In this way, an article cannot be rejected solely because of the outcome of the study. Details on preregistration submissions can be found, for example, here.

PCI and Impact Factors

PCI does not have an impact factor (IF).

Why?

Simply because PCI is not a journal. It does not publish research articles. It publishes only evaluations of research articles and texts recommending research articles.

Preprints recommended by a PCI can still be characterized by citation metrics, like any other preprint. Google Scholar, for example, provides the number of citations of preprints.

If a preprint is eventually published in a classic journal, the number of citations of the preprint can then be added to the number of citations of the published article. Google Scholar performs this calculation automatically. Consequently, the citations of the preprint are also taken into consideration in the various metrics calculated by Google Scholar.

An IF-like metric could nevertheless be calculated for each PCI on the basis of the number of citations of the articles it has recommended.
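
For illustration only, such a metric could follow the standard two-year impact factor definition; the formula below is our own sketch, not an official PCI metric:

\[
\mathrm{IF}_{\mathrm{PCI}}(y) = \frac{C_{y}\left(R_{y-1} \cup R_{y-2}\right)}{\lvert R_{y-1} \rvert + \lvert R_{y-2} \rvert},
\]

where \(R_{y}\) is the set of articles recommended by the PCI in year \(y\) and \(C_{y}(S)\) is the number of citations received during year \(y\) by the articles in \(S\).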

PCI has signed the San Francisco Declaration on Research Assessment (DORA), together with more than 500 organisations (>300 universities, >30 national academies of sciences) and more than 12,000 individuals:

San Francisco Declaration on Research Assessment

There is a pressing need to improve the ways in which the output of scientific research is evaluated by funding agencies, academic institutions, and other parties. To address this issue, a group of editors and publishers of scholarly journals met during the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA, on December 16, 2012. The group developed a set of recommendations, referred to as the San Francisco Declaration on Research Assessment. We invite interested parties across all scientific disciplines to indicate their support by adding their names to this Declaration.

The outputs from scientific research are many and varied, including: research articles reporting new knowledge, data, reagents, and software; intellectual property; and highly trained young scientists. Funding agencies, institutions that employ scientists, and scientists themselves, all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.

The Journal Impact Factor is frequently used as the primary parameter with which to compare the scientific output of individuals and institutions. The Journal Impact Factor, as calculated by Thomson Reuters*, was originally created as a tool to help librarians identify journals to purchase, not as a measure of the scientific quality of research in an article. With that in mind, it is critical to understand that the Journal Impact Factor has a number of well-documented deficiencies as a tool for research assessment. These limitations include: A) citation distributions within journals are highly skewed [1–3]; B) the properties of the Journal Impact Factor are field-specific: it is a composite of multiple, highly diverse article types, including primary research papers and reviews [1, 4]; C) Journal Impact Factors can be manipulated (or “gamed”) by editorial policy [5]; and D) data used to calculate the Journal Impact Factors are neither transparent nor openly available to the public [4, 6, 7]. Below we make a number of recommendations for improving the way in which the quality of research output is evaluated. Outputs other than research articles will grow in importance in assessing research effectiveness in the future, but the peer-reviewed research paper will remain a central research output that informs research assessment. Our recommendations therefore focus primarily on practices relating to research articles published in peer-reviewed journals but can and should be extended by recognizing additional products, such as datasets, as important research outputs. These recommendations are aimed at funding agencies, academic institutions, journals, organizations that supply metrics, and individual researchers.

A number of themes run through these recommendations:

  • the need to eliminate the use of journal-based metrics, such as Journal Impact Factors, in funding, appointment, and promotion considerations;
  • the need to assess research on its own merits rather than on the basis of the journal in which the research is published; and
  • the need to capitalize on the opportunities provided by online publication (such as relaxing unnecessary limits on the number of words, figures, and references in articles, and exploring new indicators of significance and impact).

We recognize that many funding agencies, institutions, publishers, and researchers are already encouraging improved practices in research assessment. Such steps are beginning to increase the momentum toward more sophisticated and meaningful approaches to research evaluation that can now be built upon and adopted by all of the key constituencies involved.

The signatories of the San Francisco Declaration on Research Assessment support the adoption of the following practices in research assessment.

General Recommendation

1. Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

For funding agencies

2. Be explicit about the criteria used in evaluating the scientific productivity of grant applicants and clearly highlight, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

3. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For institutions

4. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the scientific content of a paper is much more important than publication metrics or the identity of the journal in which it was published.

5. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice.

For publishers

6. Greatly reduce emphasis on the journal impact factor as a promotional tool, ideally by ceasing to promote the impact factor or by presenting the metric in the context of a variety of journal-based metrics (e.g., 5-year impact factor, EigenFactor [8], SCImago [9], h-index, editorial and publication times, etc.) that provide a richer view of journal performance.

7. Make available a range of article-level metrics to encourage a shift toward assessment based on the scientific content of an article rather than publication metrics of the journal in which it was published.

8. Encourage responsible authorship practices and the provision of information about the specific contributions of each author.

9. Whether a journal is open-access or subscription-based, remove all reuse limitations on reference lists in research articles and make them available under the Creative Commons Public Domain Dedication [10].

10. Remove or reduce the constraints on the number of references in research articles, and, where appropriate, mandate the citation of primary literature in favor of reviews in order to give credit to the group(s) who first reported a finding.

For organizations that supply metrics

11. Be open and transparent by providing data and methods used to calculate all metrics.

12. Provide the data under a licence that allows unrestricted reuse, and provide computational access to data, where possible.

13. Be clear that inappropriate manipulation of metrics will not be tolerated; be explicit about what constitutes inappropriate manipulation and what measures will be taken to combat this.

14. Account for the variation in article types (e.g., reviews versus research articles), and in different subject areas when metrics are used, aggregated, or compared.

For researchers

15. When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.

16. Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.

17. Use a range of article metrics and indicators on personal/supporting statements, as evidence of the impact of individual published articles and other research outputs [11].

18. Challenge research assessment practices that rely inappropriately on Journal Impact Factors and promote and teach best practice that focuses on the value and influence of specific research outputs.

References

  1. Adler, R., Ewing, J., and Taylor, P. (2008) Citation statistics. A report from the International Mathematical Union.
  2. Seglen, P.O. (1997) Why the impact factor of journals should not be used for evaluating research. BMJ 314, 498–502.
  3. Editorial (2005). Not so deep impact. Nature 435, 1003–1004.
  4. Vanclay, J.K. (2012) Impact factor: Outdated artefact or stepping-stone to journal certification? Scientometrics 92, 211–238.
  5. The PLoS Medicine Editors (2006). The impact factor game. PLoS Med 3(6): e291 doi:10.1371/journal.pmed.0030291.
  6. Rossner, M., Van Epps, H., Hill, E. (2007). Show me the data. J. Cell Biol. 179, 1091–1092.
  7. Rossner M., Van Epps H., and Hill E. (2008). Irreproducible results: A response to Thomson Scientific. J. Cell Biol. 180, 254–255.
  8. http://www.eigenfactor.org/
  9. http://www.scimagojr.com/
  10. http://opencitations.wordpress.com/2013/01/03/open-letter-to-publishers
  11. http://altmetrics.org/tools/

*The Journal Impact Factor is now published by Clarivate Analytics.
