Why the referees’ reports I receive as an editor are so much better than the reports I receive as an author?
Scientometrics, ISSN: 1588-2861, Vol: 106, Issue: 3, Page: 967-986
2016
- 7 Citations
- 32 Captures
Article Description
Authors tend to attribute manuscript acceptance to their own ability to write quality papers and simultaneously to blame rejections on negative bias in peer review, displaying a self-serving attributional bias. Here, a formal model provides rational explanations for this self-serving bias in a Bayesian framework. For the high-ability authors in a very active scientific field, the model predictions are: (1) Bayesian-rational authors are relatively overconfident about their likelihood of manuscript acceptance, whereas authors who play the role of referees have less confidence in manuscripts of other authors; (2) if the final disposition of his or her manuscript is acceptance, the Bayesian-rational author almost surely attributes this decision more to his or her own ability; (3) when the final disposition is rejection, the Bayesian-rational author almost surely attributes this decision more to negative bias in peer review; (4) some rational authors do not learn as much from the critical reviewers’ comments in case of rejection as they should from the journal editor’s perspective. In order to validate the model predictions, we present results from a survey of 156 authors. The participants in the experimental study are authors of articles published in Scientometrics from 2000 to 2012.
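The abstract describes a Bayesian mechanism without giving its details. The sketch below is an illustrative toy version of that idea, not the paper's actual model: an author holds independent priors over two binary variables, own ability and referee bias, and updates both after a single accept/reject decision. All prior and likelihood values are assumptions chosen only to show the qualitative pattern in predictions (2) and (3).

```python
# Toy Bayesian update for a "high-ability" author (illustrative values only,
# not the parameters used in the paper).
from itertools import product

p_ability_high = 0.9      # prior P(A = high)
p_referee_biased = 0.3    # prior P(B = biased)

# Assumed acceptance probabilities P(accept | A, B)
p_accept = {
    ("high", "fair"):   0.8,
    ("high", "biased"): 0.3,
    ("low",  "fair"):   0.3,
    ("low",  "biased"): 0.1,
}

def posterior(outcome):
    """Return P(A=high | outcome) and P(B=biased | outcome) via Bayes' rule."""
    joint = {}
    for a, b in product(("high", "low"), ("fair", "biased")):
        prior = (p_ability_high if a == "high" else 1 - p_ability_high) * \
                (p_referee_biased if b == "biased" else 1 - p_referee_biased)
        like = p_accept[(a, b)] if outcome == "accept" else 1 - p_accept[(a, b)]
        joint[(a, b)] = prior * like
    z = sum(joint.values())
    p_high = sum(v for (a, _), v in joint.items() if a == "high") / z
    p_biased = sum(v for (_, b), v in joint.items() if b == "biased") / z
    return p_high, p_biased

for outcome in ("accept", "reject"):
    p_high, p_biased = posterior(outcome)
    print(f"{outcome}: P(ability=high)={p_high:.2f}, P(referee biased)={p_biased:.2f}")

# accept: P(ability=high)=0.96, P(referee biased)=0.14
# reject: P(ability=high)=0.81, P(referee biased)=0.55
```

Under these assumed numbers an acceptance mostly raises the author's belief in their own ability, while a rejection leaves that belief largely intact and instead sharply raises the belief that the review was biased, which is the self-serving pattern the model rationalizes.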
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=84958678415&origin=inward
- http://dx.doi.org/10.1007/s11192-015-1827-8
- http://link.springer.com/10.1007/s11192-015-1827-8
- http://link.springer.com/content/pdf/10.1007/s11192-015-1827-8
- http://link.springer.com/content/pdf/10.1007/s11192-015-1827-8.pdf
- http://link.springer.com/article/10.1007/s11192-015-1827-8/fulltext.html
- https://dx.doi.org/10.1007/s11192-015-1827-8
- https://link.springer.com/article/10.1007/s11192-015-1827-8
Springer Science and Business Media LLC