How prior and p-value heuristics are used when interpreting data
bioRxiv, ISSN: 2692-8205
2023
- Mentions: 1
Metric Options: Counts | 1 Year | 3 Year

Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. Selecting the 1-year option compares the metrics against other articles/reviews that were also published in the same calendar year. Selecting the 3-year option compares the metrics against other articles/reviews that were published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
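The percentile comparison described above can be sketched in a few lines of Python. This is only an illustrative sketch: the function name and the citation counts are hypothetical, not the actual Scopus/SciVal methodology.

```python
def percentile_rank(value, cohort):
    """Percentage of cohort values that `value` outperforms."""
    if not cohort:
        raise ValueError("cohort must be non-empty")
    below = sum(1 for v in cohort if v < value)
    return 100.0 * below / len(cohort)

# Hypothetical citation counts for articles published in the same
# journal in 2019 (the 1-year cohort in the example above)
cohort_2019 = [3, 7, 1, 0, 12, 5, 9, 2, 4, 6]

# An article with 10 citations outperforms 9 of the 10 cohort members
print(percentile_rank(10, cohort_2019))  # 90.0
```

For the 3-year option, the cohort would simply be extended to also include articles published in the two prior calendar years before computing the same percentile.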
Metrics Details
- Mentions: 1
- News Mentions: 1
- News: 1
Most Recent News
How prior and p-value heuristics are used when interpreting data
2023 SEP 18 (NewsRx) -- By a News Reporter-Staff News Editor at Education Daily Report -- According to news reporting based on a preprint abstract,
Article Description
Scientific conclusions are based on the ways that researchers interpret data, a process that is shaped by psychological and cultural factors. When researchers use shortcuts known as heuristics to interpret data, it can sometimes lead to errors. To test the use of heuristics, we surveyed 623 researchers in biology and asked them to interpret scatterplots that showed ambiguous relationships, altering only the labels on the graphs. Our manipulations tested the use of two heuristics based on major statistical frameworks: (1) the strong prior heuristic, where a relationship is viewed as stronger if it is expected a priori, following Bayesian statistics, and (2) the p-value heuristic, where a relationship is viewed as stronger if it is associated with a small p-value, following null hypothesis statistical testing. Our results show that both the strong prior and p-value heuristics are common. Surprisingly, the strong prior heuristic was more prevalent among inexperienced researchers, whereas its effect was diminished among the most experienced biologists in our survey. By contrast, we find that p-values cause researchers at all levels to report that an ambiguous graph shows a strong result. Together, these results suggest that experience in the sciences may diminish a researcher’s Bayesian intuitions, while reinforcing the use of p-values as a shortcut for effect size. Reform to data science training in STEM could help reduce researchers’ reliance on error-prone heuristics.