New Models for Large Prospective Studies: Is There a Risk of Throwing Out the Baby With the Bathwater?
American Journal of Epidemiology, Vol. 177, Issue 4, pp. 285-289
2013
Abstract
Manolio et al. (Am J Epidemiol. 2012;175:859-866) proposed that large cohort studies adopt novel models using "temporary assessment centers" to enroll up to a million participants to answer research questions about rare diseases and "harmonize" clinical endpoints collected from administrative records. Extreme selection bias, we are told, will not harm internal validity, and "process expertise to maximize efficiency of high-throughput operations is as important as scientific rigor" (p. 861). In this article, we describe serious deficiencies in this model as applied to the United States. Key points include: 1) the need for more, not less, specification of disease endpoints; 2) the limited utility of data collected from existing administrative and clinical databases; and 3) the value of university-based centers in providing scientific expertise and achieving high recruitment and retention rates through community and healthcare provider engagement. Careful definition of sampling frames and high response rates are crucial to avoid bias and ensure inclusion of important subpopulations, especially the medically underserved. Prospective hypotheses are essential to refine study design, determine sample size, develop pertinent data collection protocols, and achieve alliances with participants and communities. It is premature to reject the strengths of large national cohort studies in favor of a new model for which evidence of efficiency is insufficient.