An Empirical Comparison of Meta- and Mega-Analysis With Data From the ENIGMA Obsessive-Compulsive Disorder Working Group
Frontiers in Neuroinformatics, ISSN: 1662-5196, Vol: 12, Page: 102
2019
- 68 Citations
- 131 Captures
Metrics Details
- Citations: 68
- Citation Indexes: 68
- Captures: 131
- Readers: 131
Article Description
Objective: Brain imaging communities focusing on different diseases have increasingly begun to collaborate and pool data to perform well-powered meta- and mega-analyses. Some methodologists argue that a one-stage individual-participant data (IPD) mega-analysis can be superior to a two-stage aggregated-data meta-analysis, since more detailed computations can be performed in a mega-analysis. Before definitive conclusions about the performance of either method can be drawn, it is necessary to critically evaluate the methodology of, and results obtained by, meta- and mega-analyses. Methods: Here, we compare an inverse-variance-weighted random-effects meta-analysis model with a multiple linear regression mega-analysis model, as well as with a linear mixed-effects random-intercept mega-analysis model, using data from 38 cohorts comprising 3,665 participants of the ENIGMA-OCD consortium. We assessed the effect sizes, standard errors, and model fit to evaluate the performance of the different methods. Results: The mega-analytical models showed lower standard errors and narrower confidence intervals than the meta-analysis. Standard errors and confidence intervals were similar for the linear regression and linear mixed-effects random-intercept models. Moreover, the linear mixed-effects random-intercept models showed better fit indices than the linear regression mega-analytical models. Conclusions: Our findings indicate that results obtained by meta- and mega-analysis differ, in favor of the latter. In multi-center studies with a moderate amount of between-cohort variation, a linear mixed-effects random-intercept mega-analytical framework appears to be the better approach for investigating structural neuroimaging data.
Bibliographic Details
DOIs: 10.3389/fninf.2018.00102; 10.3389/fninf.2018.00102.s001; 10.3389/fninf.2018.00102.s002; 10.5167/uzh-165341
Links:
- https://dx.doi.org/10.3389/fninf.2018.00102
- https://www.frontiersin.org/articles/10.3389/fninf.2018.00102/full
- http://www.ncbi.nlm.nih.gov/pubmed/30670959
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85068366646&origin=inward
- https://www.frontiersin.org/articles/10.3389/fninf.2018.00102/supplementary-material/10.3389/fninf.2018.00102.s001
- https://www.frontiersin.org/articles/10.3389/fninf.2018.00102/supplementary-material/10.3389/fninf.2018.00102.s002
- https://www.zora.uzh.ch/id/eprint/165341
- https://www.zora.uzh.ch/id/eprint/165341/1/Boedhoe_et_al_2019_An_Empirical_Comparison_of.pdf
Frontiers Media SA