Don't waste your time measuring intelligence: Further evidence for the validity of a three-minute speeded reasoning test
Intelligence, ISSN: 0160-2896, Vol: 102, Page: 101804
2024
- 2 Citations
- 18 Captures
- 2 Mentions
Article Description
The rise of large-scale collaborative panel studies has generated a need for fast, reliable, and valid assessments of cognitive abilities. In these studies, a detailed characterization of participants' cognitive abilities is often unnecessary, leading to the selection of tests based on convenience, duration, and feasibility. This often results in the use of abbreviated measures or proxies, potentially compromising their reliability and validity. Here we evaluate the mini-q (Baudson & Preckel, 2016), a three-minute speeded reasoning test, as a brief assessment of general cognitive abilities. The mini-q exhibited excellent reliability (0.96–0.99) and a substantial correlation with general cognitive abilities measured with a comprehensive test battery (r = 0.57; age-corrected r = 0.50), supporting its potential as a brief screening of cognitive abilities. Working memory capacity accounted for the majority (54%) of the association between test performance and general cognitive abilities, whereas individual differences in processing speed did not contribute to this relationship. Our results support the notion that the mini-q can be used as a brief, reliable, and valid assessment of general cognitive abilities. We therefore developed a computer-based version, ensuring its adaptability for large-scale panel studies. The paper- and computer-based versions demonstrated scalar measurement invariance and can therefore be used interchangeably. We provide norm data for young (18 to 30 years) and middle-aged (31 to 60 years) adults and provide recommendations for incorporating the mini-q in panel studies. Additionally, we address potential challenges stemming from language diversity, wide age ranges, and online testing in such studies.
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S0160289623000855
- http://dx.doi.org/10.1016/j.intell.2023.101804
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85179085246&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S0160289623000855
Elsevier BV