Reconceptualizing variable rater assessments as both an educational and clinical care problem.
Academic Medicine: Journal of the Association of American Medical Colleges, Vol. 89, Issue 5, Pages 721-727, 2014
Metric Options: Counts
Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. The 1-year option compares the metrics against other articles/reviews published in the same calendar year; the 3-year option compares them against other articles/reviews published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
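The comparison described above amounts to a simple percentile calculation over same-journal peers in the selected window. The following is a minimal sketch, assuming the benchmark is just the share of comparison articles/reviews that the target article outperforms; the function names and data layout are illustrative and not part of the Scopus or SciVal services.

```python
# Hypothetical sketch of the 1-year / 3-year benchmarking percentile described above.
# Not an actual Scopus/SciVal API; names and data layout are assumptions.

def benchmark_percentile(article_count: int, peer_counts: list[int]) -> float:
    """Percentage of peer articles/reviews that this article outperforms."""
    if not peer_counts:
        return 0.0
    beaten = sum(1 for c in peer_counts if article_count > c)
    return 100.0 * beaten / len(peer_counts)

def peers_in_window(records: list[dict], journal: str, pub_year: int, years_back: int) -> list[int]:
    """Comparison set: same journal, published in pub_year or up to
    `years_back` years earlier (0 = 1-year option, 2 = 3-year option)."""
    return [r["citations"] for r in records
            if r["journal"] == journal and pub_year - years_back <= r["year"] <= pub_year]

# Example: a 2019 article compared under the 1-year (2019 only)
# and 3-year (2017-2019) options.
records = [
    {"journal": "Academic Medicine", "year": 2019, "citations": 3},
    {"journal": "Academic Medicine", "year": 2019, "citations": 7},
    {"journal": "Academic Medicine", "year": 2018, "citations": 12},
    {"journal": "Academic Medicine", "year": 2017, "citations": 1},
]
print(benchmark_percentile(10, peers_in_window(records, "Academic Medicine", 2019, 0)))  # 1-year option
print(benchmark_percentile(10, peers_in_window(records, "Academic Medicine", 2019, 2)))  # 3-year option
```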
Metrics Details
- Usage: 1
- Abstract Views: 1
Article Description
The public is calling for the U.S. health care and medical education system to be accountable for ensuring high-quality, safe, effective, patient-centered care. As medical education shifts to a competency-based training paradigm, clinician educators' assessment of and feedback to trainees about their developing clinical skills become paramount. However, there is substantial variability in the accuracy, reliability, and validity of the assessments faculty make when they directly observe trainees with patients. These difficulties have been treated primarily as a rater cognition problem, focusing on the inability of the assessor to make reliable and valid assessments of the trainee.

The authors' purpose is to reconceptualize the rater cognition problem as both an educational and clinical care problem. The variable quality of faculty assessments is not just a psychometric predicament but also an issue that has implications for decisions regarding trainee supervision and the delivery of quality patient care. The authors suggest that the frame of reference for rating performance during workplace-based assessments be the ability to provide safe, effective, patient-centered care. The authors developed the Accountable Assessment for Quality Care and Supervision equation to remind faculty that supervision is a dynamic, complex process essential for patients to receive high-quality care. This fundamental shift in how assessment is conceptualized requires new models of faculty development and emphasizes the essential and irreplaceable importance of the clinician educator in trainee assessment.
Bibliographic Details
Published for the Association of American Medical Colleges by Lippincott Williams & Wilkins