Law, Technology and Patient Safety
Vol. 68, Issue 2, p. 459 (2019)
Metrics Details
- Usage: 255
- Downloads: 205
- Abstract Views: 50
Article Description
Medical error is the third leading cause of death in the United States. In an effort to increase patient safety, various regulatory agencies require reporting of adverse events, but reported counts tend to be inaccurate. In 2005, seeking to reduce adverse event rates, Congress proposed a list of "never events," adverse events, such as wrong-site surgery, that should never occur in hospitals, and authorized CMS to refuse payment for care required following such events. CMS has since pushed for further regulation, "such as putting more payment at risk, increasing transparency, increasing frequency of quality data reviews, and stepping up media scrutiny." Evidence suggests these public reporting and pay-for-performance initiatives compel hospitals to manipulate reports, and in some cases patient treatment, to conceal adverse events.

The purpose of this Essay is to consider how we might use law, coupled with technological advances, to increase adverse event count accuracy. On the technology front, three advances are particularly relevant. First, digitization of medical records, billing data, and other sources of germane information has made collecting large amounts of data easier than ever. Second, current adverse event counters employ powerful computer algorithms, and the field is likely moving towards detecting adverse events through analysis of large datasets using artificial intelligence. Third, governmental entities have started to team up with computer scientists who use cryptographic techniques to collect sensitive data in ways that protect the anonymity of data producers. We explore how law might harness the power of these technological developments to increase adverse event count accuracy without creating incentives for providers to hide data or alter treatment practices in harmful or wasteful ways.

This Essay is organized as follows. Part II describes current methods used by hospitals, CMS, and researchers to count adverse events.
It also attempts to explain the wide disparities in counts produced by various counting methods. A close look at count disparities illuminates two problems with today’s methods. First, the most reliable count estimates are not generalizable. Second, evidence suggests that providers act to shroud true counts, sometimes in ways that put patients at risk. Part III suggests that recent technological advances might make it possible to use law to improve the accuracy of adverse event counts. In particular, we explore the law’s possible annexing of three technological advances—digitized patient data, artificial intelligence, and cryptography—to assemble a state-of-the-art adverse events dataset that could make it possible for policy makers, in conjunction with providers, to take well-informed steps towards increasing patient safety. Part IV discusses a number of possible hurdles and concludes.