Application of Dupont’s Dirty Dozen Framework to Commercial Aviation Maintenance Incidents
2018
Metrics Details
- Usage: 2,489
- Downloads: 1,844
- Abstract Views: 645
Thesis / Dissertation Description
This study examined the 12 preconditions for maintenance errors commonly known as the Dirty Dozen and applied them to actual incident and accident data provided by a participating airline (PA). The data provided by the PA consisted of Maintenance Event Reports (MERs) (reactive), Maintenance Operations Safety Assessment (MOSA) reports (proactive), and the results of the 2017 Maintenance Climate Awareness Survey (MCAS) (subjective). The MER and MOSA reports were coded by aviation maintenance subject matter experts (SMEs) using the 12 Dirty Dozen categories as the coding scheme, while the MCAS responses were parsed according to the precondition category they best represented. An examination and qualitative analysis of these data sets as they related to the Dirty Dozen categories answered the following research questions: (1) How does the reactive data (MER) analysis compare to the proactive (MOSA) analysis in terms of the Dirty Dozen? Do they echo similar Dirty Dozen categories, or do they seem to reflect different aspects of the Dirty Dozen? (2) What other preconditions for maintenance error become apparent from the analyses? What do they have in common? How complete is the Dirty Dozen? (3) What insights can be gleaned from the subjective report data (MCAS) with regard to maintenance personnel’s perceptions of the organization’s safety culture? The results revealed not only the presence of each Dirty Dozen category to some degree, but also the difference in sensitivity of the MER (reactive) and MOSA (proactive) to the 12 Dirty Dozen categories. Recommendations for practice and future research are discussed.