Data Annotation for Support Ticket Data: A Literature Review
2024
Metrics Details
- Usage: 113
- Downloads: 99
- Abstract Views: 14
Artifact Description
Supervised Machine Learning is still the most prevalent Machine Learning approach across the field of Natural Language Processing. Because it requires labels to work properly, labeling text data sets is a decisive step in supervised Machine Learning projects. Many industry projects involving supervised Machine Learning never reach a productive phase because sufficient labeled data is lacking. Against this background, we conducted a Literature Review investigating state-of-the-art approaches to labeling text data sets for later Natural Language Processing projects. We concentrated on solutions that could be applied to annotate a support ticket data set. We found three major approaches: Crowdsourcing, Learning Algorithms, and Weak Supervision. We also found that annotation projects appear to involve a trade-off between label quality and cost/effort. We discuss our findings and share our thoughts on the particular challenges of annotating a support ticket data set.
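To make the Weak Supervision approach concrete, below is a minimal sketch of the common labeling-function pattern: several noisy heuristics vote on each ticket and a majority vote produces a weak label. The ticket categories, keywords, and function names are hypothetical illustrations, not taken from the reviewed paper.

```python
# Minimal weak-supervision sketch (hypothetical categories and keyword
# rules, for illustration only). Each labeling function is a cheap,
# noisy heuristic; a majority vote aggregates their outputs.

from collections import Counter

ABSTAIN, NETWORK, BILLING, ACCESS = -1, 0, 1, 2

def lf_network(ticket: str) -> int:
    """Heuristic: connectivity keywords suggest a network issue."""
    return NETWORK if any(w in ticket.lower() for w in ("vpn", "wifi", "network")) else ABSTAIN

def lf_billing(ticket: str) -> int:
    """Heuristic: invoice/charge wording suggests a billing issue."""
    return BILLING if any(w in ticket.lower() for w in ("invoice", "charge", "refund")) else ABSTAIN

def lf_access(ticket: str) -> int:
    """Heuristic: login/password wording suggests an access issue."""
    return ACCESS if any(w in ticket.lower() for w in ("password", "login", "locked out")) else ABSTAIN

LABELING_FUNCTIONS = [lf_network, lf_billing, lf_access]

def weak_label(ticket: str) -> int:
    """Aggregate labeling-function votes; ABSTAIN if no function fires."""
    votes = [v for v in (lf(ticket) for lf in LABELING_FUNCTIONS) if v != ABSTAIN]
    return Counter(votes).most_common(1)[0][0] if votes else ABSTAIN

tickets = [
    "Cannot connect to the VPN since this morning",
    "I was charged twice, please issue a refund",
    "Locked out after a password reset",
]
for t in tickets:
    print(weak_label(t), "<-", t)
```

The appeal of this pattern is the cost side of the quality/cost trade-off noted above: heuristics label an entire corpus at near-zero marginal cost, at the price of noisier labels than human annotation would yield.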