Robust Team Communication Analytics with Transformer-Based Dialogue Modeling
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), ISSN: 1611-3349, Vol: 13916 LNAI, Page: 639-650
2023
- 3 Citations
- 6 Captures
Conference Paper Description
Adaptive training environments that provide reliable insight into team communication offer great potential for team training and assessment. However, traditional techniques for meaningful analysis of team communication, such as human transcription and speech classification, are especially resource-intensive without machine assistance. Additionally, developing computational models that perform robust team communication analytics from small datasets poses significant challenges. We present a transformer-based team communication analysis framework that classifies each team member utterance by its dialogue act and the type of information flow it exhibits. The framework uses domain-specific transfer learning of transformer-based language models pre-trained on large-scale external data, together with a prompt engineering method that represents both speaker utterances and speaker roles. Results from our evaluation on team communication data collected from live team training exercises suggest that the transformer-based framework, fine-tuned with team communication data, significantly outperforms state-of-the-art models on both dialogue act recognition and information flow classification, and additionally demonstrates improved domain-transfer capabilities.
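To make the prompt engineering idea concrete, the sketch below shows one plausible way to serialize a speaker's utterance, their role, and recent dialogue context into a single input string for a transformer classifier. The role-tag template, the `[SEP]` delimiter, and the role names are illustrative assumptions, not the authors' exact format.

```python
def build_prompt(context, speaker_role, utterance, n_context=3):
    """Serialize up to n_context preceding turns plus the target utterance,
    prefixing each turn with its speaker's role so the model sees both the
    utterance text and who said it (an assumed encoding, for illustration)."""
    turns = [f"[{role}] {text}" for role, text in context[-n_context:]]
    turns.append(f"[{speaker_role}] {utterance}")
    return " [SEP] ".join(turns)

# Hypothetical team training dialogue with role-tagged turns.
context = [
    ("medic", "Patient is stable."),
    ("leader", "Copy that."),
]
prompt = build_prompt(context, "medic", "Should we move to the next room?")
# prompt now carries speaker roles and context in one string, ready to be
# tokenized and passed to a fine-tuned sequence classifier.
```

A string of this shape would then be tokenized and fed to a pre-trained language model fine-tuned for dialogue act or information flow classification; the role tags give the model speaker information that plain utterance text lacks.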
Bibliographic Details
http://dx.doi.org/10.1007/978-3-031-36272-9_52
https://link.springer.com/chapter/10.1007/978-3-031-36272-9_52
Springer Science and Business Media LLC