Reformulation of symptom descriptions in dialogue systems for fault diagnosis: How to ask for clarification?
International Journal of Human-Computer Studies, ISSN: 1071-5819, Vol: 145, Page: 102516
2021
- 3 Citations
- 44 Captures
Article Description
Psycholinguistic research can inform the design of dialogue systems for fault diagnosis. When users provide ambiguous symptom descriptions, dialogue systems can reformulate these descriptions to check the correctness of their interpretation. The present study investigated whether such reformulations should be performed by users or by dialogue systems, and how they should be phrased. In a Wizard-of-Oz study, subjects described fault symptoms to a chatbot, which subsequently asked for clarification. Experiment 1 compared the effects of requests for subjects to self-correct their descriptions with reformulations provided by the dialogue system in either common or technical terms. Experiment 2 combined reformulations in common and technical terms with each other and with pictures of fault symptoms. The results revealed that requests for self-correction increased solution times and verbal effort, that common terms decreased solution times but led to errors when seemingly easy reformulations were incorrect, and that technical terms did not mislead subjects into accepting them uncritically. Enrichment with pictures reduced the risk of accepting incorrect reformulations and was considered particularly helpful when combined with common terms. Lexical alignment with the dialogue system's reformulations was low, but subjects adopted its technical terms most readily when no common terms were provided. Taken together, the results suggest that combining reformulations in everyday language with visual information is most suitable to support grounding.
Bibliographic Details
- http://www.sciencedirect.com/science/article/pii/S107158192030118X
- http://dx.doi.org/10.1016/j.ijhcs.2020.102516
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=85088359782&origin=inward
- https://linkinghub.elsevier.com/retrieve/pii/S107158192030118X
- https://dx.doi.org/10.1016/j.ijhcs.2020.102516
Elsevier BV