Cross-modal prediction in speech depends on prior linguistic experience
Experimental Brain Research, ISSN: 1432-1106, Vol: 225, Issue: 4, Pages: 499-511
2013
- 6 Citations
- 61 Captures
Metrics Details
- Citations: 6
- Citation Indexes: 6
- CrossRef: 6
- Captures: 61
- Readers: 61
Article Description
The sight of a speaker's facial movements during the perception of a spoken message can benefit speech processing through online predictive mechanisms. Recent evidence suggests that these predictive mechanisms can operate across sensory modalities, that is, vision and audition. However, to date, behavioral and electrophysiological demonstrations of cross-modal prediction in speech have considered only the speaker's native language. Here, we address a question of current debate, namely whether the level of representation involved in cross-modal prediction is phonological or pre-phonological. We do this by testing participants in an unfamiliar language. If cross-modal prediction is predominantly based on phonological representations tuned to the phonemic categories of the native language of the listener, then it should be more effective in the listener's native language than in an unfamiliar one. We tested Spanish and English native speakers in an audiovisual matching paradigm that allowed us to evaluate visual-to-auditory prediction, using sentences in the participant's native language and in an unfamiliar language. The benefits of cross-modal prediction were only seen in the native language, regardless of the particular language or participant's linguistic background. This pattern of results implies that cross-modal visual-to-auditory prediction during speech processing makes strong use of phonological representations, rather than low-level spatiotemporal correlations across facial movements and sounds. © Springer-Verlag Berlin Heidelberg 2013.
Bibliographic Details
http://dx.doi.org/10.1007/s00221-012-3390-3; http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=84876679452&origin=inward; http://www.ncbi.nlm.nih.gov/pubmed/23386124; https://link.springer.com/article/10.1007/s00221-012-3390-3; https://research-repository.uwa.edu.au/en/publications/173eb11b-63af-4934-bab9-60c6872cd0e3
Springer Science and Business Media LLC