Real-Time Control of an Articulatory-Based Speech Synthesizer for Brain Computer Interfaces
PLoS Computational Biology, ISSN: 1553-7358, Vol: 12, Issue: 11, Page: e1005119
2016
- 61 Citations
- 109 Captures
- 5 Mentions
Metric Options: Counts · 1 Year · 3 Year

Selecting the 1-year or 3-year option will change the metrics count to percentiles, illustrating how an article or review compares to other articles or reviews within the selected time period in the same journal. Selecting the 1-year option compares the metrics against other articles/reviews that were also published in the same calendar year. Selecting the 3-year option compares the metrics against other articles/reviews that were also published in the same calendar year plus the two years prior.
Example: if you select the 1-year option for an article published in 2019 and a metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019. If you select the 3-year option for the same article published in 2019 and the metric category shows 90%, that means that the article or review is performing better than 90% of the other articles/reviews published in that journal in 2019, 2018 and 2017.
Citation Benchmarking is provided by Scopus and SciVal and is different from the metrics context provided by PlumX Metrics.
Metrics Details
- Citations: 61 (Citation Indexes: 61; CrossRef: 36)
- Captures: 109 (Readers: 109)
- Mentions: 5 (News: 3; Blog: 2)
Most Recent Blog
Voice synthesiser produces speech from muscle movement
It's a step towards giving speech back to people with paralysis.
Most Recent News
Researchers Are Translating Brain Activity Into Speech
Scientists are getting closer to developing a computer-generated tool to allow people with severe speech impairments — like the late cosmologist Stephen Hawking — to communicate verbally. …
Article Description
Restoring natural speech in paralyzed and aphasic people could be achieved using a Brain-Computer Interface (BCI) controlling a speech synthesizer in real-time. To reach this goal, a prerequisite is to develop a speech synthesizer that produces intelligible speech in real-time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real-time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded from a reference speaker synchronously with the produced speech signal. This DNN is then used in both offline and online modes to map the positions of sensors glued to different speech articulators into acoustic parameters, which are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained, as assessed by a perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed the real-time control of the synthesizer by both the reference speaker and new speakers, in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results pave the way for future speech BCI applications using such an articulatory-based speech synthesizer.
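The core of the pipeline described above is a frame-by-frame articulatory-to-acoustic mapping: a DNN takes the EMA sensor coordinates for one time frame and outputs acoustic parameters that a vocoder turns into audio. The following is a minimal illustrative sketch of such a feedforward mapping; the input/output dimensions, layer sizes, activation choice, and random (untrained) weights are all assumptions for illustration, not the authors' actual configuration, and the vocoder stage is omitted.

```python
# Hypothetical sketch of an articulatory-to-acoustic DNN mapping.
# One input frame = EMA sensor coordinates (tongue, jaw, velum, lips);
# one output frame = acoustic parameters for a vocoder.
# All dimensions and weights here are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)

N_ART = 18       # assumed: coordinates of sensors on the articulators
N_ACOUSTIC = 25  # assumed: vocoder parameters per frame


def init_layer(n_in, n_out):
    """Random initialization for one dense layer (weights, biases)."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)


# Three hidden layers with tanh activations (illustrative architecture).
sizes = [N_ART, 128, 128, 128, N_ACOUSTIC]
layers = [init_layer(a, b) for a, b in zip(sizes[:-1], sizes[1:])]


def articulatory_to_acoustic(x):
    """Map one frame of articulator positions to acoustic parameters."""
    h = x
    for W, b in layers[:-1]:
        h = np.tanh(h @ W + b)   # hidden layers
    W, b = layers[-1]
    return h @ W + b             # linear output layer

frame = rng.normal(size=N_ART)          # one (synthetic) EMA frame
params = articulatory_to_acoustic(frame)
print(params.shape)  # (25,)
```

In the paper, a short calibration step (compensating for sensor-position and articulatory differences between a new speaker and the reference speaker) precedes this mapping when the synthesizer is driven by someone other than the reference speaker; that step is not shown here.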
Bibliographic Details
- http://www.scopus.com/inward/record.url?partnerID=HzOxMe3b&scp=84999828343&origin=inward
- http://dx.doi.org/10.1371/journal.pcbi.1005119
- http://www.ncbi.nlm.nih.gov/pubmed/27880768
- https://dx.plos.org/10.1371/journal.pcbi.1005119
- https://dx.doi.org/10.1371/journal.pcbi.1005119
- https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005119
Public Library of Science (PLoS)