PlumX Metrics

DiT-Gesture: A Speech-Only Approach to Stylized Gesture Generation

Electronics (Switzerland), ISSN: 2079-9292, Vol: 13, Issue: 9, 2024
  • Citations: 0
  • Usage: 0
  • Captures: 8
  • Mentions: 2
  • Social Media: 0

Metrics Details

  • Captures: 8
  • Mentions: 2
    • Blog Mentions: 1
    • News Mentions: 1

Most Recent News

New Findings from Macau University of Science and Technology in the Area of Electronics Described (DiT-Gesture: A Speech-Only Approach to Stylized Gesture Generation)

2024 MAY 15 (NewsRx) -- By a News Reporter-Staff News Editor at Electronics Daily -- New study results on electronics have been published. According to

Article Description

The generation of co-speech gestures for digital humans is an emerging area in the field of virtual human creation. Prior research has progressed by using acoustic and semantic information as input and adopting classification methods to identify a speaker’s identity and emotion for driving co-speech gesture generation. However, this endeavor still faces significant challenges. These challenges go beyond the intricate interplay among co-speech gestures, speech acoustics, and semantics; they also encompass the complexities associated with personality, emotion, and other obscure but important factors. This paper introduces “DiT-Gestures”, a speech-conditioned, diffusion-based, non-autoregressive transformer generative model built on the WavLM pre-trained model and a dynamic mask attention network (DMAN). It can produce individualized and stylized full-body co-speech gestures using only raw speech audio, eliminating the need for complex multimodal processing and manual annotation. Firstly, considering that speech audio contains acoustic and semantic features and conveys personality traits, emotions, and other subtle information related to accompanying gestures, we pioneer the adaptation of WavLM, a large-scale pre-trained model, to extract style from raw audio. Secondly, we replace the causal mask with a learnable dynamic mask for better local modeling in the neighborhood of the target frames. Extensive subjective evaluation experiments are conducted on the Trinity, ZEGGS, and BEAT datasets to confirm WavLM’s and the model’s ability to synthesize natural co-speech gestures with various styles.
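
The dynamic mask idea described above (a learnable mask replacing the hard causal mask so attention can focus on frames near each target frame) can be illustrated with a minimal sketch. This is not the authors' implementation: the per-head relative-distance bias, the layer sizes, and the PyTorch layout below are assumptions made purely for illustration.

# Minimal sketch (not the authors' code): self-attention whose hard causal
# mask is replaced by a learnable, distance-dependent bias that lets the
# layer emphasize frames in the neighborhood of each target frame.
# The per-head relative-distance parameterization and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMaskSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, max_len: int = 512):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.max_len = max_len
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One learnable bias per head and per query-key distance.
        self.rel_bias = nn.Parameter(torch.zeros(num_heads, 2 * max_len - 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, t, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5    # (b, h, t, t)
        # Learned bias indexed by relative distance replaces the causal mask;
        # after the softmax it re-weights attention around the target frame.
        pos = torch.arange(t, device=x.device)
        rel = pos[None, :] - pos[:, None] + self.max_len - 1       # (t, t)
        scores = scores + self.rel_bias[:, rel]                    # broadcasts over batch
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.proj(out)

# Example: two sequences of 120 frames with 256-dim speech features.
x = torch.randn(2, 120, 256)
print(DynamicMaskSelfAttention(dim=256)(x).shape)   # torch.Size([2, 120, 256])

Since the bias is learned rather than fixed, the network itself decides how wide a neighborhood around each frame to attend to, which is the behavior the abstract attributes to the dynamic mask.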

