What is it about?

We propose a deep learning-based model that takes both audio and text as input and generates gestures as output. The resulting gestures can be applied to both virtual agents and humanoid robots. We evaluated our approach both subjectively and objectively. The code and video are available on the project page: svito-zar.github.io/gesticulator
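
This summary does not spell out the model's architecture, so the following is only a minimal sketch of the input/output interface such a model exposes, not the paper's actual method. All layer sizes, feature dimensions, and names here are illustrative assumptions: per-frame audio features and time-aligned text embeddings are fused and regressed to a pose vector per frame.

```python
# Minimal sketch of a speech-to-gesture model in PyTorch.
# NOT the paper's architecture; all dimensions and names are
# illustrative assumptions.
import torch
import torch.nn as nn

class SpeechToGestureModel(nn.Module):
    def __init__(self, audio_dim=26, text_dim=768, hidden_dim=256, pose_dim=45):
        super().__init__()
        # Fuse per-frame audio features (e.g. a spectrogram) with
        # time-aligned text embeddings (e.g. from a language model).
        self.encoder = nn.Sequential(
            nn.Linear(audio_dim + text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Regress a pose vector (e.g. joint rotations) per frame.
        self.decoder = nn.Linear(hidden_dim, pose_dim)

    def forward(self, audio, text):
        # audio: (batch, frames, audio_dim); text: (batch, frames, text_dim)
        fused = torch.cat([audio, text], dim=-1)
        return self.decoder(self.encoder(fused))

# Usage: predict 100 frames of gesture for a batch of 2 utterances.
model = SpeechToGestureModel()
audio = torch.randn(2, 100, 26)
text = torch.randn(2, 100, 768)
poses = model(audio, text)  # shape: (2, 100, 45)
```

The key point the sketch captures is the multimodal fusion: both modalities are combined before gesture prediction, so the output can reflect semantic content from the text as well as rhythm from the audio.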

Why is it important?

While talking, people spontaneously gesticulate, which plays a key role in conveying information. Hence social agents (such as robots or virtual avatars) also need to gesticulate so that interactions with them are natural and smooth.

Perspectives

To my surprise and great joy, "Gesticulator" won the Best Paper Award at ICMI 2020! I am grateful to all the co-authors for their contributions.

Taras Kucherenko
KTH Royal Institute of Technology

Read the Original

This page is a summary of: Gesticulator: A framework for semantically-aware speech-driven gesture generation, October 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3382507.3418815.
You can read the full text via the DOI above.
