What is it about?
We propose a deep-learning-based model that takes both audio and text as input and generates gestures as output. The resulting gestures can be applied to both virtual agents and humanoid robots. We evaluated our approach both subjectively and objectively. The code and video are available on the project page: svito-zar.github.io/gesticulator
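To make the input/output setup concrete, here is a minimal sketch of a model that maps time-aligned audio and text features to per-frame body poses. The class name, feature dimensions, and network layers below are hypothetical illustrations, not the architecture from the paper; the actual implementation is in the released code on the project page.

```python
import torch
import torch.nn as nn

class SpeechToGestureModel(nn.Module):
    """Minimal sketch (hypothetical): per-frame audio features and
    time-aligned text features are concatenated and regressed to a
    pose vector (e.g. joint rotations). The real Gesticulator
    architecture differs; see svito-zar.github.io/gesticulator.
    """

    def __init__(self, audio_dim=26, text_dim=768, pose_dim=45, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim + text_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, pose_dim),  # one pose per input frame
        )

    def forward(self, audio_feats, text_feats):
        # audio_feats: (batch, frames, audio_dim)
        # text_feats:  (batch, frames, text_dim), aligned with the audio
        x = torch.cat([audio_feats, text_feats], dim=-1)
        return self.net(x)  # (batch, frames, pose_dim)


# Example with made-up dimensions: 2 clips, 60 frames each
model = SpeechToGestureModel()
audio = torch.randn(2, 60, 26)
text = torch.randn(2, 60, 768)
poses = model(audio, text)
print(poses.shape)  # torch.Size([2, 60, 45])
```

The key point the sketch illustrates is the multimodal conditioning: both modalities are fed to the network jointly, so the generated motion can reflect semantics from the text as well as rhythm and prosody from the audio.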
Why is it important?
While talking, people spontaneously gesticulate, and these gestures play a key role in conveying information. Hence, social agents (such as robots or virtual avatars) also need to gesticulate so that interactions with them are natural and smooth.
Read the Original
This page is a summary of: Gesticulator: A framework for semantically-aware speech-driven gesture generation, October 2020, ACM (Association for Computing Machinery),
DOI: 10.1145/3382507.3418815.