Research Papers
INFP: Audio-Driven Interactive Head Generation in Dyadic Conversations
Imagine having a conversation with a socially intelligent agent. It can
attentively listen to your words and offer visual and linguistic feedback
promptly. This seamless interaction allows for multiple rounds of conversation
to flow smoothly and naturally. In pursuit of actualizing it, we propose INFP,
a novel audio-driven head generation framework for dyadic interaction. Unlike
previous head generation works that only focus on single-sided communication,
or require manual role assignment and explicit role switching, our model drives
the agent portrait to alternate dynamically between speaking and listening states,
guided by the input dyadic audio. Specifically, INFP comprises a Motion-Based
Head Imitation stage and an Audio-Guided Motion Generation stage. The first
stage learns to project facial communicative behaviors from real-life
conversation videos into a low-dimensional motion latent space, and uses the
motion latent codes to animate a static image. The second stage learns the
mapping from the input dyadic audio to motion latent codes through denoising,
leading to the audio-driven head generation in interactive scenarios. To
facilitate this line of research, we introduce DyConv, a large-scale dataset of
rich dyadic conversations collected from the Internet. Extensive experiments
and visualizations demonstrate the superior performance and effectiveness of our
method. Project Page: https://grisoon.github.io/INFP/.
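
The abstract outlines a two-stage design: a motion latent space learned from real conversation videos, and a denoising model that maps dyadic audio to motion latent codes. Below is a minimal, hypothetical PyTorch sketch of that pipeline; all class names, feature dimensions, and the toy reverse-denoising update are assumptions made for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a two-stage audio-to-motion pipeline in the spirit of
# INFP. Dimensions, architectures, and the denoising schedule are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 64   # assumed size of the motion latent space
AUDIO_DIM = 128   # assumed size of per-frame dyadic audio features
T = 100           # number of frames in a clip


class MotionEncoder(nn.Module):
    """Stage 1 (assumed form): project per-frame facial motion features into
    low-dimensional latent codes that a renderer could use to animate a portrait."""
    def __init__(self, motion_feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_feat_dim, 128), nn.ReLU(),
            nn.Linear(128, LATENT_DIM),
        )

    def forward(self, motion_feats):            # (T, motion_feat_dim)
        return self.net(motion_feats)           # (T, LATENT_DIM)


class AudioConditionedDenoiser(nn.Module):
    """Stage 2 (assumed form): predict the noise added to motion latents,
    conditioned on both audio tracks of the dyadic conversation and a timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + 2 * AUDIO_DIM + 1, 256), nn.ReLU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, noisy_latents, audio_self, audio_other, t):
        t_embed = t.expand(noisy_latents.shape[0], 1)   # broadcast timestep per frame
        x = torch.cat([noisy_latents, audio_self, audio_other, t_embed], dim=-1)
        return self.net(x)


@torch.no_grad()
def generate_motion(denoiser, audio_self, audio_other, steps=50):
    """Toy reverse-diffusion loop: start from Gaussian noise and iteratively
    denoise into motion latent codes, guided by the dyadic audio."""
    latents = torch.randn(T, LATENT_DIM)
    for step in reversed(range(steps)):
        t = torch.tensor([[step / steps]])
        pred_noise = denoiser(latents, audio_self, audio_other, t)
        latents = latents - pred_noise / steps  # crude update, illustration only
    return latents


if __name__ == "__main__":
    denoiser = AudioConditionedDenoiser()
    audio_self = torch.randn(T, AUDIO_DIM)      # agent's own audio track
    audio_other = torch.randn(T, AUDIO_DIM)     # interlocutor's audio track
    motion = generate_motion(denoiser, audio_self, audio_other)
    print(motion.shape)                         # torch.Size([100, 64])
```

Because both audio tracks condition the denoiser at every frame, the generated latents can shift between speaking-like and listening-like motion without any explicit role switch, which is the behavior the abstract claims for the full system.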