OmniHuman-1: Rethinking the Scaling-Up of One-Stage Conditioned Human Animation Models

Keywords: OmniHuman, human animation, video generation, data scaling, mixed conditioning
arXiv:2502.01061 [arXiv · PDF]
Abstract
End-to-end human animation, such as audio-driven talking-human generation, has advanced notably in recent years. However, existing methods still struggle to scale up the way large general-purpose video generation models do, which limits their potential in real applications. In this paper, we propose OmniHuman, a Diffusion Transformer-based framework that scales up training data by mixing motion-related conditions into the training phase. To this end, we introduce two training principles for these mixed conditions, along with the corresponding model architecture and inference strategy. These designs enable OmniHuman to fully leverage data-driven motion generation, ultimately achieving highly realistic human video generation. More importantly, OmniHuman supports various portrait contents (face close-up, portrait, half-body, full-body), handles both talking and singing, manages human-object interactions and challenging body poses, and accommodates different image styles. Compared to existing end-to-end audio-driven methods, OmniHuman not only produces more realistic videos but also offers greater flexibility in inputs, supporting multiple driving modalities (audio-driven, video-driven, and combined driving signals). Video samples are provided on the project page (https://omnihuman-lab.github.io).
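
The mixed-conditioning idea in the abstract can be pictured as a training loop that randomly activates a subset of motion-related conditions (e.g., text, audio, pose) per step, keeping weaker conditions more often than stronger ones so that clips lacking the stronger annotations still contribute. The Python sketch below is illustrative only: the ratio values, module names (ToyConditionedDenoiser, sample_active_conditions), and additive injection scheme are assumptions for exposition, not the paper's actual architecture.

import random
import torch
import torch.nn as nn

# Hypothetical per-condition training ratios: weaker conditions (text) are
# activated more often than stronger ones (pose), so clips that lack the
# stronger annotations still provide gradient signal.
CONDITION_RATIOS = {"text": 1.0, "audio": 0.5, "pose": 0.25}

class ToyConditionedDenoiser(nn.Module):
    """Stand-in for a DiT-style denoiser that accepts a variable condition set."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.cond_proj = nn.ModuleDict(
            {name: nn.Linear(dim, dim) for name in CONDITION_RATIOS}
        )
        self.backbone = nn.Sequential(
            nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )

    def forward(self, noisy_latents, conditions):
        h = noisy_latents
        for name, feat in conditions.items():
            h = h + self.cond_proj[name](feat)  # inject each active condition
        return self.backbone(h)  # predict the denoising target

def sample_active_conditions(available):
    """Randomly keep each available condition according to its ratio."""
    return {
        name: feat
        for name, feat in available.items()
        if random.random() < CONDITION_RATIOS[name]
    }

# One illustrative training step on dummy tensors.
model = ToyConditionedDenoiser()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

noisy_latents = torch.randn(2, 64)  # stand-in for noised video latents
target = torch.randn(2, 64)         # stand-in for the diffusion target
available = {name: torch.randn(2, 64) for name in CONDITION_RATIOS}

optimizer.zero_grad()
conditions = sample_active_conditions(available)
loss = nn.functional.mse_loss(model(noisy_latents, conditions), target)
loss.backward()
optimizer.step()

Under this kind of scheme, a single trained network can then be driven at inference by whichever subset of conditions is available (audio only, pose only, or a combination), which is consistent with the multi-modality flexibility the abstract claims.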