Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation

1Alibaba DAMO Academy, 2Fudan University, 3Hupan Lab
Given a single-view image from various domains (real-world, text-to-image, animation), the proposed Uni3C can generate impressive videos under arbitrary camera trajectories, driven human character motions, or both conditions at once. Uni3C further supports camera-controlled motion transfer. Note that no joint training is required for Uni3C.



Abstract

Camera and human motion controls have been extensively studied for video generation, but existing approaches typically address them separately, suffering from the scarcity of data with high-quality annotations for both aspects. To overcome this, we present Uni3C, a unified 3D-enhanced framework for precise control of both camera and human motion in video generation. Uni3C includes two key contributions. First, we propose PCDController, a plug-and-play control module trained with a frozen video generative backbone, which utilizes unprojected point clouds from monocular depth to achieve accurate camera control. By leveraging the strong 3D priors of point clouds and the powerful capabilities of video foundation models, PCDController shows impressive generalization, performing well regardless of whether the inference backbone is frozen or fine-tuned. This flexibility enables different modules of Uni3C to be trained in specific domains, i.e., either camera control or human motion control, reducing the dependence on jointly annotated data. Second, we propose a jointly aligned 3D world guidance for the inference phase that seamlessly integrates both scenic point clouds and SMPL-X characters to unify the control signals for camera and human motion, respectively. Extensive experiments confirm that PCDController enjoys strong robustness in driving camera motion for fine-tuned backbones of video generation. Uni3C substantially outperforms competitors in both camera controllability and human motion quality. Additionally, we collect tailored validation sets featuring challenging camera movements and human actions to validate the effectiveness of our method.

Method

The assignment of multi-modal conditions for Uni3C.

Overview of the PCDController pipeline. PCDController is built as a lightweight DiT trained from scratch. We first obtain point clouds by unprojecting the monocular depth estimated from the first view. The point clouds are then warped and rendered into a condition video. The input conditions for PCDController comprise the rendered point clouds, Plücker ray embeddings, and the noisy latent. Note that only the PCDController and the camera encoder are trainable in our framework.
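Below is a minimal sketch, assuming a standard pinhole camera model, of the two geometric conditions named in this caption: unprojecting the first-view monocular depth into a point cloud, and building per-pixel Plücker ray embeddings for a target camera with world-to-camera extrinsics [R | t]. The function names and conventions are illustrative, not the released implementation.

      # Minimal sketch (not the released implementation) of the geometric conditions.
      import numpy as np

      def unproject_depth(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
          """Lift an (H, W) depth map into an (H*W, 3) point cloud in camera space."""
          H, W = depth.shape
          u, v = np.meshgrid(np.arange(W), np.arange(H))
          pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
          rays = pix @ np.linalg.inv(K).T          # back-projected rays with z = 1
          return rays * depth.reshape(-1, 1)       # scale each ray by its depth

      def plucker_rays(K: np.ndarray, R: np.ndarray, t: np.ndarray,
                       H: int, W: int) -> np.ndarray:
          """Per-pixel Plücker coordinates (d, o x d) for world-to-camera extrinsics [R | t].
          Returns an (H, W, 6) embedding."""
          u, v = np.meshgrid(np.arange(W), np.arange(H))
          pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
          dirs_world = (pix @ np.linalg.inv(K).T) @ R   # rotate camera rays into world space (R^T d)
          dirs_world /= np.linalg.norm(dirs_world, axis=-1, keepdims=True)
          origin = -R.T @ t                             # camera center in world space
          moments = np.cross(origin, dirs_world)        # Plücker moment o x d per pixel
          return np.concatenate([dirs_world, moments], axis=-1).reshape(H, W, 6)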

Overview of the global 3D world guidance. (a) We first align the SMPL-X characters from the human world space to the environment world space comprising dense point clouds. (b) GeoCalib is used to calibrate the gravity direction of SMPL-X. (c) The rigid transformation coefficients are employed to align the whole SMPL-X sequence. We re-render all aligned conditions under specific camera trajectories as the global 3D world guidance.
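Below is a minimal sketch of step (c), assuming the rigid (optionally scaled) transform is estimated from a few corresponding anchor points (e.g., the character's ground-contact points versus nearby scene points) with an Umeyama-style solver and then shared across all frames; both the solver and the anchor choice are illustrative assumptions rather than the paper's exact procedure.

      # Minimal sketch: align a SMPL-X vertex sequence to the environment world space.
      import numpy as np

      def umeyama_alignment(src: np.ndarray, dst: np.ndarray, with_scale: bool = True):
          """Least-squares similarity transform such that s * R @ src + t ≈ dst.
          src, dst: (N, 3) corresponding points. Returns (s, R, t)."""
          mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
          src_c, dst_c = src - mu_src, dst - mu_dst
          cov = dst_c.T @ src_c / len(src)              # cross-covariance of the point sets
          U, D, Vt = np.linalg.svd(cov)
          S = np.eye(3)
          if np.linalg.det(U) * np.linalg.det(Vt) < 0:  # guard against reflections
              S[2, 2] = -1.0
          R = U @ S @ Vt
          var_src = (src_c ** 2).sum() / len(src)
          s = float(np.trace(np.diag(D) @ S) / var_src) if with_scale else 1.0
          t = mu_dst - s * R @ mu_src
          return s, R, t

      def align_smplx_sequence(verts_seq: np.ndarray,
                               anchors_human: np.ndarray,
                               anchors_scene: np.ndarray) -> np.ndarray:
          """Apply one shared transform to a (T, V, 3) SMPL-X vertex sequence."""
          s, R, t = umeyama_alignment(anchors_human, anchors_scene)
          return s * verts_seq @ R.T + t

In this sketch, the gravity calibration from GeoCalib in step (b) would be applied to the character before estimating the shared transform, so that only a planar rotation, scale, and translation remain to be solved.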

More camera control results

More unified (human and camera) control results

Animate anyone from any viewpoint with any motion

BibTeX


      @article{cao2025uni3c,
        title={Uni3C: Unifying Precisely 3D-Enhanced Camera and Human Motion Controls for Video Generation},
        author={Cao, Chenjie and Zhou, Jingkai and Li, Shikai and Liang, Jingyun and Yu, Chaohui and Wang, Fan and Xue, Xiangyang and Fu, Yanwei},
        journal={arXiv preprint arXiv:2504.14899},
        year={2025}
      }