Motion decoupling technology enables cinematic dynamic control
The framework processes camera motion parameters (6DoF data such as pitch and azimuth angles) and the object motion field (represented as 3D optical flow) in separate branches of a two-branch neural network. Through a JSON configuration file, the user can precisely specify: 1) whether the camera follows an orbital or a first-person push-in trajectory; 2) whether object motion is driven by keyframe interpolation or by physics simulation. In automotive scenario tests, the system automatically separates vehicle body motion from camera jitter, achieving a motion-separation accuracy of 89.2%, well above NVIDIA's Vid2Vid baseline. The technique has already been used to generate background animation in several metaverse projects.
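To make the two decoupled motion branches concrete, here is a minimal sketch of what such a JSON configuration might look like, written as a Python script that builds and saves the file. The key names (camera_motion, object_motion, mode, driver, and so on) are illustrative assumptions for this sketch, not GenXD's actual configuration schema.

```python
import json

# Hypothetical configuration separating the two motion branches described above.
# All key names are assumptions for illustration; they are not GenXD's real schema.
config = {
    "camera_motion": {
        "mode": "orbit",            # assumed alternative: "first_person" (push-in)
        "pitch_deg": 15.0,          # 6DoF rotation parameters
        "azimuth_deg": 90.0,
        "radius_m": 4.0,            # orbit radius around the subject
    },
    "object_motion": {
        "driver": "keyframe",       # assumed alternative: "physics"
        "keyframes": [
            {"t": 0.0, "translation": [0.0, 0.0, 0.0]},
            {"t": 2.0, "translation": [1.5, 0.0, 0.0]},
        ],
    },
}

# Write the configuration to disk so it can be passed to the generation pipeline.
with open("motion_config.json", "w") as f:
    json.dump(config, f, indent=2)
```

Keeping camera and object parameters in separate top-level blocks mirrors the two-branch design: each branch can be edited or swapped independently without touching the other.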
This answer is based on the article "GenXD: open source framework for generating videos of arbitrary 3D and 4D scenes".