
Geometry-aware 4D Video Generation for Robot Manipulation
Zeyi Liu¹   Shuang Li¹   Eric Cousineau²   Siyuan Feng²   Benjamin Burchfiel²   Shuran Song¹
¹Stanford University   ²Toyota Research Institute
Understanding and predicting the dynamics of the physical world can enhance a robot's ability to plan and interact effectively in complex environments. While recent video generation models have shown strong potential in modeling dynamic scenes, generating videos that are both temporally coherent and geometrically consistent across camera views remains a significant challenge. To address this, we propose a 4D video generation model that enforces multi-view 3D consistency by supervising the model with cross-view pointmap alignment during training. This geometric supervision enables the model to learn a shared 3D representation of the scene, allowing it to predict future video sequences from novel viewpoints based solely on the given RGB-D observations, without requiring camera poses as input. Compared to existing baselines, our method produces more visually stable and spatially aligned predictions across multiple simulated and real-world robotic datasets. We further show that the predicted 4D videos can be used to recover robot end-effector trajectories with an off-the-shelf 6DoF pose tracker, supporting robust robot manipulation and generalization to novel camera viewpoints.
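To make the cross-view pointmap alignment supervision concrete, below is a minimal PyTorch sketch. It assumes ground-truth depth, intrinsics, and camera-to-world poses are available at training time to build targets in a shared world frame (the model itself never receives camera poses); all function and argument names are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def unproject(depth, K):
    """Lift a depth map (B, 1, H, W) to camera-frame points (B, 3, H, W)."""
    B, _, H, W = depth.shape
    v, u = torch.meshgrid(
        torch.arange(H, device=depth.device, dtype=depth.dtype),
        torch.arange(W, device=depth.device, dtype=depth.dtype),
        indexing="ij",
    )
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).reshape(1, 3, -1)
    rays = torch.linalg.inv(K) @ pix               # (B, 3, H*W) camera rays
    return (rays * depth.reshape(B, 1, -1)).reshape(B, 3, H, W)

def pointmap_alignment_loss(pred_pm_a, pred_pm_b, depth_a, depth_b,
                            K_a, K_b, T_a, T_b):
    """L1 loss between predicted pointmaps and shared-world-frame targets.

    pred_pm_*: predicted pointmaps, (B, 3, H, W), in the shared frame.
    T_*: ground-truth camera-to-world poses, (B, 4, 4), training-time only.
    """
    def to_world(depth, K, T):
        B, _, H, W = depth.shape
        pts = unproject(depth, K).reshape(B, 3, -1)
        pts = T[:, :3, :3] @ pts + T[:, :3, 3:]    # rotate + translate
        return pts.reshape(B, 3, H, W)

    tgt_a = to_world(depth_a, K_a, T_a)
    tgt_b = to_world(depth_b, K_b, T_b)
    # Both targets live in the same world frame, so regressing each view's
    # prediction onto its target also aligns the two views with each other.
    return F.l1_loss(pred_pm_a, tgt_a) + F.l1_loss(pred_pm_b, tgt_b)
```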
Method

4D Video Generation for Robot Manipulation. Our model takes RGB-D observations from two camera views and predicts future pointmaps and RGB videos. To ensure cross-view consistency, we apply cross-attention across the two views in the U-Net decoders for pointmap prediction. The resulting 4D video can then be used to extract the 6DoF pose of the robot end-effector with an off-the-shelf pose tracker, enabling downstream manipulation tasks.
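As an illustration of the cross-view attention in the pointmap decoders, here is a minimal PyTorch sketch; the pre-norm residual structure and its symmetric application at each decoder resolution are assumptions for exposition, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Decoder features of one view attend to the other view's features."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat_q, feat_kv):
        # feat_*: (B, C, H, W) feature maps at one U-Net decoder resolution.
        B, C, H, W = feat_q.shape
        q = feat_q.flatten(2).transpose(1, 2)      # (B, H*W, C) queries
        kv = feat_kv.flatten(2).transpose(1, 2)    # (B, H*W, C) keys/values
        out, _ = self.attn(self.norm(q), kv, kv)   # attend across views
        out = q + out                              # residual connection
        return out.transpose(1, 2).reshape(B, C, H, W)

# Applied symmetrically so each view conditions on the other, e.g.:
#   feat_a, feat_b = xattn(feat_a, feat_b), xattn(feat_b, feat_a)
```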