ZeroHSI: Zero-Shot 4D Human-Scene Interaction by Video Generation

Hongjie Li*     Hong-Xing "Koven" Yu*     Jiaman Li     Jiajun Wu    
Stanford University   
*Contributed Equally  
ZeroHSI takes a 3D scene, an interactable object, a language description, and initial states as input to synthesize both human and dynamic object motion sequences. By distilling interactions from video generation models, ZeroHSI enables zero-shot synthesis of natural human-scene interactions in various 3D environments, supporting both static scenes and dynamic object manipulations even in reconstructed real-world scenes.
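To make the task setup concrete, the sketch below writes out the inputs and outputs described above as a hypothetical Python interface; the class and field names are illustrative assumptions, not the authors' released API.

```python
from dataclasses import dataclass
import numpy as np

# Hypothetical input/output signature for the ZeroHSI task setup.
# All names and types here are assumptions for illustration only.

@dataclass
class ZeroHSIInput:
    scene: str                     # path to the 3D scene (asset or real-scene reconstruction)
    interactable_object: str | None  # path to the rigid object to manipulate; None for static scenes
    prompt: str                    # language description of the desired interaction
    init_human_pose: np.ndarray    # initial human pose parameters
    init_object_pose: np.ndarray   # initial object 6D pose

@dataclass
class ZeroHSIOutput:
    human_motion: np.ndarray       # per-frame human pose parameters, shape (T, D)
    object_motion: np.ndarray      # per-frame object 6D poses, shape (T, 4, 4)
```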

Real Scene Demos

Comparison in Dynamic Scenarios

Comparison in Static Scenarios

Abstract

Human-scene interaction (HSI) generation is crucial for applications in embodied AI, virtual reality, and robotics. While existing methods can synthesize realistic human motions in 3D scenes and generate plausible human-object interactions, they heavily rely on datasets containing paired 3D scene and motion capture data, which are expensive and time-consuming to collect across diverse environments and interactions. We present ZeroHSI, a novel approach that enables zero-shot 4D human-scene interaction synthesis by integrating video generation and neural human rendering. Our key insight is to leverage the rich motion priors learned by state-of-the-art video generation models, which have been trained on vast amounts of natural human movements and interactions, and to use differentiable rendering to reconstruct human-scene interactions. ZeroHSI can synthesize realistic human motions in both static scenes and environments with dynamic objects, without requiring any ground-truth motion data. We evaluate ZeroHSI on a curated dataset of diverse indoor and outdoor scenes paired with various interaction prompts, demonstrating its ability to generate diverse and contextually appropriate human-scene interactions.

Method Overview

Our approach begins with HSI video generation conditioned on the rendered initial state and text prompt. Through differentiable neural rendering, we optimize per-frame camera pose, human pose parameters, and object 6D pose by minimizing the discrepancy between the rendered and generated reference videos.
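The following is a minimal sketch of this per-frame optimization, assuming a PyTorch-style setup with a differentiable neural human renderer and a reference video produced by an off-the-shelf video generation model. The function names, loss, and hyperparameters are illustrative placeholders, not the released ZeroHSI implementation.

```python
import torch

def fit_frame(scene, reference_frame, init_human, init_object, init_camera,
              render, num_steps=300, lr=1e-2):
    """Fit one frame by matching a differentiable render to the generated reference frame."""
    # Optimizable variables: human pose parameters, object 6D pose, camera pose.
    human_pose = init_human.clone().requires_grad_(True)
    object_pose = init_object.clone().requires_grad_(True)
    camera_pose = init_camera.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([human_pose, object_pose, camera_pose], lr=lr)

    for _ in range(num_steps):
        optimizer.zero_grad()
        # Differentiably render the current human/object/camera state in the scene.
        rendered = render(scene, human_pose, object_pose, camera_pose)
        # Photometric discrepancy against the generated reference frame.
        loss = torch.nn.functional.mse_loss(rendered, reference_frame)
        loss.backward()
        optimizer.step()

    return human_pose.detach(), object_pose.detach(), camera_pose.detach()

def fit_video(scene, reference_frames, init_human, init_object, init_camera, render):
    """Fit all frames sequentially, warm-starting each frame from the previous result."""
    poses = []
    human, obj, cam = init_human, init_object, init_camera
    for frame in reference_frames:
        human, obj, cam = fit_frame(scene, frame, human, obj, cam, render)
        poses.append((human, obj, cam))
    return poses
```

Warm-starting each frame from the previous solution keeps the recovered human and object trajectories temporally coherent without requiring any ground-truth motion supervision.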

The AnyInteraction Dataset

Scene gallery: Bedroom, Living Room, Gym, Bar, Playground, Greenhouse, Cafe, Store, Garden, Bicycle, Room, Truck.
We curate the AnyInteraction dataset to evaluate zero-shot HSI synthesis. The dataset encompasses 12 diverse scenes (7 indoor and 5 outdoor) from various sources, including TRUMANS, public 3D assets, and reconstructed real scenes from the Mip-NeRF 360 and Tanks and Temples datasets. It features 7 types of rigid dynamic objects (Guitar, Barbell, Watering Can, Office Chair, Shopping Cart, Vase, and Mower), supporting 22 evaluation instances comprising 13 static interactions and 9 dynamic object interactions.
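For reference, the dataset composition described above can be summarized as the following Python dictionary; the layout is an assumption for readability, not the released dataset format.

```python
# Illustrative summary of the AnyInteraction dataset; structure is assumed, not official.
ANYINTERACTION = {
    # 12 scenes (7 indoor, 5 outdoor) drawn from TRUMANS, public 3D assets,
    # and real-world reconstructions (Mip-NeRF 360, Tanks and Temples).
    "scenes": [
        "Bedroom", "Living Room", "Gym", "Bar", "Playground", "Greenhouse",
        "Cafe", "Store", "Garden", "Bicycle", "Room", "Truck",
    ],
    # 7 rigid dynamic object types used for manipulation interactions.
    "dynamic_objects": [
        "Guitar", "Barbell", "Watering Can", "Office Chair",
        "Shopping Cart", "Vase", "Mower",
    ],
    # 22 evaluation instances in total.
    "evaluation_instances": {"static": 13, "dynamic_object": 9},
}
```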