NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer

ICLR 2025

Meng You¹†, Zhiyu Zhu¹†*, Hui Liu², Junhui Hou¹
¹City University of Hong Kong  ²Saint Francis University

Visual demonstrations of NVS-Solver with input of (a) a single view, (b) a monocular video, and (c) multiple views (2 views). The middle row shows the algorithm's inputs for each scenario.

Abstract

By harnessing the potent generative capabilities of pre-trained large video diffusion models, we propose a new novel view synthesis paradigm that requires no training. The proposed method adaptively modulates the diffusion sampling process with the given views, enabling visually pleasing synthesis from single or multiple views of static scenes, as well as from monocular videos of dynamic scenes. Specifically, built upon our theoretical modeling, we iteratively modulate the score function with the given scene priors, represented as warped input views, to control the video diffusion process. Moreover, by theoretically deriving a bound on the estimation error, we perform this modulation adaptively with respect to the view pose and the number of diffusion steps. Extensive evaluations on both static and dynamic scenes substantiate the significant superiority of our method over state-of-the-art approaches, both quantitatively and qualitatively.
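To make the sampling scheme concrete, below is a minimal PyTorch sketch of a diffusion sampling loop whose clean-video estimate is modulated at every step by the warped input views, with a weight that decays over the diffusion steps. Everything here is an illustrative assumption: the names (modulated_sampling, denoise), the linear noise schedule, and the simple step-proportional weight stand in for the paper's theoretically derived, pose- and step-adaptive modulation.

import torch

def modulated_sampling(denoise, warped, mask, sigmas, lam_max=0.98):
    # denoise: hypothetical wrapper around a pretrained video diffusion
    #          model, mapping (noisy video, noise level) -> clean estimate.
    # warped:  input views warped to the target camera poses (scene prior).
    # mask:    1 where the warp produced valid pixels, 0 elsewhere.
    # sigmas:  decreasing noise levels, ending at 0.
    x = torch.randn_like(warped) * sigmas[0]          # start from pure noise
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        x0_hat = denoise(x, sigma)                    # denoiser's clean-video estimate
        # Step-adaptive modulation weight: trust the warped prior more at
        # early (noisy) steps. The paper's weight also adapts to the view
        # pose; only the step dependence is sketched here.
        lam = lam_max * sigma / sigmas[0]
        x0_mod = torch.where(mask.bool(),
                             lam * warped + (1.0 - lam) * x0_hat,
                             x0_hat)
        # Deterministic (DDIM-style) update toward the next noise level.
        x = x0_mod + sigma_next * (x - x0_mod) / sigma
    return x

# Toy usage with a dummy denoiser; a real setup would plug in a pretrained
# video diffusion model and a depth-based warping module for the prior.
warped = torch.zeros(1, 4, 3, 64, 64)                 # (batch, frames, C, H, W)
mask = torch.ones_like(warped)
sigmas = torch.linspace(10.0, 0.0, 26)
out = modulated_sampling(lambda x, s: torch.zeros_like(x), warped, mask, sigmas)
print(out.shape)                                      # torch.Size([1, 4, 3, 64, 64])

Note the torch.where: modulation is confined to pixels the warp actually covers, so regions unseen in the input views are left entirely to the generative model.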

Method

[Method overview figure.]

Video

BibTeX

@inproceedings{you2025nvs,
  title={NVS-Solver: Video Diffusion Model as Zero-Shot Novel View Synthesizer},
  author={You, Meng and Zhu, Zhiyu and Liu, Hui and Hou, Junhui},
  booktitle={International Conference on Learning Representations},
  year={2025}
}