A Study of Finetuning Video Transformers for Multi-view Geometry Tasks

AAAI 2026

¹The Hong Kong University of Science and Technology  ²Microsoft Research Asia

TL;DR: We demonstrate that general-purpose video foundation models can be transferred to geometric tasks such as optical flow estimation, stereo matching, and two-view depth estimation, achieving strong results.

Abstract

This paper presents an investigation of vision transformer learning for multi-view geometry tasks, such as optical flow estimation, by fine-tuning video foundation models. Unlike previous methods that involve custom architectural designs and task-specific pretraining, our research finds that general-purpose models pretrained on videos can be readily transferred to multi-view problems with minimal adaptation. The core insight is that general-purpose attention between patches learns the temporal and spatial information needed for geometric reasoning. We demonstrate that appending a linear decoder to the Transformer backbone produces satisfactory results, and that iterative refinement can further elevate performance to state-of-the-art levels. This conceptually simple approach achieves top cross-dataset generalization for optical flow estimation, with end-point errors (EPE) of 0.69, 1.78, and 3.15 on the Sintel clean, Sintel final, and KITTI datasets, respectively. Our method additionally establishes a new record on the online test benchmarks, with EPE values of 0.79 and 1.88 and an F1 value of 3.79. Applications to 3D depth estimation and stereo matching also show strong performance, illustrating the versatility of video-pretrained models in addressing geometric vision tasks.


Figure 1. Overview of GeoViT. Part (a) illustrates the adaptation of positional embeddings in pretrained 3D ViTs for two-frame tasks. The pretrained spatial Pos. Embd. are interpolated to match the desired input size. The pretrained temporal Pos. Embd., which accounts for 8 frames, is split into two halves; the averages of the first and second halves serve as the temporal embeddings of the source and target images, respectively. Part (b) shows our iterative refinement decoding pipeline, using optical flow for illustration. The input target image is dynamically warped based on the last-step prediction \(g_{t-1}\) so that the input pair corresponds to the ground-truth residual for this step. The source and (warped) target images are then patchified, combined with the adapted positional embeddings, and fed to the pretrained 3D ViT for feature extraction. The decoder takes the source-image features and the last-step prediction \(g_{t-1}\) and produces a correction \(\Delta g_t\). Adding \(g_{t-1}\) and this correction gives the current-step prediction \(g_t\).

Method

Our work transfers the encoder of video foundation models to multi-view geometry tasks. Any video foundation model whose encoder follows a Transformer architecture can be used. To process video data, the Transformer splits the spatio-temporal input into 3D patches, adds spatial and temporal positional encodings, and feeds the resulting visual tokens into self-attention blocks. To adapt pretrained 3D ViTs for two-frame tasks, we interpolate the 2D spatial positional encodings to match the desired input size at the fine-tuning stage, and collapse the pretrained 8-frame temporal positional encodings into two embeddings, one for the source image and one for the target image, by averaging each half. See Figure 1(a) for a visual illustration.
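To make this adaptation concrete, below is a minimal PyTorch sketch of the step. It assumes the pretrained 3D ViT stores separable positional embeddings as a spatial table of shape [1, h*w, C] and an 8-frame temporal table of shape [1, 8, C]; the function name and tensor layouts are illustrative assumptions rather than the released implementation.

```python
import torch
import torch.nn.functional as F

def adapt_pos_embed(pos_spatial, pos_temporal, old_hw, new_hw):
    """Adapt pretrained 3D-ViT positional embeddings to a two-frame input.

    pos_spatial:  [1, old_h * old_w, C] pretrained spatial embeddings.
    pos_temporal: [1, 8, C] pretrained temporal embeddings (8 frames).
    old_hw, new_hw: (height, width) of the token grid before/after adaptation.
    """
    c = pos_spatial.shape[-1]

    # (a) Bilinearly interpolate the spatial embeddings to the new token grid.
    grid = pos_spatial.reshape(1, *old_hw, c).permute(0, 3, 1, 2)   # [1, C, h, w]
    grid = F.interpolate(grid, size=new_hw, mode="bilinear", align_corners=False)
    new_spatial = grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], c)

    # (b) Split the 8-frame temporal embeddings into two halves and average each:
    # the first half becomes the source-frame embedding, the second the target-frame one.
    half = pos_temporal.shape[1] // 2
    src_t = pos_temporal[:, :half].mean(dim=1, keepdim=True)        # [1, 1, C]
    tgt_t = pos_temporal[:, half:].mean(dim=1, keepdim=True)        # [1, 1, C]
    new_temporal = torch.cat([src_t, tgt_t], dim=1)                 # [1, 2, C]

    return new_spatial, new_temporal
```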

We then incorporate an iterative refinement mechanism into the 3D ViT. Given an image pair \(I_1, I_2\), the residual geometric property \(\Delta g_t\) at each iteration \(t\) is predicted as follows:

\[ \Delta g_t = F_{\text{dec}}(F_{\text{enc}}(I_1, \text{warp}(I_2, g_{t-1})), g_{t-1}), \] \[ g_t = g_{t-1} + \Delta g_t, \]

where \(F_{\text{enc}}\) denotes the spatiotemporal ViT that takes the (warped) image pair as input and returns features corresponding to the source image. The source-image features, together with the current prediction, are fed to the decoder \(F_{\text{dec}}\) to predict the residual. The decoder is instantiated as a ConvGRU unit, following RAFT. The warping operation resamples the target image according to the geometric property \(g_{t-1}\). For optical flow estimation and stereo matching this warping is straightforward; for 3D depth estimation, we first convert the depth representation to pixel displacement using the camera parameters and convert the prediction back to depth afterwards. The predicted residual is then added to the previous-step prediction. See Figure 1(b) for an illustration.
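Below is a minimal PyTorch sketch of this refinement loop for the optical flow case. The `encoder` and `decoder` callables stand in for \(F_{\text{enc}}\) and the ConvGRU-based \(F_{\text{dec}}\); their names, signatures, and the hidden-state handling are assumptions for illustration, and the backward warping follows the standard grid-sample formulation rather than any specific released code.

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    """Backward-warp the target image toward the source view under the current flow.

    img:  [B, 3, H, W] target image I2.
    flow: [B, 2, H, W] current flow estimate g_{t-1} (source -> target, in pixels).
    """
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=img.device),
                            torch.arange(w, device=img.device), indexing="ij")
    coords = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow   # sample locations
    # Normalize to [-1, 1]; grid_sample expects a [B, H, W, 2] grid in (x, y) order.
    gx = 2.0 * coords[:, 0] / max(w - 1, 1) - 1.0
    gy = 2.0 * coords[:, 1] / max(h - 1, 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(img, grid, mode="bilinear",
                         padding_mode="zeros", align_corners=True)

def iterative_refinement(encoder, decoder, img1, img2, num_iters=8):
    """g_t = g_{t-1} + F_dec(F_enc(I1, warp(I2, g_{t-1})), g_{t-1})."""
    b, _, h, w = img1.shape
    g = torch.zeros(b, 2, h, w, device=img1.device)     # g_0: zero flow
    hidden = None                                       # ConvGRU hidden state
    predictions = []
    for _ in range(num_iters):
        img2_warped = backward_warp(img2, g)
        src_feat = encoder(img1, img2_warped)           # source-image features
        delta_g, hidden = decoder(src_feat, g, hidden)  # residual correction
        g = g + delta_g
        predictions.append(g)
    return predictions
```

For stereo matching the same loop applies with a one-dimensional (horizontal) displacement, and for depth estimation the prediction is converted to pixel displacement before warping and back to depth afterwards, as described above.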

Experiments

We conduct comprehensive qualitative and quantitative experiments on optical flow estimation, stereo matching, and 3D depth estimation.

Experiments on optical flow estimation


Table 1. Experiments on the Sintel and KITTI datasets. 'A' denotes training on the AutoFlow dataset. 'C + T' denotes training sequentially on the FlyingChairs and FlyingThings datasets. Marked entries are evaluated with the tiling technique.

Table 2. Experiments on the Sintel benchmark. 'C + T + S + K + H' denotes finetuning on the combined Sintel, KITTI, and HD1K training sets after 'C + T' training. * denotes methods with the warm-start strategy proposed in RAFT, where the flow is initialized from the estimate of the previous image pair. Marked entries are evaluated with the tiling technique. Our approach ranks 1st on the Sintel benchmark.

Table 3. Experiments on the KITTI benchmark. 'C + T + S + K + H' denotes finetuning on the combined Sintel, KITTI, and HD1K training sets after 'C + T' training. Marked entries are evaluated with the tiling technique.

Figure 2. Visualized prediction comparison on the Sintel (clean) dataset. Our approach produces more fine-grained estimates (human shoulder region in case #1), higher recall of small objects (bird region in case #2), and crisper motion boundaries (case #2). The highlighted regions are zoomed in for better visual comparison.

Experiments on stereo matching

Table 4. Stereo performance on ETH3D stereo test set. Our method achieves superior performance on two of the three metrics.

Experiments on two-view depth estimation

Table 5. Depth performance on RGBD-SLAM, SUN3D and Scenes11 test datasets. Our approach obtains better or comparable performance.

BibTeX