Grow with the Flow:
4D Reconstruction of Growing Plants with Gaussian Flow Fields

1 University of Toronto 2 Vector Institute 3 Simon Fraser University

Abstract

Modeling the time-varying 3D appearance of plants during their growth poses unique challenges: unlike many dynamic scenes, plants generate new geometry over time as they expand, branch, and differentiate. Recent motion modeling techniques are ill-suited to this problem setting. For example, deformation fields cannot introduce new geometry, and 4D Gaussian splatting constrains motion to a linear trajectory in space and time and cannot track the same set of Gaussians over time. Here, we introduce a 3D Gaussian flow field representation that models plant growth as a time-varying derivative over Gaussian parameters---position, scale, orientation, color, and opacity---enabling nonlinear and continuous-time growth dynamics. To initialize a sufficient set of Gaussian primitives, we reconstruct the mature plant and learn a process of reverse growth, effectively simulating the plant’s developmental history in reverse. Our approach achieves superior image quality and geometric accuracy compared to prior methods on multi-view timelapse datasets of plant growth, providing a new approach for appearance modeling of growing 3D structures.
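The abstract's core idea is a flow field that outputs a time-varying derivative over Gaussian parameters, which is integrated through time to produce nonlinear, continuous growth. The following is a minimal sketch of that integration idea, with the learned network replaced by a hand-written toy derivative function; the parameter names and dynamics here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def flow_field(params, t):
    """Toy stand-in for the learned flow network: returns d(params)/dt
    for each Gaussian parameter group (position, scale, opacity)."""
    return {
        "position": np.tile([0.0, 0.0, 0.1], (params["position"].shape[0], 1)),  # upward growth
        "scale": 0.05 * params["scale"],              # gradual expansion
        "opacity": 0.2 * (1.0 - params["opacity"]),   # fade in toward fully opaque
    }

def integrate(params, t0, t1, n_steps=100):
    """Forward-Euler integration of Gaussian parameters from t0 to t1."""
    params = {k: v.copy() for k, v in params.items()}
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        d = flow_field(params, t)
        for k in params:
            params[k] = params[k] + dt * d[k]
        t += dt
    return params

# A single Gaussian: 3D position, per-axis scale, scalar opacity.
g0 = {"position": np.zeros((1, 3)),
      "scale": np.full((1, 3), 0.01),
      "opacity": np.array([0.5])}
g1 = integrate(g0, t0=0.0, t1=1.0)
```

Because the derivative, not the trajectory, is what is modeled, the same machinery supports nonlinear paths and evaluation at any continuous time, unlike a fixed linear space-time trajectory per Gaussian.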


Method

[Figure: method architecture overview]

a) Our method first optimizes a set of 3D Gaussians on the fully grown plant. b) Starting from these optimized Gaussians, we progressively train the dynamics model to reconstruct the state of the plant at each timestep. After reconstructing each timestep, we cache its Gaussians and use them as the initial conditions for optimizing the next timestep. c) During the global optimization step, we randomly sample a timestep t_k and integrate to t_{k+1}, using the cached Gaussians from the boundary reconstruction step as initial conditions. We then optimize the dynamics model to enforce consistency between rendered and captured measurements.
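The global optimization step described in (c) can be sketched as a loop that samples a timestep, integrates the cached state forward one interval, and updates the dynamics model against the cached target. Everything below is a deliberately simplified stand-in: the cached states are random toy positions, the dynamics model is a single scalar, and the rendering loss is replaced by a parameter-space squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
timesteps = [0.0, 0.25, 0.5, 0.75, 1.0]
# Stand-in for cached Gaussian states from the boundary reconstruction step.
cache = {t: rng.normal(size=(4, 3)) for t in timesteps}

def integrate(x, t0, t1, theta):
    # Toy linear dynamics dx/dt = theta * x, integrated in closed form.
    return x * np.exp(theta * (t1 - t0))

def global_step(theta, lr=0.01):
    """Sample t_k, integrate cached state to t_{k+1}, and nudge the
    dynamics parameter to reduce the mismatch with the cached target."""
    k = rng.integers(0, len(timesteps) - 1)
    t0, t1 = timesteps[k], timesteps[k + 1]
    target = cache[t1]
    # Finite-difference gradient of the squared error w.r.t. theta
    # (a real implementation would backpropagate through the integrator).
    eps = 1e-4
    loss = np.mean((integrate(cache[t0], t0, t1, theta) - target) ** 2)
    loss_eps = np.mean((integrate(cache[t0], t0, t1, theta + eps) - target) ** 2)
    grad = (loss_eps - loss) / eps
    return theta - lr * grad, loss

theta = 0.1  # hypothetical scalar dynamics-model parameter
for _ in range(50):
    theta, loss = global_step(theta)
```

Sampling interval endpoints from the cache, rather than always integrating from t=0, keeps each optimization step short and anchors the dynamics model to every reconstructed boundary state.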


Hardware Prototype

The hardware system consists of a Raspberry Pi-mounted camera imaging a plant on an automated turntable. Multi-view image measurements of the plant are captured automatically at 15-minute intervals without any human intervention.
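One multi-view sweep of this setup amounts to stepping the turntable through a full rotation and capturing a frame at each stop. The sketch below shows that control flow with placeholder functions; the number of views, function names, and filename scheme are assumptions, since the page does not specify the controller interface.

```python
CAPTURE_INTERVAL_S = 15 * 60   # 15-minute intervals, as in the described setup
N_VIEWS = 24                   # hypothetical number of turntable stops

def rotate_turntable(step_deg):
    """Placeholder for the turntable controller (hardware-specific)."""
    pass

def capture_image(view_idx, t):
    """Placeholder for the Raspberry Pi camera capture (e.g. via a
    camera library such as picamera2); returns a filename here."""
    return f"view{view_idx:02d}_t{t}.jpg"

def capture_round(t):
    """One multi-view sweep: rotate, shoot, repeat around 360 degrees."""
    frames = []
    for v in range(N_VIEWS):
        rotate_turntable(360 / N_VIEWS)
        frames.append(capture_image(v, t))
    return frames

frames = capture_round(t=0)
# In deployment this runs in a loop, sleeping CAPTURE_INTERVAL_S between sweeps.
```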


Synthetic Results

Interpolated Novel View Renders

We compare our method against baselines on interpolated novel view synthesis across seven scenes: Clematis, Tulip, and Plant1-5. All methods are trained on 12 equally spaced timesteps out of 70 total captured timesteps.

[Video grid: novel view renders from GT, Ours, Dynamic 3DGS, 4D-GS, and 4DGS for the Clematis, Tulip, and Plant1-5 scenes]


Interpolated Point Cloud Trajectories

We compare interpolated point cloud trajectories across methods. Models are trained on 12 equally spaced timesteps and evaluated on trajectory interpolation across all 70 timesteps.

[Video grid: interpolated point cloud trajectories from Ours, Dynamic 3DGS, 4D-GS, and 4DGS for the Clematis, Tulip, and Plant1-5 scenes]


Captured Results

Interpolated Novel View Renders

We compare our method against baselines on interpolated novel view synthesis on the captured Rose and Corn scenes. For Rose, all methods are trained on 6 equally spaced timesteps out of 86 captured timesteps; for Corn, all methods are trained on 8 equally spaced timesteps out of 71 captured timesteps.

[Video grid: novel view renders from GT, Ours, Dynamic 3DGS, 4D-GS, and 4DGS for the Rose and Corn scenes]


Interpolated Point Cloud Trajectories

[Video grid: interpolated point cloud trajectories from Ours, Dynamic 3DGS, 4D-GS, and 4DGS for the Rose and Corn scenes]