Modeling the time-varying 3D appearance of plants during growth poses unique challenges: unlike most dynamic scenes, plants continuously generate new geometry as they expand, branch, and differentiate. Existing dynamic scene representations are ill-suited to this setting: deformation fields provide insufficient constraints to yield physically plausible scene dynamics, and 4D Gaussian splatting represents the same physical structures with different Gaussian primitives at different times, breaking temporal consistency. We introduce GrowFlow, a dynamic representation that couples 3D Gaussian primitives with a neural ordinary differential equation to model plant growth as a continuous flow field over geometric parameters (position, scale, and orientation). Our representation enables consistent appearance rendering and models nonlinear, continuous-time growth dynamics with full temporal correspondences for every primitive. To initialize a sufficient set of Gaussian primitives, we first reconstruct the mature plant and then learn a reverse-growth process, effectively simulating the plant's developmental history in reverse. GrowFlow achieves superior image quality and geometric coherence compared to prior methods on a new, multi-view timelapse dataset of plant growth, and provides the first temporally coherent representation for appearance modeling of growing 3D structures.
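The core idea of modeling growth as a continuous flow field over Gaussian parameters can be sketched as follows. This is a minimal illustration, not the authors' implementation: `velocity_field` is a hypothetical stand-in for the learned neural ODE dynamics, the 10-D state packs position, scale, and orientation per primitive, and forward-Euler integration stands in for a proper ODE solver.

```python
import numpy as np

def velocity_field(state, t, W):
    # Hypothetical stand-in for the learned neural ODE dynamics: maps each
    # Gaussian's geometric parameters (position, log-scale, orientation,
    # flattened to 10-D) to a time derivative. The (1 - t) factor is an
    # illustrative choice making growth slow as the plant matures.
    return np.tanh(state @ W) * (1.0 - t)

def integrate_growth(state, t0, t1, W, n_steps=100):
    # Forward-Euler integration of the flow field. Because the same
    # primitives are advected through time, every Gaussian keeps its
    # identity, yielding dense temporal correspondences for free.
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        state = state + dt * velocity_field(state, t, W)
        t += dt
    return state

rng = np.random.default_rng(0)
gaussians = rng.normal(size=(5, 10))      # 5 primitives, 10 params each
W = rng.normal(size=(10, 10)) * 0.1       # toy dynamics weights
grown = integrate_growth(gaussians, 0.0, 1.0, W)
```

In the actual method the velocity field would be a neural network optimized against rendered images, and integration could run in either direction in time, which is what enables the reverse-growth initialization described above.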
a) Our method first optimizes a set of 3D Gaussians on the fully-grown plant. b) Using the optimized 3D Gaussians from the fully-grown plant, we progressively train the dynamics model to learn the state of the plant at each timestep. After each reconstructed timestep, we cache the Gaussians for that timestep and use them as initial conditions when optimizing the next timestep. c) During the global optimization step, we randomly sample a timestep t_k and integrate to t_{k+1}, using the cached Gaussians from the boundary reconstruction step as initial conditions. We then optimize the dynamics model to enforce consistency between rendered and captured measurements.
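The two training stages above can be sketched schematically. This is a simplified illustration under stated assumptions: `step_flow` is a hypothetical one-step integrator of the dynamics model, the states are toy arrays rather than real Gaussian parameters, and the rendering loss is replaced by a simple residual against the cached state.

```python
import numpy as np

def step_flow(state, dt):
    # Hypothetical stand-in for one integration step of the dynamics model.
    return state + dt * np.tanh(state)

# --- Boundary reconstruction: starting from the fully-grown plant, march
# backward through the timesteps, caching the Gaussian state at each one. ---
timesteps = np.linspace(0.0, 1.0, 12)
cache = {}
state = np.ones((4, 3))                  # toy "fully-grown" Gaussian state
cache[len(timesteps) - 1] = state
for k in range(len(timesteps) - 2, -1, -1):
    # Negative dt: integrating the flow in reverse simulates reverse growth.
    state = step_flow(state, timesteps[k] - timesteps[k + 1])
    cache[k] = state

# --- Global optimization: sample a random timestep t_k, integrate the cached
# state forward to t_{k+1}, and (in the real method) compare renderings
# against captured images to refine the dynamics model. ---
rng = np.random.default_rng(0)
k = int(rng.integers(0, len(timesteps) - 1))
predicted = step_flow(cache[k], timesteps[k + 1] - timesteps[k])
residual = float(np.linalg.norm(predicted - cache[k + 1]))
```

Caching per-timestep states keeps each global-optimization step short: integration only spans one interval rather than the whole sequence.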
The hardware system consists of a camera attached to a Raspberry Pi imaging a plant that sits on an automated turntable. Multi-view image measurements of the plant are automatically captured at 15-minute intervals without any human intervention.
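A capture cycle of this kind might be orchestrated as below. This is a hypothetical sketch, not the authors' capture code: `rotate_fn` and `shoot_fn` are placeholder hooks for the turntable driver and the Pi camera, and the number of turntable stops is assumed.

```python
import time

CAPTURE_INTERVAL_S = 15 * 60   # 15-minute interval between multi-view captures
NUM_VIEWS = 8                  # hypothetical number of turntable stops

def capture_multiview(rotate_fn, shoot_fn, num_views=NUM_VIEWS):
    # One multi-view capture: step the turntable through equal angles,
    # taking one photo at each stop.
    images = []
    for i in range(num_views):
        rotate_fn(360.0 / num_views)
        images.append(shoot_fn(i))
    return images

# Dry run with stub hardware hooks; the real system would sleep
# CAPTURE_INTERVAL_S seconds between successive capture cycles.
angles = []
shots = capture_multiview(lambda a: angles.append(a), lambda i: f"view_{i}.jpg")
next_capture_time = time.time() + CAPTURE_INTERVAL_S
```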
We compare our method against baselines on interpolated novel view synthesis across seven scenes: Clematis, Tulip, and Plant1-5. All methods are trained on 12 equally spaced timesteps out of 70 total captured timesteps.
(Video panels: GT / Ours / Dynamic 3DGS / 4D-GS / 4DGS, shown for Clematis, Tulip, and Plant1–5.)
We compare interpolated point cloud trajectories across methods. Models are trained on 12 equally spaced timesteps and evaluated on trajectory interpolation across all 70 timesteps.
(Point cloud trajectory panels: Ours / Dynamic 3DGS / 4D-GS / 4DGS, shown for Clematis, Tulip, and Plant1–5.)
We compare our method against baselines on interpolated novel view synthesis on the Paperwhite, Blooming Flower, and Corn scenes. For Paperwhite, all methods are trained on 8 equally spaced timesteps out of 50 captured timesteps; for Blooming Flower, on 6 out of 86; and for Corn, on 8 out of 71.
(Novel-view panels: GT / Ours / Dynamic 3DGS / 4D-GS / 4DGS for Paperwhite, Blooming Flower, and Corn.)
(Additional comparison panels: Ours / Dynamic 3DGS / 4D-GS / 4DGS for Paperwhite, Blooming Flower, and Corn.)
@article{luo2026grow,
title = {Grow with the Flow: 4D Reconstruction of Growing Plants with Gaussian Flow Fields},
author = {Luo, Weihan and Goli, Lily and Bahmani, Sherwin and Taubner, Felix and Tagliasacchi, Andrea and Lindell, David B.},
journal = {arXiv preprint arXiv:2602.08958},
year = {2026}
}