In recent years, modeling complex real-world objects and scenes using cameras has been an active research topic in both graphics and vision. Researchers have created 3D models of flowers, trees, hair, urban buildings, human motion, and cloth. However, water has not previously been successfully reconstructed from video. Water's complex shape causes even the best matching methods to yield poor depth maps, and its dynamic nature and complex topological changes over time make manual refinement too tedious for most applications.
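To see why matching quality matters so much, recall the standard stereo triangulation relation: depth is inversely proportional to disparity (depth = focal length × baseline / disparity), so a small matching error on a low-texture, refractive surface like water produces a large depth error. The sketch below is illustrative only; the focal length and baseline values are hypothetical, not taken from the project's camera rig.

```python
def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.1):
    """Triangulate depth (meters) from a stereo disparity (pixels).

    focal_px and baseline_m are illustrative placeholder values,
    not parameters of any specific capture setup.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

true_disparity = 10.0                    # correct match for a point 10 m away
noisy_disparity = true_disparity - 1.0   # a single-pixel matching error

print(depth_from_disparity(true_disparity))   # 10.0
print(depth_from_disparity(noisy_disparity))  # ~11.11: 1 px error -> >1 m depth error
```

Because the depth error grows as disparity shrinks, distant or weakly textured regions (exactly where water surfaces tend to confuse matchers) are hit hardest.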
Project members Huamin Wang, Miao Liao, Qing Zhang, Ruigang Yang, and Greg Turk, in partnership with the Georgia Institute of Technology, have created an image-based reconstruction framework for modeling real water scenes captured on stereoscopic video. The goal of the project is to efficiently reconstruct realistic, physically sound 3D fluid animations. Experiments have shown that the system can produce results that faithfully inherit the nuances and details of fluids from ordinary video input. The system could also allow artists to design and modify a coarse initial shape to create stylized animations, with potential applications in feature-film special effects and video game content creation.