For several years now, Google Street View has brought 360° photo “maps” into the homes of average citizens. With a few mouse clicks, internet users can drag “peg man” through the cities of the world, exploring a virtual world of photographs stitched together. Google’s system is optimized to rapidly capture and serve a high volume of 2D images.
But professionals such as emergency responders, military technicians, and urban planners need higher-quality imagery augmented by three-dimensional data. Researchers at the Vis Center believe that fusing 3D point clouds with 2D photographic imaging can meet that need.
Currently there are two types of 3D mapping systems: aerial and stationary. Aerial systems are fast, but their 3D point clouds tend to be very sparse (1 to 2 points per meter). Stationary systems capture much denser point clouds but gather that information far more slowly.
Led by Dr. Ruigang Yang, the Vis Center research team has assembled a mobile scanning system that overcomes both the speed and the quality limitations of previous methods. Dr. Yang’s system consists of two LIDAR (laser ranging) sensor heads, a GPS and inertial measurement unit, a spherical digital camera, and processing software. Currently, the team can gather scans twenty times denser than aerial systems while moving much faster than current stationary systems.
The process begins when a laptop inside the vehicle triggers the Ladybug camera to take a picture. The Ladybug sends a time-stamp trigger to the PCS computer that controls the Optech GPS/LIDAR system; the computer reads the corresponding time stamp from the Optech system and returns it to the laptop, giving each Ladybug image a time stamp on the LIDAR’s clock.
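Once every camera frame carries a time stamp on the LIDAR clock, pairing images with scan data reduces to a nearest-timestamp lookup. The sketch below is a minimal illustration of that matching step, not the team’s actual software; the function names and the flat lists of timestamps are assumptions for the example.

```python
import bisect

def nearest_timestamp(sorted_stamps, query):
    """Return the value in sorted_stamps closest to query."""
    i = bisect.bisect_left(sorted_stamps, query)
    # The nearest neighbor is either just before or just after the
    # insertion point; compare the (at most two) candidates.
    candidates = sorted_stamps[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda t: abs(t - query))

def match_frames_to_scans(frame_stamps, scan_stamps):
    """Pair each camera frame time with the nearest LIDAR scan time."""
    scans = sorted(scan_stamps)
    return {f: nearest_timestamp(scans, f) for f in frame_stamps}
```

For example, camera frames at 0.10 s and 0.55 s against LIDAR times [0.0, 0.5, 1.0] would pair with 0.0 and 0.5 respectively.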
The LIDAR system gathers time, location, angle, intensity, and distance information for 200 points per second while the camera captures photographic images. Each photograph is then used to colorize the dense 3D point cloud generated by the LIDAR. This fusion of active scanning (LIDAR) with high-quality, high-volume passive scanning (photography) provides the rapid, high-quality scan that could be useful in many emergency situations. The technology also has potential applications in geological and archaeological studies, construction, city planning, law enforcement, survey mapping, and 3D imaging.
But challenges remain. In dense urban areas, large buildings often interfere with the GPS signal, introducing errors into the point cloud. The team is also working to stretch a photographic “skin” over the 3D point cloud, filling the holes inherent in point-cloud data. Eventually, the hope is to create a system in which users can even apply their own photos onto a 3D background.
Currently, they are working to create a cityscape database of scans for the city of Lexington, KY, with scans gathered over a full twenty-four-hour period. With this additional element of time, the scans become four-dimensional, allowing the simulation of day and night in the 3D colorized images. Once completed, the database will be shared to enable research beyond the fields of graphics and imaging, with possible applications in data compression, transmission, visualization, indexing and retrieval, and computational geometry.
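With scans tagged by capture time, simulating a chosen hour of day reduces to retrieving the scan captured nearest that hour, remembering that time of day wraps around midnight. The sketch below illustrates that retrieval step under an assumed `(capture_hour, scan_id)` schema; it is not part of the described database.

```python
def closest_scan_by_time_of_day(scans, query_hour):
    """Pick the scan captured closest to query_hour (0-24),
    treating time of day as circular (23:00 is one hour from 0:00).

    scans: list of (capture_hour, scan_id) pairs -- hypothetical schema.
    """
    def circular_distance(a, b):
        d = abs(a - b) % 24.0
        return min(d, 24.0 - d)

    return min(scans, key=lambda s: circular_distance(s[0], query_hour))[1]
```

Without the wraparound, a query at 23:30 would wrongly prefer a midday scan over one captured at 2:00 a.m.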
But for now, “we just focus on getting high quality models,” says Dr. Yang. “Getting the geometry correct is the first step to getting realistic 3D visualization.”