The University of Kentucky Center for Visualization & Virtual Environments

Taking Google Street View Technology to the Next Level

For several years now, Google Street View has brought 360° photo “maps” into the homes of average citizens. With a few mouse clicks, internet users can drag “peg man” through the cities of the world, exploring a virtual world of photographs that have been stitched together. Google’s system is optimized to rapidly capture and serve a high volume of 2D images.

But professionals such as emergency responders, military technicians, and urban planners need higher-quality imagery augmented with three-dimensional data. Researchers at the Vis Center believe that fusing 3D point clouds with 2D photographic imagery can meet that need.

Currently there are two types of 3D mapping systems: aerial and stationary. Aerial systems are fast, but their 3D point clouds tend to be very sparse (1 to 2 points per meter). Stationary systems capture much denser point clouds but work far more slowly to gather that information.

The Vis Center

Led by Dr. Ruigang Yang, the Vis Center research team has assembled a mobile scanning system that overcomes both the speed and quality limitations of previous methods. Dr. Yang’s system consists of two LIDAR (laser ranging) sensor heads, a GPS and inertial measurement unit, a spherical digital camera, and processing software. Currently, the team is able to gather scans twenty times denser than those of aerial systems while moving much faster than current stationary systems.

The LIDAR sensors mounted on top of the vehicle.


The Ladybug spherical digital camera.


The process begins when a laptop inside the vehicle tells the Ladybug camera to take a picture. The Ladybug sends a “time stamp” trigger to the PCS computer that controls the Optech GPS/LIDAR system; the computer retrieves the corresponding time stamp from the Optech system and returns it to the laptop, which records it as the time stamp for the Ladybug image.
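The point of this handshake is that every image ends up stamped on the same clock as the GPS/IMU trajectory, so each photograph can later be matched to a vehicle pose. A common way to do that matching offline is to linearly interpolate the recorded positions at each image time stamp. The sketch below illustrates the idea only; the function and variable names are hypothetical and do not come from the Vis Center’s software.

```python
import numpy as np

def positions_at(image_times, pose_times, positions):
    """Linearly interpolate the vehicle position at each image time stamp.

    pose_times must be sorted ascending (seconds on the shared GPS clock);
    positions is an N x 3 array of x, y, z samples from the GPS/IMU unit.
    Returns an M x 3 array, one interpolated position per image.
    """
    return np.column_stack([
        np.interp(image_times, pose_times, positions[:, k])
        for k in range(positions.shape[1])
    ])
```

Because both devices share one clock, a simple one-dimensional interpolation per axis is enough to place each photograph along the driven trajectory.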

The LIDAR system gathers time, location, angle, intensity, and distance information for 200 points per second while the camera captures photographic images. Each photograph is then used to colorize the dense 3D point cloud generated by the LIDAR. This fusion of active scanning (LIDAR) with high-quality, high-volume passive scanning (photography) provides the rapid, high-quality scans that could be useful in many emergency situations. The technology also has potential applications in geological and archaeological studies, construction, city planning, law enforcement, survey mapping, and 3D imaging.
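The colorization step can be sketched as projecting each 3D point into the spherical (equirectangular) panorama and looking up the pixel underneath it. This is a minimal illustration under simplifying assumptions that are not stated in the article: the camera and LIDAR are treated as sharing one origin and orientation, and all names here are hypothetical rather than taken from the actual pipeline.

```python
import numpy as np

def colorize_points(points, panorama):
    """Attach an RGB color to each 3D point by projecting it into an
    equirectangular panorama (hypothetical simplification: camera and
    LIDAR share a single origin and orientation, with z pointing up).

    points is N x 3 (x, y, z); panorama is H x W x 3.
    Returns an N x 6 array: x, y, z, r, g, b.
    """
    h, w, _ = panorama.shape
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    radius = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)          # -pi..pi around the vertical axis
    elevation = np.arcsin(z / radius)   # -pi/2..pi/2 above the horizon
    # Map angles to pixel coordinates in the equirectangular image.
    u = ((azimuth / (2 * np.pi)) + 0.5) * (w - 1)   # column index
    v = (0.5 - elevation / np.pi) * (h - 1)         # row index
    colors = panorama[v.round().astype(int), u.round().astype(int)]
    return np.hstack([points, colors])
```

In a real system the camera and LIDAR have different mount positions, so each point would first be transformed into the camera frame using a calibrated rigid transform before this angular lookup.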

But challenges remain. In dense urban areas, large buildings often interfere with the GPS signal, introducing errors into the point cloud. The team is also working to stretch a photographic “skin” over the 3D point cloud, filling in the holes inherent to point-cloud data. Eventually, the hope is to create a system in which users could even apply their own photos onto a 3D background.

The LIDAR truck in downtown Lexington, KY.


Currently, the team is working to create a cityscape database of scans for the city of Lexington, KY, gathered over a full twenty-four-hour period. With this additional element of time, the scans become four-dimensional, allowing for the simulation of day and night in the 3D colorized images. Once completed, the database will be shared to enable research beyond the fields of graphics and imaging, with possible applications in data compression, transmission, visualization, indexing and retrieval, and computational geometry.

Dr. Ruigang Yang, Associate Professor of Computer Science, University of Kentucky


But for now, “we just focus on getting high quality models,” says Dr. Yang. “Getting the geometry correct is the first step to getting realistic 3D visualization.”

Multidisciplinary research at the Vis Center featured in the Chronicle of Higher Education

Dr. Brent Seales and Dr. Bill Endres are leading the effort to use 21st century imaging technology to preserve and learn more about an 8th century manuscript.

This month, The Chronicle of Higher Education features research conducted by faculty in the Vis Center. Bill Endres, a literary scholar, and Brent Seales, a computer scientist, both of the University of Kentucky, have brought 21st-century digital imaging to Lichfield to study and help preserve one of its most ancient treasures: the eighth-century illustrated Latin manuscript known as the St. Chad Gospels. Read the article

Documentary produced by Vis Center begins airing on KET

The Media for Research Lab within the Vis Center and the Department of Mining Engineering in the UK College of Engineering worked together for over a year on a documentary presenting a balanced picture of coal in Kentucky. They worked with the Cabinet for Energy and Environment, coal industry professionals, the Sierra Club, Kentuckians for the Commonwealth, the Mountain Association for Community Economic Development (MACED), and others to pull together the content for the film, shooting video all over the state.

The goal of the film is to examine the significance of this history, what it means today, and how Kentucky can move forward to responsibly mine coal while protecting the health, safety, and welfare of its citizens, the environment, and the economy for the future.

The film will air several times on the KET network:
11/23 9:00PM KET
11/24 10:00PM KET KY
11/25 3:00AM KET
11/26 9:00AM KET KY
11/29 5:00AM KET KY
