The Vis Center’s innovative high-definition projection technology, originally developed for non-theatrical use, will be used for the first time in a theatrical setting in the UK production, followed by the Atlanta Opera production.
The technology was originally developed at the Vis Center through a partnership with Fort Knox. Its initial application was military: building rapidly deployable, high-resolution screens for use in training or battle. Other potential uses include any environment that needs a mobile, convenient display, from schools and museums to medical applications.
While front- and rear-projected backdrops are nothing new to theatre, they can cause problems for the set design and for the performers. Standard front projectors can cast shadows and images onto the performers, and most rear projectors must be placed a great distance behind the screen to create a large enough image of scenery, which can limit stage space. With the Vis Center’s new rear projection system, only four and a half feet separate the 54 projector units from their attached movable fabric screens, which measure an impressive 24×30′ and 24×15′.
The technology, dubbed SCRIBE (self-contained rapidly integratable background environment) by the Vis Center, uses a software system that blends the projections into one seamless image, which will include still images and video related to the various scenes in the production.
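The article does not detail how SCRIBE’s software merges 54 projections into one image, but a standard technique for multi-projector tiling is edge blending: neighbouring projectors overlap slightly, and each fades its output across the overlap so the combined brightness stays constant. The sketch below illustrates that idea only; the function names and the gamma value are assumptions, not the Vis Center’s actual implementation.

```python
import numpy as np

def edge_blend_ramp(width, overlap, gamma=2.2):
    """Per-column blend weights for a projector whose right edge
    overlaps its neighbour by `overlap` pixels.

    Weights fall from 1 to 0 across the overlap; the neighbouring
    projector applies the mirror-image ramp, so the summed light
    output stays roughly constant across the seam.
    """
    w = np.ones(width)
    t = np.linspace(1.0, 0.0, overlap)         # linear fade 1 -> 0
    w[width - overlap:] = t ** (1.0 / gamma)   # compensate display gamma
    return w

def blend(image, ramp):
    """Apply the per-column ramp to an H x W x 3 image."""
    return image * ramp[np.newaxis, :, np.newaxis]

# Example: a 100-pixel-wide projector tile with a 20-pixel overlap.
ramp = edge_blend_ramp(100, 20)
tile = blend(np.ones((10, 100, 3)), ramp)
```

A real system would also warp each projector’s image to correct for lens and screen geometry before blending, but the fade-across-the-overlap step is the core of making many projectors read as one picture.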
This project grew out of the synergy made possible by multi-disciplinary research collaboration. Vis Center director Dr. Brent Seales met UK Opera director Everett McCorvey by chance when both were speaking at a luncheon hosted by Mrs. Patsy Todd. Both quickly grasped the possibilities of collaboration, and over the next year the idea of using this technology in the opera production emerged.
Dr. Seales states that this type of multi-disciplinary research is the goal of the Vis Center. “We plan to see more of these types of real applications of our technology continue to take place as we work with other researchers across the University in the future. The possibilities are amazing if you consider what research can do when people step outside of their regular environments to interact with those with a distinctly different background.”
Bill Gregory, lead engineer for the Vis Center, reflected on the value of applying his technical ability to the theatre production: “It’s been fascinating to work with the theatre crew. As an engineer I am focused on the practical results and never look at the artistic aspect, while they didn’t realize what technology could be used to achieve their artistic ends. We didn’t know what problems existed for them, and they didn’t know what to ask for until we collaborated.”
The images, captured and edited by the Vis Center team, depict real locations in Charleston, SC, and the islands off the North Carolina coast. Actual hurricane footage from The Weather Channel will be used as well. Combining these projected images with a minimal amount of three-dimensional scenery will create a vibrant and exciting production.
The projection system has already drawn interest from other opera and theatre companies around the country.
For several years now, Google Street View has brought 360° photo “maps” into the homes of average citizens. With a few mouse clicks, internet users can drag “peg man” through the cities of the world, exploring a virtual world of photographs stitched together. Google’s system is optimized to rapidly capture and serve a high volume of 2D images.
But professionals such as emergency responders, military technicians, and urban planners need a higher-quality image augmented by three-dimensional data. Researchers at the Vis Center believe that the synthesis of 3D point clouds and 2D photo imaging will provide the answer to that problem.
Currently there are two types of 3D mapping systems: aerial and stationary. Aerial systems are high speed, but their 3D point clouds tend to be very sparse (1 to 2 points per meter). Stationary systems capture more dense point clouds but work more slowly to gather that information.
Led by Dr. Ruigang Yang, the Vis Center research team has assembled a mobile scanning system that overcomes both the speed and quality issues of previous methods. Dr. Yang’s system consists of two LIDAR (laser ranging) sensor heads, a GPS and inertial measurement unit, a spherical digital camera, and processing software. Currently, the team is able to gather scans twenty times denser than those of aerial systems while moving much faster than current stationary systems.
The process begins when a laptop inside the vehicle tells the Ladybug camera to take a picture. The Ladybug sends a “time stamp” trigger to the PCS computer that controls the Optech GPS/LIDAR system. That computer retrieves the time stamp from the Optech system and sends it back to the laptop, providing the time stamp for the Ladybug image.
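The point of this handshake is that every photograph ends up carrying a time stamp on the same clock as the LIDAR and GPS records, so each image can later be paired with the scanner pose closest to it in time. A minimal sketch of that pairing step, assuming sorted timestamps in seconds (the function name and data are hypothetical, not the team’s software):

```python
import bisect

def nearest_record(lidar_times, photo_time):
    """Index of the LIDAR record closest in time to a photo.

    lidar_times must be sorted in ascending order; both it and
    photo_time are timestamps on the same shared clock.
    """
    i = bisect.bisect_left(lidar_times, photo_time)
    # Compare the records just before and just after the photo time.
    candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_times)]
    return min(candidates, key=lambda j: abs(lidar_times[j] - photo_time))

# LIDAR records every 5 ms; a photo stamped t = 0.0123 s pairs
# with the record at t = 0.010 s (index 2).
times = [0.000, 0.005, 0.010, 0.015, 0.020]
idx = nearest_record(times, 0.0123)
```

A production system would likely interpolate the vehicle pose between the two bracketing records rather than snap to the nearest one, but the shared time stamp is what makes either approach possible.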
The LIDAR system gathers time, location, angle, intensity, and distance information for 200 points per second, while the camera is gathering photographic images. The photograph is then used to colorize the dense 3D point cloud generated by the LIDAR. This unique fusion of active scanning (LIDAR) with high-quality, high-volume passive scanning (photography) provides the rapid, high-quality scan that could be useful in many emergency situations. This technology also has potential application for geological and archaeological studies, construction, city planning, law enforcement, survey mapping, and 3D imaging.
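Colorizing a point cloud from a photograph amounts to projecting each 3D point into the image and sampling the pixel it lands on. The sketch below uses a simple pinhole camera model to show the idea; a spherical camera like the Ladybug needs a different projection model, and all names here are illustrative assumptions rather than the Vis Center’s code.

```python
import numpy as np

def colorize(points, image, K, R, t):
    """Assign each 3D point the colour of the pixel it projects to.

    points : (N, 3) world coordinates
    image  : (H, W, 3) photograph
    K      : (3, 3) camera intrinsic matrix
    R, t   : camera rotation (3, 3) and translation (3,)

    Points behind the camera or outside the frame keep colour (0, 0, 0).
    """
    h, w = image.shape[:2]
    cam = points @ R.T + t                  # world -> camera frame
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    in_front = cam[:, 2] > 0                # discard points behind camera
    uvw = cam[in_front] @ K.T               # apply intrinsics
    u = (uvw[:, 0] / uvw[:, 2]).astype(int) # perspective divide
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)
    ok = (0 <= u) & (u < w) & (0 <= v) & (v < h)
    idx = np.flatnonzero(in_front)[ok]
    colors[idx] = image[v[ok], u[ok]]
    return colors
```

Because the LIDAR delivers far fewer samples than the camera delivers pixels, this direction of the fusion (sparse geometry, dense colour) is what lets a modest 200-points-per-second scan still render as a photographic scene.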
But challenges still remain. In dense urban areas, large buildings often interfere with the GPS signal, introducing errors into the point cloud. The team is also working to stretch a photographic “skin” over the 3D point cloud, filling in the holes inherent to such data. Eventually, the hope is to create a system where users could even apply their own photos onto a 3D background.
Currently, the team is working to create a cityscape database of scans for the city of Lexington, KY, with scans gathered over a full twenty-four-hour period. With this additional element of time, the scans become four-dimensional, allowing for the simulation of day and night in the 3D colorized images. Once completed, the database will be shared to enable research beyond the fields of graphics and imaging, with possible applications in data compression, transmission, visualization, indexing and retrieval, and computational geometry.
But for now, “we just focus on getting high quality models,” says Dr. Yang. “Getting the geometry correct is the first step to getting realistic 3D visualization.”
Archaeologists have used digital photography to document ancient findings for many years now. But a group from the University of Kentucky Vis Center (Center for Visualization and Virtual Environments) is using Structured Light Illumination (SLI) to gather 3-dimensional data on such artifacts, allowing for scientific measurement.
In September 2010, Blazie Professor Dr. Larry Hassebrook, Bill Gregory, and graduate student Eli Crane joined cave specialists and Transylvania University professor Dr. Christopher Begley in exploring a Missouri cave to capture 3D scans of human footprints, bear paw prints, and cave art, all believed to date from the mid-1400s.
Carbon dating indicates that the cave was sealed off around 1435 A.D., perhaps by a cave-in. In 1985, it was reopened by a natural sinkhole. To ensure the protection of the cave’s rare artifacts, the landowners granted generous access for scientific exploration and study on the condition that the cave’s location be kept secret.
The group entered the cave by rappelling in and then lowering equipment by rope. Dr. Hassebrook’s Vis Center team brought their extensive experience with SLI research and development, as well as their mobile SLI scanner, which is battery-operated for remote mixed-resolution scanning without the need for a generator. The team worked at two sites in the cave: one with the human and bear prints, and the ‘art gallery’ with the cave art.
The expedition was a complete success, collecting more than a dozen 3D scans of the prints and artwork. Each scan produces a 3D point cloud of more than 2 million points as well as an 18-million-pixel color image. These can be combined into a single color 3D scan. Attaching 3D coordinates to every pixel allows for scientific measurement of the data.
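Once every pixel carries real-world 3D coordinates, a measurement such as the length of a footprint reduces to a Euclidean distance between two picked points. A trivial sketch, with hypothetical heel and toe coordinates in millimetres (the numbers are invented for illustration, not from the cave data):

```python
import math

def distance_mm(p, q):
    """Euclidean distance between two scanned 3D points (coords in mm)."""
    return math.dist(p, q)

# Hypothetical heel and toe points picked from a footprint scan (mm):
heel = (12.0, 4.5, 103.2)
toe = (252.0, 9.5, 98.2)
length = distance_mm(heel, toe)   # about 240 mm
```

Because the coordinates live in the scanner’s metric frame, such measurements can be made long after the expedition, without re-entering the cave.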
Dr. Hassebrook has already used SLI scanning technology in Honduras, Kentucky, and Spain, as well as for various laboratory scans. The mobile mixed-resolution SLI scanner shows great potential for further data acquisition of in situ archaeological artifacts in remote or sensitive areas.