Multimedia Information Analysis (MIA) Laboratory

The mission of MIA Lab is to build efficient, robust and secure systems to analyze, process and communicate multimedia information. The director of the MIA Lab is Dr. Sen-ching Samson Cheung.

We are part of the Center for Visualization & Virtual Environments and the Department of Electrical and Computer Engineering at the University of Kentucky. Our laboratory is located on the second floor of the Davis Marksbury Building, a $16.6M facility in full operation since December 2010. Approximately 65% of the facility is devoted to faculty labs; the remaining space houses offices, demonstration space, conference rooms, and workshops.

The MIA Lab has access to over 20 Windows and Mac workstations, many of them equipped with the latest GPUs for intensive computation. In addition, the Davis Marksbury Building houses and supports two 32-node Linux clusters connected by a gigabit network. These clusters serve as the base for a number of research tools we have developed to support distributed computing over multimedia data, including distributed sensors (cameras, speakers, microphones). The Center also maintains two centralized redundant disk servers (5 terabytes) for data and media storage. Each researcher is provided with basic computing infrastructure, laboratory support for equipment, and software/hardware maintenance.

The MIA Lab maintains a wide range of sensing equipment, including:

  1. An RGB camera network including 4 Point Grey Firefly cameras, 8 Unibrain Fire-i400 color industrial cameras, 2 Axis 211W network cameras, and 1 Vivotek PZ6114 pan-tilt-zoom network camera.
  2. A GPS-synchronized RGB-depth-thermal camera network including 3 Swiss-Ranger SR-3000 miniature 3D ToF range cameras, 15 Microsoft Kinect cameras, 1 ICI Centurion thermal imager, and 2 HTC Vive virtual reality systems.
  3. An embedded sensor network with 5 CITRIC visual sensor motes, 20 IEEE 802.15.4 TelosB motes, 10 EasySen SBT80 multi-modality sensor boards, and 1 Q-Track QT-400 active RFID tracking system.

In addition, we have access to several pieces of higher-cost infrastructure available in the Davis Marksbury Building, including: a passive eye-tracking system ($50k) with software and custom capture code for integration with particular experimental environments; a FARO non-contact 3D probe with an eight-foot wingspan ($80k); a Vicon eight-camera motion capture system ($60k); and an eighty-projector, 40-megapixel display system ($120k).

Real-time 3D scanning of our lab with five RGB-Depth cameras

  • Latest News

    Jan 23, 2018: Our collaborative paper with alum Hasan Sajid on counting people in dense crowd images has been accepted to TCSVT.

    Dec 19, 2017: Our paper on camera networks (first author: Po-chang Su) has been accepted to Sensors.

    Nov 17, 2017: Congratulations to Po-chang for successfully defending his doctoral dissertation! Way to go, Dr. Su.

    Oct 24, 2017: Our paper on MeBook (first author: Nkiruka Uzuegbunam) has been accepted to IEEE Transactions on Learning Technologies.

    Oct 18, 2017: Dr. Cheung gave a keynote speech on Multimedia and Autism at IEEE Multimedia Signal Processing Workshop.

    Oct 4, 2017: Our paper on Wearable Visual Privacy (first author: Shaoqian Wang) has been accepted to IEEE Consumer Electronics Magazine.

    June 1, 2017: Our collaborative paper with alum Ying Luo on an anonymous video surveillance system has been accepted to the International Journal of Information Security.