The University of Kentucky Center for Visualization & Virtual Environments

Turning the Light on Building Design

Designing energy-efficient buildings that are both functional and attractive raises the question of how people adjust to a building that is adapting to them.

The Vis Center Unveils Two New Apps

We work all day, but what do we do? In this busy world, we constantly search for ways to increase productivity and reduce our workload. However, we cannot blindly accept that a new device, display, or training system makes a task easier to complete. Measuring cognitive workload is critical in a busy, multitasking world: raising the job performance of a surgeon or pilot, for instance, can save lives. To assess cognitive workload, researchers use three gauges: physiological indicators, subjective workload, and cognitive reserve. Taking physiological measurements, such as skin conductance, often poses logistical roadblocks. Researchers therefore focus on collecting subjective workload, or what people think about a task’s workload, and cognitive reserve, or how much of a user’s effort remains for another task. The Vis Tools apps focus on assessing subjective workload and cognitive reserve. One app provides a convenient way to administer the NASA Task Load Index. The other collects data about cognitive reserve by measuring a person’s perception of elapsed time while performing a task.

The Vis Tools: Tempo app in action.

Workload measurements check the effectiveness of training or display systems. Researchers at the Vis Center, Dr. Melody Carswell and Dr. Brent Seales, worked with Dr. Stephen Strup, the Chief of Urology at the University of Kentucky, to test their research in the operating room. Laparoscopic surgery, a type of minimally invasive surgery, stresses surgeons because they cannot rely on tactile feedback and see only a small sliver of the patient’s anatomy during surgery, yet they operate in a high-stakes realm. The Vis Center’s STITCH project developed ways to make laparoscopy less stressful. For example, the team built a dual-display system: one screen shows a computer-generated image of the target organ and the relative location of the surgical scope, while the second screen shows the surgeon’s camera-scope view. Researchers still need to evaluate an innovation’s usefulness: does it help a surgeon perform better on the job? To answer that question, the researchers developed a STITCH toolkit, including the new apps, to give fellow researchers the opportunity to assess innovations for further research.

Vis Tools: TLX

Vis Tools: TLX moves the NASA Task Load Index (TLX) beyond the research lab and into the field. Since the 1980s, the NASA TLX has been the most common tool for measuring subjective workload. NASA originally designed it for aerospace system design and evaluation, but the TLX’s popularity took off; googling “the NASA TLX” returns over 80,000 results in multiple languages. Researchers use the TLX to compute the subjective workload of everything from performing surgery to using a GPS while driving. To quantify subjective workload, the NASA TLX combines six variables: mental demand, physical demand, temporal demand, frustration, effort, and performance. The Vis Tools: TLX app takes a great tool for researching human factors and makes it easy to use with an iPad or iPhone, collecting data in the field and then exporting those data. Dr. Carswell, one of the project’s heads, explained, “You don’t have to be in a lab, and anybody can do it.” The Vis Tools: TLX app landed in the App Store even before NASA made its own TLX app; NASA’s website says it will release an iPhone version in the near future.
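
To make the arithmetic concrete, here is a minimal Python sketch of how a weighted TLX score is conventionally computed from the six subscale ratings and the pairwise-comparison weights. It illustrates the standard procedure only and is not the app’s own code; the ratings shown are hypothetical.

    SCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

    def tlx_score(ratings, pairwise_winners):
        """ratings: dict of scale -> rating (0-100); pairwise_winners: 15 scale names,
        one per pairwise comparison, naming the dimension judged more important."""
        if len(pairwise_winners) != 15:
            raise ValueError("expected 15 pairwise comparisons")
        weights = {s: pairwise_winners.count(s) for s in SCALES}  # each weight is 0-5
        # Weighted mean: each subscale contributes in proportion to its tally.
        return sum(ratings[s] * weights[s] for s in SCALES) / 15.0

    # Hypothetical post-task questionnaire.
    ratings = {"mental": 70, "physical": 20, "temporal": 55,
               "performance": 40, "effort": 65, "frustration": 50}
    winners = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3 +
               ["frustration"] * 2 + ["performance"])
    print(tlx_score(ratings, winners))  # -> 61.0, overall workload on a 0-100 scale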

Vis Tools: Tempo

Cognitive reserve is the difference between the workload of a primary task and a person’s total mental capacity. It is measured by giving a person a second task to perform simultaneously with the primary one; how well the person performs the secondary task gauges the demands of the primary task. Vis Tools: Tempo measures an individual’s cognitive reserve. Armed with the Tempo app, researchers can give their subjects a pre-designed, easy-to-use secondary task: producing time intervals without a clock. For example, a person estimates five-second intervals and touches the iPad every five seconds. The app prompts a person to produce a time interval, and the experimenter can change the length of the interval to best fit the experiment. Dr. Carswell wanted a secondary task that would not be too distracting for subjects to perform alongside a primary task. The researchers expected that as the primary task’s workload increases, it distracts subjects and they produce longer, more variable intervals. Imagine driving while talking on your phone. On a straight, empty road, you easily hold a conversation. However, calling somebody while in traffic and switching lanes on a treacherous road changes the story. The Tempo app works the same way: as the primary task’s difficulty increases, the subject produces time intervals that vary more and more.
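
As an illustration of the kind of analysis the Tempo data supports, the Python sketch below (ours, not the app’s code) summarizes a run of taps as the mean produced interval, its relative variability, and its bias from the target interval; the tap times are hypothetical.

    from statistics import mean, stdev

    def interval_summary(taps, target=5.0):
        """Summarize time-production performance from a list of tap times (seconds).

        Returns the mean produced interval, its coefficient of variation, and the
        bias from the target; harder primary tasks tend to push all three up."""
        intervals = [later - earlier for earlier, later in zip(taps, taps[1:])]
        m = mean(intervals)
        cv = stdev(intervals) / m if len(intervals) > 1 else 0.0
        return m, cv, m - target

    # Hypothetical tap times on an easy versus a demanding drive.
    easy = [0.0, 5.1, 10.1, 15.2, 20.2, 25.3]
    hard = [0.0, 5.6, 12.0, 16.4, 24.1, 29.0]
    print(interval_summary(easy))
    print(interval_summary(hard))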

Vis Tools apps make conducting workload research easier. Quantifying a task’s difficulty allows researchers to demonstrate whether new interfaces ease workload. So, the next time your teen is talking on the phone while driving, perhaps a user interface designed with Vis Tools will help keep them safer. Vis Tools will help us multitask in our dizzying world.

The Vis Center dipped its toes in the App Store before, with the VisCenter app and the Imaging the Iliad app. Download the Vis Tools: TLX app and the Vis Tools: Tempo app for free on iTunes.

Read Dr. Carswell’s article here
This manuscript will be published in Ergonomics in Design: The Quarterly of Human Factors Applications, a journal of the Human Factors and Ergonomics Society (hfes.org).

ImageNet 3D Research

A group of army commanders sits in a conference room in Washington, D.C., a world away from a battlefield in Afghanistan. Despite the vast physical distance between soldiers in combat and these army commanders, the commanding officers can oversee military action using unmanned sensors. Although this sounds like a movie scene, the ImageNet project at the Vis Center is making it a reality. Researchers from across the University collaborate on ImageNet, with the final goal of improving the Army’s ability to prepare for battle through enhanced data gathering, imaging, and display technologies. The project involves software engineering, networking, computer vision, aerial image gathering through remote-controlled planes, and graphics. ImageNet breaks down barriers between disciplines to create technology that could never be developed within a single field.

Two vehicles will gather data: a LIDAR truck and an unmanned aerial vehicle. For example, a truck carrying a LIDAR scanner could drive through a residential neighborhood in Afghanistan, collecting 3D points as it goes. The truck is also equipped with a “ladybug” camera that takes 360-degree photos. Because the truck’s cameras cannot capture the view from above, unmanned aerial vehicles (UAVs) will fly over the area taking aerial photographs. A control station will collect the data, apply algorithms to extract meaning, and then pass the meaningful material to a distant control center. The algorithms will automatically extract information such as suspicious activity or changes in the area. ImageNet will present commanders in the remote theater with sifted information about the battle zone, so they stay informed and can quickly deploy resources.
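
The data flow described above can be summarized schematically. The Python sketch below is purely illustrative, with made-up class and function names rather than ImageNet’s actual software: sensors produce raw captures, a control station runs extraction algorithms, and only the distilled findings travel on to commanders.

    from dataclasses import dataclass, field

    @dataclass
    class Capture:
        source: str      # "lidar_truck", "ladybug_camera", or "uav"
        payload: object  # raw 3D points or images

    @dataclass
    class ControlStation:
        detectors: list = field(default_factory=list)  # e.g., change or activity detectors

        def process(self, captures):
            findings = []
            for capture in captures:
                for detect in self.detectors:
                    findings.extend(detect(capture))
            return findings  # only distilled findings travel onward

    def forward_to_command(findings):
        # Stand-in for the link to commanders in the remote theater.
        for finding in findings:
            print("REPORT:", finding)

    # Usage: wire a detector into the station and push sensor captures through it.
    station = ControlStation(detectors=[lambda c: [f"activity near {c.source}"]])
    forward_to_command(station.process([Capture("uav", payload=None)]))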

With the system gathering so much data, an automatic pipeline is needed to reduce the manual workload; a human could never pick through the millions of points in a point cloud. Making sense of the 3D data is the job of Dr. Ruigang Yang, who heads the 3D vision team at UK. Dr. Yang uses the 2D images and 3D point clouds to create 3D semantic models, focusing mainly on man-made buildings at the moment. With a semantic model, a computer identifies an object as a house, a wall, a roof, and so on. “That is a classic computer vision problem – how to get a computer to understand what it sees,” Dr. Yang explained. Computer scientists have chipped away at this dilemma for years with limited success, but Dr. Yang remains hopeful. Because the 3D vision portion of ImageNet focuses on a specific class of objects, man-made buildings, he believes his team will succeed.
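
To illustrate what a semantic model adds beyond raw coordinates, here is a toy Python sketch that assigns each point of a building scan a label such as ground, wall, or roof from simple height and surface-normal heuristics. It is a deliberately simplified illustration, not Dr. Yang’s algorithm.

    import numpy as np

    def label_building_points(points, normals, ground_z=0.0):
        """Toy semantic labeling of a building scan (illustrative only).

        points:  (N, 3) array of x, y, z coordinates (meters)
        normals: (N, 3) array of unit surface normals
        Returns an (N,) array of labels: "ground", "wall", or "roof"."""
        labels = np.empty(len(points), dtype=object)
        upward = np.abs(normals[:, 2])            # |z| of the normal: 1 means a horizontal surface
        near_ground = points[:, 2] < ground_z + 0.2
        labels[near_ground] = "ground"
        # Walls are elevated surfaces whose normals point sideways; roofs face up.
        labels[~near_ground & (upward < 0.3)] = "wall"
        labels[~near_ground & (upward >= 0.3)] = "roof"
        return labels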

ImageNet has many applications in both war and peacetime. A deployed sensor takes so many photos of an area that a commander could not look at each one, so researchers must develop a way to filter the inundation of information and let officers focus on what is useful. The plan is an automatic, computer-driven pipeline that processes semantic information. The computer sorts information by knowing what the points show: when the LIDAR sensor creates a 3D point cloud of a house, the computer goes beyond recording the points’ locations and analyzes the cloud until it understands that the points form a house.

ImageNet is also useful for meaningful situation awareness. For example, laser scanning can spot small problems on highways, such as surface cracks or a roadway that slopes in the wrong direction. The UK 3D team used the LIDAR truck to scan Nada Tunnel in the Red River Gorge; in the future, they will scan the tunnel again to see whether its structure has changed. ImageNet can monitor a site that is difficult to inspect, such as a tunnel, and catch structural changes early.
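
One simple way to flag such changes between two scans, assuming they have already been aligned to the same coordinate frame, is a nearest-neighbor comparison like the Python sketch below. It is an illustrative approach, not necessarily the method the UK 3D team uses.

    import numpy as np
    from scipy.spatial import cKDTree

    def changed_points(scan_old, scan_new, threshold=0.05):
        """Return points in the new scan lying farther than `threshold` meters from
        every point in the old scan, a crude proxy for structural change.

        Both scans are (N, 3) arrays assumed to be registered in the same frame."""
        distances, _ = cKDTree(scan_old).query(scan_new, k=1)
        return scan_new[distances > threshold]
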
ImageNet expands the resources available to decision-makers. An army commander would know everything of significance happening on a battlefield, the President would see the damage resulting from a natural disaster, and an urban planner would have an accurate model of a proposed development. Researchers at the Vis Center work on a wide variety of projects, but they all share one common goal: seeing the world in a new way. ImageNet will change the way we see our world by delivering the information about a situation that matters.
